Mar 3 13:40:24.323570 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 3 10:59:45 -00 2026
Mar 3 13:40:24.323603 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c
Mar 3 13:40:24.323615 kernel: BIOS-provided physical RAM map:
Mar 3 13:40:24.323629 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 3 13:40:24.323637 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 3 13:40:24.323646 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 3 13:40:24.323657 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 3 13:40:24.323667 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 3 13:40:24.323675 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 3 13:40:24.323685 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 3 13:40:24.323694 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Mar 3 13:40:24.323703 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 3 13:40:24.323718 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 3 13:40:24.323729 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 3 13:40:24.323739 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 3 13:40:24.323748 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 3 13:40:24.323756 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 3 13:40:24.323769 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 3 13:40:24.323778 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 3 13:40:24.323788 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 3 13:40:24.323799 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 3 13:40:24.323809 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 3 13:40:24.323957 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 3 13:40:24.323969 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 3 13:40:24.323978 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 3 13:40:24.323988 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 3 13:40:24.323998 kernel: NX (Execute Disable) protection: active
Mar 3 13:40:24.324008 kernel: APIC: Static calls initialized
Mar 3 13:40:24.324023 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Mar 3 13:40:24.324033 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Mar 3 13:40:24.324125 kernel: extended physical RAM map:
Mar 3 13:40:24.324137 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 3 13:40:24.324146 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 3 13:40:24.324156 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 3 13:40:24.324166 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 3 13:40:24.324175 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 3 13:40:24.324185 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 3 13:40:24.324195 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 3 13:40:24.324205 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Mar 3 13:40:24.324219 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Mar 3 13:40:24.324235 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Mar 3 13:40:24.324245 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Mar 3 13:40:24.324255 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Mar 3 13:40:24.324266 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 3 13:40:24.324279 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 3 13:40:24.324289 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 3 13:40:24.324300 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 3 13:40:24.324311 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 3 13:40:24.324321 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 3 13:40:24.324332 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 3 13:40:24.324342 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 3 13:40:24.324353 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 3 13:40:24.324364 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 3 13:40:24.324375 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 3 13:40:24.324385 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 3 13:40:24.324399 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 3 13:40:24.324410 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 3 13:40:24.324420 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 3 13:40:24.324431 kernel: efi: EFI v2.7 by EDK II
Mar 3 13:40:24.324442 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Mar 3 13:40:24.324452 kernel: random: crng init done
Mar 3 13:40:24.324463 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 3 13:40:24.324474 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 3 13:40:24.324484 kernel: secureboot: Secure boot disabled
Mar 3 13:40:24.324495 kernel: SMBIOS 2.8 present.
Mar 3 13:40:24.324505 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 3 13:40:24.324521 kernel: DMI: Memory slots populated: 1/1
Mar 3 13:40:24.324531 kernel: Hypervisor detected: KVM
Mar 3 13:40:24.324542 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 3 13:40:24.324552 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 3 13:40:24.324563 kernel: kvm-clock: using sched offset of 14086573272 cycles
Mar 3 13:40:24.324574 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 3 13:40:24.324585 kernel: tsc: Detected 2445.426 MHz processor
Mar 3 13:40:24.324596 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 3 13:40:24.324607 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 3 13:40:24.324618 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 3 13:40:24.335141 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 3 13:40:24.335181 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 3 13:40:24.335193 kernel: Using GB pages for direct mapping
Mar 3 13:40:24.335203 kernel: ACPI: Early table checksum verification disabled
Mar 3 13:40:24.335216 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 3 13:40:24.335226 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 3 13:40:24.335236 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:40:24.335245 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:40:24.335254 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 3 13:40:24.335264 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:40:24.335278 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:40:24.335289 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:40:24.335299 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:40:24.335309 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 3 13:40:24.335319 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 3 13:40:24.335330 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 3 13:40:24.335340 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 3 13:40:24.335350 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 3 13:40:24.335363 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 3 13:40:24.335375 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 3 13:40:24.335387 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 3 13:40:24.335398 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 3 13:40:24.335405 kernel: No NUMA configuration found
Mar 3 13:40:24.335412 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Mar 3 13:40:24.335419 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Mar 3 13:40:24.335426 kernel: Zone ranges:
Mar 3 13:40:24.335434 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 3 13:40:24.335441 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Mar 3 13:40:24.335451 kernel: Normal empty
Mar 3 13:40:24.335458 kernel: Device empty
Mar 3 13:40:24.335465 kernel: Movable zone start for each node
Mar 3 13:40:24.335471 kernel: Early memory node ranges
Mar 3 13:40:24.335478 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 3 13:40:24.335490 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 3 13:40:24.335502 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 3 13:40:24.335513 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 3 13:40:24.335522 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Mar 3 13:40:24.335531 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Mar 3 13:40:24.335545 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Mar 3 13:40:24.335554 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Mar 3 13:40:24.335566 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Mar 3 13:40:24.335577 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 3 13:40:24.335598 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 3 13:40:24.335611 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 3 13:40:24.335622 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 3 13:40:24.335635 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 3 13:40:24.335646 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 3 13:40:24.335656 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 3 13:40:24.335666 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 3 13:40:24.335680 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Mar 3 13:40:24.335691 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 3 13:40:24.335702 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 3 13:40:24.335716 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 3 13:40:24.335727 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 3 13:40:24.335742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 3 13:40:24.335752 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 3 13:40:24.335761 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 3 13:40:24.335772 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 3 13:40:24.335784 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 3 13:40:24.335796 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 3 13:40:24.335805 kernel: TSC deadline timer available
Mar 3 13:40:24.336140 kernel: CPU topo: Max. logical packages: 1
Mar 3 13:40:24.336159 kernel: CPU topo: Max. logical dies: 1
Mar 3 13:40:24.336170 kernel: CPU topo: Max. dies per package: 1
Mar 3 13:40:24.336187 kernel: CPU topo: Max. threads per core: 1
Mar 3 13:40:24.336199 kernel: CPU topo: Num. cores per package: 4
Mar 3 13:40:24.336212 kernel: CPU topo: Num. threads per package: 4
Mar 3 13:40:24.336224 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 3 13:40:24.336236 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 3 13:40:24.336248 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 3 13:40:24.336260 kernel: kvm-guest: setup PV sched yield
Mar 3 13:40:24.336272 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 3 13:40:24.336284 kernel: Booting paravirtualized kernel on KVM
Mar 3 13:40:24.336300 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 3 13:40:24.336313 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 3 13:40:24.336325 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 3 13:40:24.336337 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 3 13:40:24.336350 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 3 13:40:24.336361 kernel: kvm-guest: PV spinlocks enabled
Mar 3 13:40:24.336373 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 3 13:40:24.336387 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c
Mar 3 13:40:24.336403 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 3 13:40:24.336416 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 3 13:40:24.336428 kernel: Fallback order for Node 0: 0
Mar 3 13:40:24.336440 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Mar 3 13:40:24.336452 kernel: Policy zone: DMA32
Mar 3 13:40:24.336464 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 3 13:40:24.336476 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 3 13:40:24.336488 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 3 13:40:24.336500 kernel: ftrace: allocated 157 pages with 5 groups
Mar 3 13:40:24.336516 kernel: Dynamic Preempt: voluntary
Mar 3 13:40:24.336528 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 3 13:40:24.336542 kernel: rcu: RCU event tracing is enabled.
Mar 3 13:40:24.336554 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 3 13:40:24.336566 kernel: Trampoline variant of Tasks RCU enabled.
Mar 3 13:40:24.336579 kernel: Rude variant of Tasks RCU enabled.
Mar 3 13:40:24.336590 kernel: Tracing variant of Tasks RCU enabled.
Mar 3 13:40:24.336602 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 3 13:40:24.336615 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 3 13:40:24.336630 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 3 13:40:24.336642 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 3 13:40:24.336654 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 3 13:40:24.336666 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 3 13:40:24.336679 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 3 13:40:24.336691 kernel: Console: colour dummy device 80x25
Mar 3 13:40:24.336703 kernel: printk: legacy console [ttyS0] enabled
Mar 3 13:40:24.336716 kernel: ACPI: Core revision 20240827
Mar 3 13:40:24.336728 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 3 13:40:24.336741 kernel: APIC: Switch to symmetric I/O mode setup
Mar 3 13:40:24.336751 kernel: x2apic enabled
Mar 3 13:40:24.336761 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 3 13:40:24.336771 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 3 13:40:24.336782 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 3 13:40:24.336795 kernel: kvm-guest: setup PV IPIs
Mar 3 13:40:24.336806 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 3 13:40:24.336964 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 3 13:40:24.336982 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 3 13:40:24.337001 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 3 13:40:24.337012 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 3 13:40:24.337022 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 3 13:40:24.337032 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 3 13:40:24.337127 kernel: Spectre V2 : Mitigation: Retpolines
Mar 3 13:40:24.337142 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 3 13:40:24.337155 kernel: Speculative Store Bypass: Vulnerable
Mar 3 13:40:24.337167 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 3 13:40:24.337178 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 3 13:40:24.337193 kernel: active return thunk: srso_alias_return_thunk
Mar 3 13:40:24.337203 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 3 13:40:24.337213 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 3 13:40:24.337224 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 3 13:40:24.337235 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 3 13:40:24.337247 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 3 13:40:24.337258 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 3 13:40:24.337270 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 3 13:40:24.337287 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 3 13:40:24.337298 kernel: Freeing SMP alternatives memory: 32K
Mar 3 13:40:24.337310 kernel: pid_max: default: 32768 minimum: 301
Mar 3 13:40:24.337323 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 3 13:40:24.337335 kernel: landlock: Up and running.
Mar 3 13:40:24.337345 kernel: SELinux: Initializing.
Mar 3 13:40:24.337355 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 3 13:40:24.337365 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 3 13:40:24.337375 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 3 13:40:24.337391 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 3 13:40:24.337403 kernel: signal: max sigframe size: 1776
Mar 3 13:40:24.337415 kernel: rcu: Hierarchical SRCU implementation.
Mar 3 13:40:24.337428 kernel: rcu: Max phase no-delay instances is 400.
Mar 3 13:40:24.337438 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 3 13:40:24.337448 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 3 13:40:24.337458 kernel: smp: Bringing up secondary CPUs ...
Mar 3 13:40:24.337467 kernel: smpboot: x86: Booting SMP configuration:
Mar 3 13:40:24.337478 kernel: .... node #0, CPUs: #1 #2 #3
Mar 3 13:40:24.337494 kernel: smp: Brought up 1 node, 4 CPUs
Mar 3 13:40:24.337506 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 3 13:40:24.337518 kernel: Memory: 2414476K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145388K reserved, 0K cma-reserved)
Mar 3 13:40:24.337530 kernel: devtmpfs: initialized
Mar 3 13:40:24.337541 kernel: x86/mm: Memory block size: 128MB
Mar 3 13:40:24.337553 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 3 13:40:24.337566 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 3 13:40:24.337578 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 3 13:40:24.337589 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 3 13:40:24.337604 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Mar 3 13:40:24.337614 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 3 13:40:24.337624 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 3 13:40:24.337634 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 3 13:40:24.337646 kernel: pinctrl core: initialized pinctrl subsystem
Mar 3 13:40:24.337658 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 3 13:40:24.337670 kernel: audit: initializing netlink subsys (disabled)
Mar 3 13:40:24.337683 kernel: audit: type=2000 audit(1772545213.026:1): state=initialized audit_enabled=0 res=1
Mar 3 13:40:24.337696 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 3 13:40:24.337710 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 3 13:40:24.337720 kernel: cpuidle: using governor menu
Mar 3 13:40:24.338024 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 3 13:40:24.338134 kernel: dca service started, version 1.12.1
Mar 3 13:40:24.338151 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Mar 3 13:40:24.338162 kernel: PCI: Using configuration type 1 for base access
Mar 3 13:40:24.338172 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 3 13:40:24.338181 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 3 13:40:24.338191 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 3 13:40:24.338208 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 3 13:40:24.338297 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 3 13:40:24.338391 kernel: ACPI: Added _OSI(Module Device)
Mar 3 13:40:24.338401 kernel: ACPI: Added _OSI(Processor Device)
Mar 3 13:40:24.338411 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 3 13:40:24.338421 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 3 13:40:24.338431 kernel: ACPI: Interpreter enabled
Mar 3 13:40:24.338441 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 3 13:40:24.338453 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 3 13:40:24.338537 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 3 13:40:24.338550 kernel: PCI: Using E820 reservations for host bridge windows
Mar 3 13:40:24.338562 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 3 13:40:24.338573 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 3 13:40:24.339371 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 3 13:40:24.339665 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 3 13:40:24.340011 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 3 13:40:24.340039 kernel: PCI host bridge to bus 0000:00
Mar 3 13:40:24.340784 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 3 13:40:24.341489 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 3 13:40:24.341686 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 3 13:40:24.342020 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 3 13:40:24.342303 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 3 13:40:24.342489 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 3 13:40:24.342699 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 3 13:40:24.343616 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 3 13:40:24.344187 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 3 13:40:24.344405 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Mar 3 13:40:24.344601 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Mar 3 13:40:24.344792 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 3 13:40:24.345242 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 3 13:40:24.345447 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 10742 usecs
Mar 3 13:40:24.345655 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 3 13:40:24.346005 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Mar 3 13:40:24.346429 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Mar 3 13:40:24.346622 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 3 13:40:24.346984 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 3 13:40:24.347287 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Mar 3 13:40:24.347478 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Mar 3 13:40:24.347669 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 3 13:40:24.348147 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 3 13:40:24.348347 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Mar 3 13:40:24.348538 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Mar 3 13:40:24.348725 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 3 13:40:24.349187 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Mar 3 13:40:24.349396 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 3 13:40:24.349586 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 3 13:40:24.350030 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 3 13:40:24.350317 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Mar 3 13:40:24.350506 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Mar 3 13:40:24.350788 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 3 13:40:24.351239 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Mar 3 13:40:24.351260 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 3 13:40:24.351273 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 3 13:40:24.351284 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 3 13:40:24.351294 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 3 13:40:24.351304 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 3 13:40:24.351314 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 3 13:40:24.351326 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 3 13:40:24.351345 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 3 13:40:24.351355 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 3 13:40:24.351365 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 3 13:40:24.351376 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 3 13:40:24.351385 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 3 13:40:24.351398 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 3 13:40:24.351411 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 3 13:40:24.351420 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 3 13:40:24.351430 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 3 13:40:24.351445 kernel: iommu: Default domain type: Translated
Mar 3 13:40:24.351455 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 3 13:40:24.351468 kernel: efivars: Registered efivars operations
Mar 3 13:40:24.351480 kernel: PCI: Using ACPI for IRQ routing
Mar 3 13:40:24.351490 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 3 13:40:24.351501 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 3 13:40:24.351510 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Mar 3 13:40:24.351520 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Mar 3 13:40:24.351532 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Mar 3 13:40:24.351548 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Mar 3 13:40:24.351558 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Mar 3 13:40:24.351568 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Mar 3 13:40:24.351579 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Mar 3 13:40:24.351767 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 3 13:40:24.352217 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 3 13:40:24.352410 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 3 13:40:24.352425 kernel: vgaarb: loaded
Mar 3 13:40:24.352446 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 3 13:40:24.352458 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 3 13:40:24.352468 kernel: clocksource: Switched to clocksource kvm-clock
Mar 3 13:40:24.352478 kernel: VFS: Disk quotas dquot_6.6.0
Mar 3 13:40:24.352489 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 3 13:40:24.352499 kernel: pnp: PnP ACPI init
Mar 3 13:40:24.352787 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 3 13:40:24.352808 kernel: pnp: PnP ACPI: found 6 devices
Mar 3 13:40:24.353196 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 3 13:40:24.353209 kernel: NET: Registered PF_INET protocol family
Mar 3 13:40:24.353220 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 3 13:40:24.353230 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 3 13:40:24.353241 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 3 13:40:24.353279 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 3 13:40:24.353294 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 3 13:40:24.353304 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 3 13:40:24.353315 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 3 13:40:24.353332 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 3 13:40:24.353345 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 3 13:40:24.353355 kernel: NET: Registered PF_XDP protocol family
Mar 3 13:40:24.353557 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Mar 3 13:40:24.353749 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Mar 3 13:40:24.354186 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 3 13:40:24.354368 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 3 13:40:24.354543 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 3 13:40:24.354727 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 3 13:40:24.355219 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 3 13:40:24.355399 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 3 13:40:24.355415 kernel: PCI: CLS 0 bytes, default 64
Mar 3 13:40:24.355427 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 3 13:40:24.355443 kernel: Initialise system trusted keyrings
Mar 3 13:40:24.355456 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 3 13:40:24.355469 kernel: Key type asymmetric registered
Mar 3 13:40:24.355480 kernel: Asymmetric key parser 'x509' registered
Mar 3 13:40:24.355495 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 3 13:40:24.355506 kernel: io scheduler mq-deadline registered
Mar 3 13:40:24.355516 kernel: io scheduler kyber registered
Mar 3 13:40:24.355529 kernel: io scheduler bfq registered
Mar 3 13:40:24.355541 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 3 13:40:24.355553 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 3 13:40:24.355564 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 3 13:40:24.355579 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 3 13:40:24.355592 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 3 13:40:24.355605 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 3 13:40:24.355615 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 3 13:40:24.355625 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 3 13:40:24.355636 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 3 13:40:24.356138 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 3 13:40:24.356164 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 3 13:40:24.356360 kernel: rtc_cmos 00:04: registered as rtc0
Mar 3 13:40:24.356544 kernel: rtc_cmos 00:04: setting system clock to 2026-03-03T13:40:22 UTC (1772545222)
Mar 3 13:40:24.356724 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 3 13:40:24.356740 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 3 13:40:24.356754 kernel: efifb: probing for efifb
Mar 3 13:40:24.356767 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 3 13:40:24.356777 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 3 13:40:24.356793 kernel: efifb: scrolling: redraw
Mar 3 13:40:24.356803 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 3 13:40:24.356971 kernel: Console: switching to colour frame buffer device 160x50
Mar 3 13:40:24.356988 kernel: fb0: EFI VGA frame buffer device
Mar 3 13:40:24.356999 kernel: pstore: Using crash dump compression: deflate
Mar 3 13:40:24.357009 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 3 13:40:24.357020 kernel: NET: Registered PF_INET6 protocol family
Mar 3 13:40:24.357030 kernel: Segment Routing with IPv6
Mar 3 13:40:24.357040 kernel: In-situ OAM (IOAM) with IPv6
Mar 3 13:40:24.357151 kernel: NET: Registered PF_PACKET protocol family
Mar 3 13:40:24.357163 kernel: Key type dns_resolver registered
Mar 3 13:40:24.357173 kernel: IPI shorthand broadcast: enabled
Mar 3 13:40:24.357183 kernel: sched_clock: Marking stable (8886081951, 1147320541)->(10798400984, -764998492)
Mar 3 13:40:24.357193 kernel: registered taskstats version 1
Mar 3 13:40:24.357203 kernel: Loading compiled-in X.509 certificates
Mar 3 13:40:24.357217 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: bf135b2a3d3664cc6742f4e1848867384c1e52f1'
Mar 3 13:40:24.357229 kernel: Demotion targets for Node 0: null
Mar 3 13:40:24.357240 kernel: Key type .fscrypt registered
Mar 3 13:40:24.357254 kernel: Key type fscrypt-provisioning registered
Mar 3 
13:40:24.357264 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 3 13:40:24.357275 kernel: ima: Allocated hash algorithm: sha1 Mar 3 13:40:24.357289 kernel: ima: No architecture policies found Mar 3 13:40:24.357301 kernel: clk: Disabling unused clocks Mar 3 13:40:24.357311 kernel: Warning: unable to open an initial console. Mar 3 13:40:24.357322 kernel: Freeing unused kernel image (initmem) memory: 46200K Mar 3 13:40:24.357332 kernel: Write protecting the kernel read-only data: 40960k Mar 3 13:40:24.357344 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Mar 3 13:40:24.357362 kernel: Run /init as init process Mar 3 13:40:24.357372 kernel: with arguments: Mar 3 13:40:24.357383 kernel: /init Mar 3 13:40:24.357393 kernel: with environment: Mar 3 13:40:24.357403 kernel: HOME=/ Mar 3 13:40:24.357416 kernel: TERM=linux Mar 3 13:40:24.357434 systemd[1]: Successfully made /usr/ read-only. Mar 3 13:40:24.357449 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 3 13:40:24.357465 systemd[1]: Detected virtualization kvm. Mar 3 13:40:24.357477 systemd[1]: Detected architecture x86-64. Mar 3 13:40:24.357491 systemd[1]: Running in initrd. Mar 3 13:40:24.357502 systemd[1]: No hostname configured, using default hostname. Mar 3 13:40:24.357514 systemd[1]: Hostname set to <localhost>. Mar 3 13:40:24.357524 systemd[1]: Initializing machine ID from VM UUID. Mar 3 13:40:24.357535 systemd[1]: Queued start job for default target initrd.target. Mar 3 13:40:24.357549 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 3 13:40:24.357566 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 3 13:40:24.357578 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 3 13:40:24.357590 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 3 13:40:24.357601 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 3 13:40:24.357616 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 3 13:40:24.357630 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 3 13:40:24.357641 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 3 13:40:24.357656 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 3 13:40:24.357668 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 3 13:40:24.357683 systemd[1]: Reached target paths.target - Path Units. Mar 3 13:40:24.357694 systemd[1]: Reached target slices.target - Slice Units. Mar 3 13:40:24.357705 systemd[1]: Reached target swap.target - Swaps. Mar 3 13:40:24.357716 systemd[1]: Reached target timers.target - Timer Units. Mar 3 13:40:24.357727 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 3 13:40:24.357741 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 3 13:40:24.357758 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 3 13:40:24.357770 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 3 13:40:24.357781 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 3 13:40:24.357792 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Mar 3 13:40:24.357805 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 3 13:40:24.357960 systemd[1]: Reached target sockets.target - Socket Units. Mar 3 13:40:24.357973 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 3 13:40:24.357985 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 3 13:40:24.357996 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 3 13:40:24.358014 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 3 13:40:24.358028 systemd[1]: Starting systemd-fsck-usr.service... Mar 3 13:40:24.358040 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 3 13:40:24.358139 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 3 13:40:24.358151 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:40:24.358162 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 3 13:40:24.358180 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 3 13:40:24.358194 systemd[1]: Finished systemd-fsck-usr.service. Mar 3 13:40:24.358248 systemd-journald[203]: Collecting audit messages is disabled. Mar 3 13:40:24.358290 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 3 13:40:24.358302 systemd-journald[203]: Journal started Mar 3 13:40:24.358328 systemd-journald[203]: Runtime Journal (/run/log/journal/d4c301aaff4448d5ac3d71f77cabfd18) is 6M, max 48.1M, 42.1M free. Mar 3 13:40:24.313123 systemd-modules-load[204]: Inserted module 'overlay' Mar 3 13:40:24.386749 systemd[1]: Started systemd-journald.service - Journal Service. Mar 3 13:40:24.377031 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Mar 3 13:40:24.424732 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 3 13:40:24.457315 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:40:24.462010 systemd-tmpfiles[215]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 3 13:40:24.481771 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 3 13:40:24.510428 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 3 13:40:24.548779 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 3 13:40:24.596298 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 3 13:40:24.605136 kernel: Bridge firewalling registered Mar 3 13:40:24.605370 systemd-modules-load[204]: Inserted module 'br_netfilter' Mar 3 13:40:24.618763 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 3 13:40:24.624223 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 3 13:40:24.676641 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 3 13:40:24.692027 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 3 13:40:24.710798 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 3 13:40:24.756267 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 3 13:40:24.769507 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 3 13:40:24.860758 systemd-resolved[239]: Positive Trust Anchors: Mar 3 13:40:24.861146 systemd-resolved[239]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 3 13:40:24.861191 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 3 13:40:25.069401 kernel: hrtimer: interrupt took 2745645 ns Mar 3 13:40:24.865644 systemd-resolved[239]: Defaulting to hostname 'linux'. Mar 3 13:40:25.080307 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c Mar 3 13:40:24.872660 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 3 13:40:25.002969 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 3 13:40:25.543190 kernel: SCSI subsystem initialized Mar 3 13:40:25.566008 kernel: Loading iSCSI transport class v2.0-870. Mar 3 13:40:25.611215 kernel: iscsi: registered transport (tcp) Mar 3 13:40:25.665280 kernel: iscsi: registered transport (qla4xxx) Mar 3 13:40:25.665361 kernel: QLogic iSCSI HBA Driver Mar 3 13:40:25.738166 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Mar 3 13:40:25.803574 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 3 13:40:25.804764 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 3 13:40:26.044687 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 3 13:40:26.048713 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 3 13:40:26.194394 kernel: raid6: avx2x4 gen() 22327 MB/s Mar 3 13:40:26.216394 kernel: raid6: avx2x2 gen() 18496 MB/s Mar 3 13:40:26.244531 kernel: raid6: avx2x1 gen() 8479 MB/s Mar 3 13:40:26.244619 kernel: raid6: using algorithm avx2x4 gen() 22327 MB/s Mar 3 13:40:26.273542 kernel: raid6: .... xor() 4278 MB/s, rmw enabled Mar 3 13:40:26.273615 kernel: raid6: using avx2x2 recovery algorithm Mar 3 13:40:26.364225 kernel: xor: automatically using best checksumming function avx Mar 3 13:40:26.957153 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 3 13:40:26.977768 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 3 13:40:26.993657 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 3 13:40:27.072278 systemd-udevd[452]: Using default interface naming scheme 'v255'. Mar 3 13:40:27.086687 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 3 13:40:27.109151 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 3 13:40:27.176511 dracut-pre-trigger[454]: rd.md=0: removing MD RAID activation Mar 3 13:40:27.291202 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 3 13:40:27.295361 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 3 13:40:27.958984 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 3 13:40:27.967241 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 3 13:40:28.114683 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 3 13:40:28.162169 kernel: cryptd: max_cpu_qlen set to 1000 Mar 3 13:40:28.162231 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 3 13:40:28.227394 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 3 13:40:28.227469 kernel: GPT:9289727 != 19775487 Mar 3 13:40:28.227489 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Mar 3 13:40:28.227504 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 3 13:40:28.227517 kernel: GPT:9289727 != 19775487 Mar 3 13:40:28.227538 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 3 13:40:28.227551 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 3 13:40:28.231409 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 3 13:40:28.231974 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:40:28.278674 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:40:28.296401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:40:28.320546 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 3 13:40:28.354215 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 3 13:40:28.400162 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 3 13:40:28.444958 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 3 13:40:28.517157 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 3 13:40:28.529664 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Mar 3 13:40:28.568270 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 3 13:40:29.005555 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 3 13:40:29.006366 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:40:29.045393 kernel: libata version 3.00 loaded. Mar 3 13:40:29.054264 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:40:29.066201 kernel: ahci 0000:00:1f.2: version 3.0 Mar 3 13:40:29.074682 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 3 13:40:29.076067 disk-uuid[535]: Primary Header is updated. Mar 3 13:40:29.076067 disk-uuid[535]: Secondary Entries is updated. Mar 3 13:40:29.076067 disk-uuid[535]: Secondary Header is updated. Mar 3 13:40:29.161233 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 3 13:40:29.161522 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 3 13:40:29.161751 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 3 13:40:29.162221 kernel: AES CTR mode by8 optimization enabled Mar 3 13:40:29.162239 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 3 13:40:29.162254 kernel: scsi host0: ahci Mar 3 13:40:29.162977 kernel: scsi host1: ahci Mar 3 13:40:29.171655 kernel: scsi host2: ahci Mar 3 13:40:29.178064 kernel: scsi host3: ahci Mar 3 13:40:29.187319 kernel: scsi host4: ahci Mar 3 13:40:29.204738 kernel: scsi host5: ahci Mar 3 13:40:29.241442 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Mar 3 13:40:29.241507 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Mar 3 13:40:29.252484 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Mar 3 13:40:29.264064 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Mar 3 13:40:29.275331 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 
0xc1040300 irq 34 lpm-pol 1 Mar 3 13:40:29.275398 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Mar 3 13:40:29.399493 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:40:29.601238 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 3 13:40:29.608231 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 3 13:40:29.618062 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 3 13:40:29.628039 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 3 13:40:29.638377 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 3 13:40:29.651505 kernel: ata3.00: LPM support broken, forcing max_power Mar 3 13:40:29.651547 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 3 13:40:29.651565 kernel: ata3.00: applying bridge limits Mar 3 13:40:29.665248 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 3 13:40:29.678037 kernel: ata3.00: LPM support broken, forcing max_power Mar 3 13:40:29.678175 kernel: ata3.00: configured for UDMA/100 Mar 3 13:40:29.693204 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 3 13:40:29.778237 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 3 13:40:29.778674 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 3 13:40:29.807615 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 3 13:40:30.168284 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 3 13:40:30.172578 disk-uuid[536]: The operation has completed successfully. Mar 3 13:40:30.523220 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 3 13:40:30.523588 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 3 13:40:30.540448 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 3 13:40:30.543075 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Mar 3 13:40:30.546677 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 3 13:40:30.560528 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 3 13:40:30.664307 sh[639]: Success Mar 3 13:40:30.578545 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 3 13:40:30.652999 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 3 13:40:30.737355 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 3 13:40:30.737427 kernel: device-mapper: uevent: version 1.0.3 Mar 3 13:40:30.748564 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 3 13:40:30.766310 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 3 13:40:30.821676 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 3 13:40:30.929202 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 3 13:40:30.943388 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 3 13:40:30.998483 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 3 13:40:31.048680 kernel: BTRFS: device fsid f550cb98-648e-4600-9237-4b15eb09827b devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (659) Mar 3 13:40:31.048739 kernel: BTRFS info (device dm-0): first mount of filesystem f550cb98-648e-4600-9237-4b15eb09827b Mar 3 13:40:31.048759 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:40:31.065304 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 3 13:40:31.065350 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 3 13:40:31.069482 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 3 13:40:31.070684 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Mar 3 13:40:31.081949 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 3 13:40:31.084304 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 3 13:40:31.138504 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 3 13:40:31.220413 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (682) Mar 3 13:40:31.240648 kernel: BTRFS info (device vda6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:40:31.240712 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:40:31.267286 kernel: BTRFS info (device vda6): turning on async discard Mar 3 13:40:31.267332 kernel: BTRFS info (device vda6): enabling free space tree Mar 3 13:40:31.290075 kernel: BTRFS info (device vda6): last unmount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:40:31.298180 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 3 13:40:31.322337 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 3 13:40:31.953740 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 3 13:40:31.975443 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 3 13:40:32.153806 systemd-networkd[834]: lo: Link UP Mar 3 13:40:32.154295 systemd-networkd[834]: lo: Gained carrier Mar 3 13:40:32.165725 systemd-networkd[834]: Enumeration completed Mar 3 13:40:32.166292 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 3 13:40:32.182997 systemd[1]: Reached target network.target - Network. Mar 3 13:40:32.189197 systemd-networkd[834]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 3 13:40:32.240958 ignition[737]: Ignition 2.22.0 Mar 3 13:40:32.189204 systemd-networkd[834]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 3 13:40:32.240974 ignition[737]: Stage: fetch-offline Mar 3 13:40:32.203081 systemd-networkd[834]: eth0: Link UP Mar 3 13:40:32.241406 ignition[737]: no configs at "/usr/lib/ignition/base.d" Mar 3 13:40:32.203467 systemd-networkd[834]: eth0: Gained carrier Mar 3 13:40:32.241423 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 3 13:40:32.203482 systemd-networkd[834]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 3 13:40:32.242062 ignition[737]: parsed url from cmdline: "" Mar 3 13:40:32.341697 systemd-networkd[834]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 3 13:40:32.242069 ignition[737]: no config URL provided Mar 3 13:40:32.242077 ignition[737]: reading system config file "/usr/lib/ignition/user.ign" Mar 3 13:40:32.242175 ignition[737]: no config at "/usr/lib/ignition/user.ign" Mar 3 13:40:32.242362 ignition[737]: op(1): [started] loading QEMU firmware config module Mar 3 13:40:32.242372 ignition[737]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 3 13:40:32.296771 ignition[737]: op(1): [finished] loading QEMU firmware config module Mar 3 13:40:33.311651 ignition[737]: parsing config with SHA512: 69238eab40848561e97b23c09c0ae08c2d3a701c78adb5fd678ee5ea3775e88b43c80e286f83474b1645ae6a6361c7b4f5dc53afdfdeeb7f9233603a7cf90154 Mar 3 13:40:33.508392 systemd-networkd[834]: eth0: Gained IPv6LL Mar 3 13:40:33.570640 unknown[737]: fetched base config from "system" Mar 3 13:40:33.570657 unknown[737]: fetched user config from "qemu" Mar 3 13:40:33.573325 ignition[737]: fetch-offline: fetch-offline passed Mar 3 13:40:33.578601 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 3 13:40:33.573497 ignition[737]: Ignition finished successfully Mar 3 13:40:33.593396 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 3 13:40:33.595667 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 3 13:40:33.813305 ignition[842]: Ignition 2.22.0 Mar 3 13:40:33.813378 ignition[842]: Stage: kargs Mar 3 13:40:33.813549 ignition[842]: no configs at "/usr/lib/ignition/base.d" Mar 3 13:40:33.813562 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 3 13:40:33.832264 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 3 13:40:33.823695 ignition[842]: kargs: kargs passed Mar 3 13:40:33.850261 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 3 13:40:33.824012 ignition[842]: Ignition finished successfully Mar 3 13:40:33.952566 ignition[850]: Ignition 2.22.0 Mar 3 13:40:33.952662 ignition[850]: Stage: disks Mar 3 13:40:33.959496 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 3 13:40:33.953400 ignition[850]: no configs at "/usr/lib/ignition/base.d" Mar 3 13:40:33.970372 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 3 13:40:33.953417 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 3 13:40:33.974511 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 3 13:40:33.955744 ignition[850]: disks: disks passed Mar 3 13:40:33.993272 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 3 13:40:33.955810 ignition[850]: Ignition finished successfully Mar 3 13:40:34.031586 systemd[1]: Reached target sysinit.target - System Initialization. Mar 3 13:40:34.049793 systemd[1]: Reached target basic.target - Basic System. Mar 3 13:40:34.070981 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 3 13:40:34.182290 systemd-fsck[860]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 3 13:40:34.196077 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 3 13:40:34.219370 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 3 13:40:34.658681 kernel: EXT4-fs (vda9): mounted filesystem f0c751de-febc-4e57-b330-c926d38ed5ec r/w with ordered data mode. Quota mode: none. Mar 3 13:40:34.668745 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 3 13:40:34.715715 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 3 13:40:34.756505 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 3 13:40:34.796431 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 3 13:40:34.850613 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 3 13:40:34.852041 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 3 13:40:34.852104 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 3 13:40:34.942637 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 3 13:40:34.964108 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (868) Mar 3 13:40:34.966429 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 3 13:40:34.997986 kernel: BTRFS info (device vda6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:40:34.998039 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:40:35.041497 kernel: BTRFS info (device vda6): turning on async discard Mar 3 13:40:35.041560 kernel: BTRFS info (device vda6): enabling free space tree Mar 3 13:40:35.044595 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 3 13:40:35.162425 initrd-setup-root[892]: cut: /sysroot/etc/passwd: No such file or directory Mar 3 13:40:35.187711 initrd-setup-root[899]: cut: /sysroot/etc/group: No such file or directory Mar 3 13:40:35.213773 initrd-setup-root[906]: cut: /sysroot/etc/shadow: No such file or directory Mar 3 13:40:35.249031 initrd-setup-root[913]: cut: /sysroot/etc/gshadow: No such file or directory Mar 3 13:40:35.609492 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 3 13:40:35.634716 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 3 13:40:35.636786 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 3 13:40:35.693790 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 3 13:40:35.709807 kernel: BTRFS info (device vda6): last unmount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:40:35.751495 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 3 13:40:36.568211 ignition[981]: INFO : Ignition 2.22.0 Mar 3 13:40:36.568211 ignition[981]: INFO : Stage: mount Mar 3 13:40:36.568211 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 3 13:40:36.568211 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 3 13:40:36.600592 ignition[981]: INFO : mount: mount passed Mar 3 13:40:36.600592 ignition[981]: INFO : Ignition finished successfully Mar 3 13:40:36.621421 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 3 13:40:36.633589 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 3 13:40:36.686240 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 3 13:40:36.787097 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (994)
Mar 3 13:40:36.795061 kernel: BTRFS info (device vda6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8
Mar 3 13:40:36.795116 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 3 13:40:36.834723 kernel: BTRFS info (device vda6): turning on async discard
Mar 3 13:40:36.835049 kernel: BTRFS info (device vda6): enabling free space tree
Mar 3 13:40:36.838536 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 3 13:40:37.311272 ignition[1011]: INFO : Ignition 2.22.0
Mar 3 13:40:37.311272 ignition[1011]: INFO : Stage: files
Mar 3 13:40:37.311272 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 13:40:37.311272 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 3 13:40:37.347537 ignition[1011]: DEBUG : files: compiled without relabeling support, skipping
Mar 3 13:40:37.360622 ignition[1011]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 3 13:40:37.360622 ignition[1011]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 3 13:40:37.394361 ignition[1011]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 3 13:40:37.407299 ignition[1011]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 3 13:40:37.407299 ignition[1011]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 3 13:40:37.407299 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 3 13:40:37.407299 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 3 13:40:37.400126 unknown[1011]: wrote ssh authorized keys file for user: core
Mar 3 13:40:37.519584 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 3 13:40:37.942002 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 3 13:40:37.956790 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 3 13:40:37.971918 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 3 13:40:37.986016 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 3 13:40:38.000096 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 3 13:40:38.000096 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 3 13:40:38.027800 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 3 13:40:38.027800 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 3 13:40:38.055609 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 3 13:40:38.070692 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 3 13:40:38.070692 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 3 13:40:38.070692 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 3 13:40:38.070692 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 3 13:40:38.070692 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 3 13:40:38.070692 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 3 13:40:38.448640 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 3 13:40:41.158371 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 3 13:40:41.181284 ignition[1011]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 3 13:40:41.181284 ignition[1011]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 3 13:40:41.223521 ignition[1011]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 3 13:40:41.223521 ignition[1011]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 3 13:40:41.259700 ignition[1011]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 3 13:40:41.259700 ignition[1011]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 3 13:40:41.259700 ignition[1011]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 3 13:40:41.259700 ignition[1011]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 3 13:40:41.259700 ignition[1011]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 3 13:40:41.359991 ignition[1011]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 3 13:40:41.359991 ignition[1011]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 3 13:40:41.359991 ignition[1011]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 3 13:40:41.359991 ignition[1011]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 3 13:40:41.359991 ignition[1011]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 3 13:40:41.359991 ignition[1011]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 3 13:40:41.359991 ignition[1011]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 3 13:40:41.359991 ignition[1011]: INFO : files: files passed
Mar 3 13:40:41.359991 ignition[1011]: INFO : Ignition finished successfully
Mar 3 13:40:41.312808 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 3 13:40:41.332796 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 3 13:40:41.351079 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 3 13:40:41.527648 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 3 13:40:41.528103 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 3 13:40:41.591086 initrd-setup-root-after-ignition[1040]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 3 13:40:41.613548 initrd-setup-root-after-ignition[1042]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:40:41.630626 initrd-setup-root-after-ignition[1042]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:40:41.648370 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:40:41.666277 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 13:40:41.679685 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 3 13:40:41.716523 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 3 13:40:41.860111 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 3 13:40:41.861105 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 3 13:40:41.870708 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 3 13:40:41.903993 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 3 13:40:41.925514 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 3 13:40:41.927628 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 3 13:40:42.069618 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 13:40:42.078145 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 3 13:40:42.157671 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 3 13:40:42.174286 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 13:40:42.204595 systemd[1]: Stopped target timers.target - Timer Units.
Mar 3 13:40:42.229693 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 3 13:40:42.231421 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 13:40:42.273448 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 3 13:40:42.299148 systemd[1]: Stopped target basic.target - Basic System.
Mar 3 13:40:42.330082 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 3 13:40:42.353016 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 3 13:40:42.382103 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 3 13:40:42.407770 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 3 13:40:42.445392 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 3 13:40:42.475134 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 3 13:40:42.487137 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 3 13:40:42.507785 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 3 13:40:42.536781 systemd[1]: Stopped target swap.target - Swaps.
Mar 3 13:40:42.561638 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 3 13:40:42.562382 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 3 13:40:42.589798 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 3 13:40:42.609048 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 13:40:42.609575 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 3 13:40:42.610469 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 13:40:42.639296 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 3 13:40:42.639521 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 3 13:40:42.676670 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 3 13:40:42.677288 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 3 13:40:42.685340 systemd[1]: Stopped target paths.target - Path Units.
Mar 3 13:40:42.728717 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 3 13:40:42.736429 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 13:40:42.747418 systemd[1]: Stopped target slices.target - Slice Units.
Mar 3 13:40:42.769311 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 3 13:40:42.790424 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 3 13:40:42.790540 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 3 13:40:42.815615 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 3 13:40:42.815762 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 3 13:40:42.837527 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 3 13:40:42.837728 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 13:40:42.858365 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 3 13:40:42.858573 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 3 13:40:42.879088 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 3 13:40:42.888984 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 3 13:40:42.889343 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 13:40:42.973275 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 3 13:40:43.011447 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 3 13:40:43.023168 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 13:40:43.035787 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 3 13:40:43.036574 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 3 13:40:43.064365 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 3 13:40:43.066309 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 3 13:40:43.066437 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 3 13:40:43.117341 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 3 13:40:43.117691 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 3 13:40:43.148092 ignition[1066]: INFO : Ignition 2.22.0
Mar 3 13:40:43.148092 ignition[1066]: INFO : Stage: umount
Mar 3 13:40:43.148092 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 13:40:43.148092 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 3 13:40:43.148092 ignition[1066]: INFO : umount: umount passed
Mar 3 13:40:43.148092 ignition[1066]: INFO : Ignition finished successfully
Mar 3 13:40:43.180769 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 3 13:40:43.181412 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 3 13:40:43.201726 systemd[1]: Stopped target network.target - Network.
Mar 3 13:40:43.219744 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 3 13:40:43.220037 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 3 13:40:43.234167 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 3 13:40:43.234354 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 3 13:40:43.252011 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 3 13:40:43.252089 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 3 13:40:43.269012 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 3 13:40:43.269095 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 3 13:40:43.281592 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 3 13:40:43.281691 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 3 13:40:43.296970 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 3 13:40:43.308521 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 3 13:40:43.334471 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 3 13:40:43.335295 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 3 13:40:43.363270 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 3 13:40:43.363705 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 3 13:40:43.363777 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 13:40:43.388460 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 3 13:40:43.399103 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 3 13:40:43.400026 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 3 13:40:43.442475 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 3 13:40:43.442659 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 3 13:40:43.457084 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 3 13:40:43.457169 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 13:40:43.472592 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 3 13:40:43.489777 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 3 13:40:43.490061 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 3 13:40:43.505653 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 3 13:40:43.505737 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 3 13:40:43.539758 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 3 13:40:43.539993 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 3 13:40:43.549545 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 13:40:43.563281 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 3 13:40:43.630107 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 3 13:40:43.638299 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 13:40:43.657402 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 3 13:40:43.657551 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 3 13:40:43.672410 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 3 13:40:43.672467 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 13:40:43.672686 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 3 13:40:43.672765 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 3 13:40:43.709660 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 3 13:40:43.709760 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 3 13:40:43.725478 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 3 13:40:43.725568 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 3 13:40:43.760748 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 3 13:40:43.766116 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 3 13:40:43.766322 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 13:40:43.814317 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 3 13:40:43.814407 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 13:40:43.843549 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 3 13:40:43.843647 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 13:40:43.870688 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 3 13:40:43.871104 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 3 13:40:43.884310 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 3 13:40:43.884531 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 3 13:40:43.900513 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 3 13:40:43.914600 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 3 13:40:43.954941 systemd[1]: Switching root.
Mar 3 13:40:44.008077 systemd-journald[203]: Journal stopped
Mar 3 13:40:47.341638 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Mar 3 13:40:47.341740 kernel: SELinux: policy capability network_peer_controls=1
Mar 3 13:40:47.341764 kernel: SELinux: policy capability open_perms=1
Mar 3 13:40:47.341782 kernel: SELinux: policy capability extended_socket_class=1
Mar 3 13:40:47.341800 kernel: SELinux: policy capability always_check_network=0
Mar 3 13:40:47.341996 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 3 13:40:47.342018 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 3 13:40:47.342035 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 3 13:40:47.342052 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 3 13:40:47.342068 kernel: SELinux: policy capability userspace_initial_context=0
Mar 3 13:40:47.342085 kernel: audit: type=1403 audit(1772545244.381:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 3 13:40:47.342104 systemd[1]: Successfully loaded SELinux policy in 136.750ms.
Mar 3 13:40:47.342147 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.436ms.
Mar 3 13:40:47.342168 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 3 13:40:47.342194 systemd[1]: Detected virtualization kvm.
Mar 3 13:40:47.342321 systemd[1]: Detected architecture x86-64.
Mar 3 13:40:47.342343 systemd[1]: Detected first boot.
Mar 3 13:40:47.342363 systemd[1]: Initializing machine ID from VM UUID.
Mar 3 13:40:47.342382 zram_generator::config[1111]: No configuration found.
Mar 3 13:40:47.342410 kernel: Guest personality initialized and is inactive
Mar 3 13:40:47.342428 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 3 13:40:47.342446 kernel: Initialized host personality
Mar 3 13:40:47.342468 kernel: NET: Registered PF_VSOCK protocol family
Mar 3 13:40:47.342486 systemd[1]: Populated /etc with preset unit settings.
Mar 3 13:40:47.342506 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 3 13:40:47.342524 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 3 13:40:47.342542 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 3 13:40:47.342561 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 3 13:40:47.342579 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 3 13:40:47.342598 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 3 13:40:47.342623 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 3 13:40:47.342647 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 3 13:40:47.342669 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 3 13:40:47.342688 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 3 13:40:47.342707 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 3 13:40:47.342727 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 3 13:40:47.342754 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 13:40:47.342773 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 13:40:47.342793 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 3 13:40:47.342987 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 3 13:40:47.343015 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 3 13:40:47.343036 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 3 13:40:47.343055 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 3 13:40:47.343073 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 13:40:47.343092 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 3 13:40:47.343110 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 3 13:40:47.343128 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 3 13:40:47.343152 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 3 13:40:47.343171 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 3 13:40:47.343189 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 13:40:47.343296 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 3 13:40:47.343316 systemd[1]: Reached target slices.target - Slice Units.
Mar 3 13:40:47.343333 systemd[1]: Reached target swap.target - Swaps.
Mar 3 13:40:47.343351 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 3 13:40:47.343368 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 3 13:40:47.343384 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 3 13:40:47.343406 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 13:40:47.343423 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 3 13:40:47.343441 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 13:40:47.343462 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 3 13:40:47.343481 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 3 13:40:47.343499 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 3 13:40:47.343518 systemd[1]: Mounting media.mount - External Media Directory...
Mar 3 13:40:47.343537 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:40:47.343557 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 3 13:40:47.343582 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 3 13:40:47.343602 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 3 13:40:47.343622 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 3 13:40:47.343643 systemd[1]: Reached target machines.target - Containers.
Mar 3 13:40:47.343662 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 3 13:40:47.343681 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 13:40:47.343701 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 3 13:40:47.343720 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 3 13:40:47.343744 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 13:40:47.343763 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 3 13:40:47.343783 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 13:40:47.343801 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 3 13:40:47.343995 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 13:40:47.344022 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 3 13:40:47.344041 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 3 13:40:47.344064 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 3 13:40:47.344083 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 3 13:40:47.344108 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 3 13:40:47.344127 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 13:40:47.344146 kernel: ACPI: bus type drm_connector registered
Mar 3 13:40:47.344164 kernel: fuse: init (API version 7.41)
Mar 3 13:40:47.344181 kernel: loop: module loaded
Mar 3 13:40:47.344200 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 3 13:40:47.344318 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 3 13:40:47.344339 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 3 13:40:47.344396 systemd-journald[1196]: Collecting audit messages is disabled.
Mar 3 13:40:47.344435 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 3 13:40:47.344455 systemd-journald[1196]: Journal started
Mar 3 13:40:47.344487 systemd-journald[1196]: Runtime Journal (/run/log/journal/d4c301aaff4448d5ac3d71f77cabfd18) is 6M, max 48.1M, 42.1M free.
Mar 3 13:40:45.924393 systemd[1]: Queued start job for default target multi-user.target.
Mar 3 13:40:45.953525 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 3 13:40:45.955596 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 3 13:40:45.956759 systemd[1]: systemd-journald.service: Consumed 2.794s CPU time.
Mar 3 13:40:47.392628 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 3 13:40:47.418524 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 3 13:40:47.418644 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 3 13:40:47.426312 systemd[1]: Stopped verity-setup.service.
Mar 3 13:40:47.442297 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:40:47.474336 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 3 13:40:47.486655 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 3 13:40:47.498552 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 3 13:40:47.510366 systemd[1]: Mounted media.mount - External Media Directory.
Mar 3 13:40:47.521444 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 3 13:40:47.533483 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 3 13:40:47.545403 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 3 13:40:47.556137 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 3 13:40:47.570139 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 13:40:47.585129 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 3 13:40:47.585728 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 3 13:40:47.599630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 13:40:47.600397 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 13:40:47.613464 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 3 13:40:47.614118 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 3 13:40:47.626409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 3 13:40:47.627195 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 3 13:40:47.641164 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 3 13:40:47.641737 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 3 13:40:47.654521 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 3 13:40:47.655180 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 3 13:40:47.667089 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 3 13:40:47.679709 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 3 13:40:47.694599 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 3 13:40:47.709619 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 3 13:40:47.724023 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 3 13:40:47.762049 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 3 13:40:47.774543 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 3 13:40:47.790356 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 3 13:40:47.803117 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 3 13:40:47.803355 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 3 13:40:47.817297 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 3 13:40:47.834752 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Mar 3 13:40:47.848029 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 3 13:40:47.851648 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 3 13:40:47.867101 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 3 13:40:47.879461 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 3 13:40:47.882559 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 3 13:40:47.895119 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 3 13:40:47.899379 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 3 13:40:47.915122 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 3 13:40:47.925633 systemd-journald[1196]: Time spent on flushing to /var/log/journal/d4c301aaff4448d5ac3d71f77cabfd18 is 31.144ms for 1061 entries. Mar 3 13:40:47.925633 systemd-journald[1196]: System Journal (/var/log/journal/d4c301aaff4448d5ac3d71f77cabfd18) is 8M, max 195.6M, 187.6M free. Mar 3 13:40:48.004588 systemd-journald[1196]: Received client request to flush runtime journal. Mar 3 13:40:47.942391 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 3 13:40:47.957111 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 3 13:40:47.975661 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 3 13:40:47.990197 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 3 13:40:48.007445 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Mar 3 13:40:48.027059 kernel: loop0: detected capacity change from 0 to 219192 Mar 3 13:40:48.037135 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 3 13:40:48.057404 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 3 13:40:48.071680 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 3 13:40:48.102017 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 3 13:40:48.129514 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 3 13:40:48.132427 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 3 13:40:48.160160 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 3 13:40:48.169103 kernel: loop1: detected capacity change from 0 to 110984 Mar 3 13:40:48.178410 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 3 13:40:48.257301 kernel: loop2: detected capacity change from 0 to 128560 Mar 3 13:40:48.257078 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Mar 3 13:40:48.257103 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Mar 3 13:40:48.267462 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 3 13:40:48.379594 kernel: loop3: detected capacity change from 0 to 219192 Mar 3 13:40:48.426136 kernel: loop4: detected capacity change from 0 to 110984 Mar 3 13:40:48.476990 kernel: loop5: detected capacity change from 0 to 128560 Mar 3 13:40:48.519611 (sd-merge)[1254]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 3 13:40:48.522448 (sd-merge)[1254]: Merged extensions into '/usr'. Mar 3 13:40:48.531625 systemd[1]: Reload requested from client PID 1231 ('systemd-sysext') (unit systemd-sysext.service)... Mar 3 13:40:48.532118 systemd[1]: Reloading... Mar 3 13:40:48.666065 zram_generator::config[1283]: No configuration found. 
Mar 3 13:40:48.900803 ldconfig[1226]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 3 13:40:49.048578 systemd[1]: Reloading finished in 515 ms. Mar 3 13:40:49.079768 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 3 13:40:49.092517 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 3 13:40:49.106686 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 3 13:40:49.157450 systemd[1]: Starting ensure-sysext.service... Mar 3 13:40:49.168404 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 3 13:40:49.193608 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 3 13:40:49.226988 systemd[1]: Reload requested from client PID 1319 ('systemctl') (unit ensure-sysext.service)... Mar 3 13:40:49.227005 systemd[1]: Reloading... Mar 3 13:40:49.229703 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Mar 3 13:40:49.229811 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 3 13:40:49.230435 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 3 13:40:49.230746 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 3 13:40:49.232625 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 3 13:40:49.233097 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. Mar 3 13:40:49.233370 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. Mar 3 13:40:49.243434 systemd-tmpfiles[1320]: Detected autofs mount point /boot during canonicalization of boot. 
Mar 3 13:40:49.243517 systemd-tmpfiles[1320]: Skipping /boot Mar 3 13:40:49.265667 systemd-tmpfiles[1320]: Detected autofs mount point /boot during canonicalization of boot. Mar 3 13:40:49.265686 systemd-tmpfiles[1320]: Skipping /boot Mar 3 13:40:49.266791 systemd-udevd[1321]: Using default interface naming scheme 'v255'. Mar 3 13:40:49.350064 zram_generator::config[1348]: No configuration found. Mar 3 13:40:49.670297 kernel: mousedev: PS/2 mouse device common for all mice Mar 3 13:40:49.690030 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 3 13:40:49.708151 kernel: ACPI: button: Power Button [PWRF] Mar 3 13:40:49.779117 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 3 13:40:49.792128 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 3 13:40:49.803959 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 3 13:40:49.819422 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 3 13:40:49.833713 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 3 13:40:49.834615 systemd[1]: Reloading finished in 607 ms. Mar 3 13:40:49.849123 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 3 13:40:49.881781 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 3 13:40:50.578505 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1088572974 wd_nsec: 1088572348 Mar 3 13:40:50.592177 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 3 13:40:50.616527 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 3 13:40:50.655503 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 3 13:40:50.676439 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Mar 3 13:40:50.717475 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 3 13:40:50.752363 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 3 13:40:50.768668 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 3 13:40:50.801716 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 3 13:40:50.802316 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 3 13:40:50.827360 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 3 13:40:50.855762 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 3 13:40:50.877994 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 3 13:40:50.878392 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 3 13:40:50.878540 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 3 13:40:50.898370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:40:50.898558 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 3 13:40:50.940488 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 3 13:40:50.983011 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 3 13:40:50.983542 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 3 13:40:50.985391 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 3 13:40:50.985997 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 3 13:40:50.996510 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 3 13:40:51.008728 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 3 13:40:51.021361 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 3 13:40:51.039559 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 3 13:40:51.040548 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 3 13:40:51.059469 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 3 13:40:51.061808 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 3 13:40:51.079350 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 3 13:40:51.082199 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:40:51.874408 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 3 13:40:51.875992 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 3 13:40:51.963493 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 3 13:40:51.966667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Mar 3 13:40:51.981148 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 3 13:40:51.995188 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 3 13:40:52.024714 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 3 13:40:52.040217 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 3 13:40:52.051748 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 3 13:40:52.052008 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 3 13:40:52.063704 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:40:52.077492 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 3 13:40:52.099490 systemd[1]: Finished ensure-sysext.service. Mar 3 13:40:52.189164 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 3 13:40:52.313736 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 3 13:40:52.314720 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 3 13:40:52.377808 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 3 13:40:52.379018 augenrules[1482]: No rules Mar 3 13:40:52.404664 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 3 13:40:52.422598 systemd[1]: audit-rules.service: Deactivated successfully. Mar 3 13:40:52.425008 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 3 13:40:52.562808 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 3 13:40:52.564032 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 3 13:40:52.575333 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 3 13:40:52.635215 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 3 13:40:52.642610 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 3 13:40:52.675218 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 3 13:40:52.678775 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 3 13:40:52.749792 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 3 13:40:52.768339 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 3 13:40:52.768766 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 3 13:40:52.768960 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 3 13:40:52.792700 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 3 13:40:53.044025 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:40:53.051041 kernel: kvm_amd: TSC scaling supported Mar 3 13:40:53.051095 kernel: kvm_amd: Nested Virtualization enabled Mar 3 13:40:53.051119 kernel: kvm_amd: Nested Paging enabled Mar 3 13:40:53.065114 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 3 13:40:53.075328 kernel: kvm_amd: PMU virtualization is disabled Mar 3 13:40:53.585555 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 3 13:40:53.607571 systemd[1]: Reached target time-set.target - System Time Set. 
Mar 3 13:40:53.613162 systemd-networkd[1442]: lo: Link UP Mar 3 13:40:53.613352 systemd-networkd[1442]: lo: Gained carrier Mar 3 13:40:53.627993 systemd-networkd[1442]: Enumeration completed Mar 3 13:40:53.629191 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 3 13:40:53.634577 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 3 13:40:53.634708 systemd-networkd[1442]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 3 13:40:53.644763 systemd-networkd[1442]: eth0: Link UP Mar 3 13:40:53.647529 systemd-networkd[1442]: eth0: Gained carrier Mar 3 13:40:53.647556 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 3 13:40:53.669780 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 3 13:40:53.672369 systemd-resolved[1449]: Positive Trust Anchors: Mar 3 13:40:53.672475 systemd-resolved[1449]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 3 13:40:53.672516 systemd-resolved[1449]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 3 13:40:53.709233 systemd-networkd[1442]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 3 13:40:53.713740 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection. Mar 3 13:40:53.740749 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 3 13:40:54.477561 systemd-timesyncd[1487]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 3 13:40:54.477663 systemd-timesyncd[1487]: Initial clock synchronization to Tue 2026-03-03 13:40:54.474152 UTC. Mar 3 13:40:54.488899 systemd-resolved[1449]: Defaulting to hostname 'linux'. Mar 3 13:40:54.495523 kernel: EDAC MC: Ver: 3.0.0 Mar 3 13:40:54.503132 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 3 13:40:54.523926 systemd[1]: Reached target network.target - Network. Mar 3 13:40:54.538064 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 3 13:40:54.553184 systemd[1]: Reached target sysinit.target - System Initialization. Mar 3 13:40:54.579254 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 3 13:40:54.596080 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Mar 3 13:40:54.612032 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 3 13:40:54.628103 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 3 13:40:54.647221 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 3 13:40:54.675809 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 3 13:40:54.693620 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 3 13:40:54.693926 systemd[1]: Reached target paths.target - Path Units. Mar 3 13:40:54.707602 systemd[1]: Reached target timers.target - Timer Units. Mar 3 13:40:54.739691 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 3 13:40:54.777803 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 3 13:40:54.803884 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 3 13:40:54.824213 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 3 13:40:54.849565 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 3 13:40:54.921517 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 3 13:40:54.948066 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 3 13:40:54.980674 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 3 13:40:54.997258 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 3 13:40:55.022818 systemd[1]: Reached target sockets.target - Socket Units. Mar 3 13:40:55.033866 systemd[1]: Reached target basic.target - Basic System. Mar 3 13:40:55.067924 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Mar 3 13:40:55.068135 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 3 13:40:55.072816 systemd[1]: Starting containerd.service - containerd container runtime... Mar 3 13:40:55.108520 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 3 13:40:55.128960 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 3 13:40:55.147856 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 3 13:40:55.178680 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 3 13:40:55.192668 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 3 13:40:55.196881 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 3 13:40:55.197989 jq[1522]: false Mar 3 13:40:55.216226 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 3 13:40:55.990250 oslogin_cache_refresh[1524]: Refreshing passwd entry cache Mar 3 13:40:55.992989 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Refreshing passwd entry cache Mar 3 13:40:55.999520 systemd-networkd[1442]: eth0: Gained IPv6LL Mar 3 13:40:56.014141 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Failure getting users, quitting Mar 3 13:40:56.015084 oslogin_cache_refresh[1524]: Failure getting users, quitting Mar 3 13:40:56.015494 oslogin_cache_refresh[1524]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 3 13:40:56.016004 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Mar 3 13:40:56.016004 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Refreshing group entry cache Mar 3 13:40:56.015602 oslogin_cache_refresh[1524]: Refreshing group entry cache Mar 3 13:40:56.034130 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Failure getting groups, quitting Mar 3 13:40:56.034130 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 3 13:40:56.034104 oslogin_cache_refresh[1524]: Failure getting groups, quitting Mar 3 13:40:56.034130 oslogin_cache_refresh[1524]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 3 13:40:56.055619 extend-filesystems[1523]: Found /dev/vda6 Mar 3 13:40:56.075493 extend-filesystems[1523]: Found /dev/vda9 Mar 3 13:40:56.088178 extend-filesystems[1523]: Checking size of /dev/vda9 Mar 3 13:40:56.123505 extend-filesystems[1523]: Resized partition /dev/vda9 Mar 3 13:40:56.136646 extend-filesystems[1540]: resize2fs 1.47.3 (8-Jul-2025) Mar 3 13:40:56.167894 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 3 13:40:56.189568 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 3 13:40:56.210584 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 3 13:40:56.256865 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 3 13:40:56.297471 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 3 13:40:56.324035 extend-filesystems[1540]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 3 13:40:56.324035 extend-filesystems[1540]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 3 13:40:56.324035 extend-filesystems[1540]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 3 13:40:56.378607 extend-filesystems[1523]: Resized filesystem in /dev/vda9 Mar 3 13:40:56.375008 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 3 13:40:56.397219 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 3 13:40:56.398666 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 3 13:40:56.402694 systemd[1]: Starting update-engine.service - Update Engine... Mar 3 13:40:56.455257 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 3 13:40:56.484196 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 3 13:40:56.527910 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 3 13:40:56.563705 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 3 13:40:56.573248 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 3 13:40:56.575986 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 3 13:40:56.586917 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 3 13:40:56.631095 jq[1549]: true Mar 3 13:40:56.644135 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 3 13:40:56.645163 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 3 13:40:56.671110 systemd[1]: motdgen.service: Deactivated successfully. Mar 3 13:40:56.695589 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 3 13:40:56.718585 update_engine[1548]: I20260303 13:40:56.704703 1548 main.cc:92] Flatcar Update Engine starting Mar 3 13:40:56.793021 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 3 13:40:56.799888 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 3 13:40:56.956944 jq[1555]: true Mar 3 13:40:56.983999 (ntainerd)[1556]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 3 13:40:56.987627 tar[1554]: linux-amd64/LICENSE Mar 3 13:40:56.990460 tar[1554]: linux-amd64/helm Mar 3 13:40:56.996872 systemd-logind[1547]: Watching system buttons on /dev/input/event2 (Power Button) Mar 3 13:40:56.996993 systemd-logind[1547]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 3 13:40:57.004052 systemd-logind[1547]: New seat seat0. Mar 3 13:40:57.005821 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 3 13:40:57.027067 systemd[1]: Reached target network-online.target - Network is Online. Mar 3 13:40:57.042038 dbus-daemon[1520]: [system] SELinux support is enabled Mar 3 13:40:57.043965 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 3 13:40:57.053185 update_engine[1548]: I20260303 13:40:57.050123 1548 update_check_scheduler.cc:74] Next update check in 9m3s Mar 3 13:40:57.077879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:40:57.098822 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 3 13:40:57.124677 systemd[1]: Started systemd-logind.service - User Login Management. Mar 3 13:40:57.169661 sshd_keygen[1552]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 3 13:40:57.136194 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 3 13:40:57.173824 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 3 13:40:57.173856 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Mar 3 13:40:57.189219 dbus-daemon[1520]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 3 13:40:57.191636 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 3 13:40:57.192172 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 3 13:40:57.223836 systemd[1]: Started update-engine.service - Update Engine. Mar 3 13:40:57.301061 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 3 13:40:57.882707 bash[1602]: Updated "/home/core/.ssh/authorized_keys" Mar 3 13:40:57.892255 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 3 13:40:57.931201 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 3 13:40:57.991096 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 3 13:40:58.018120 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 3 13:40:58.019001 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 3 13:40:58.108968 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 3 13:40:58.134645 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 3 13:40:58.147250 systemd[1]: Started sshd@0-10.0.0.57:22-10.0.0.1:48570.service - OpenSSH per-connection server daemon (10.0.0.1:48570). Mar 3 13:40:58.213070 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 3 13:40:58.278584 systemd[1]: issuegen.service: Deactivated successfully. Mar 3 13:40:58.279168 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Mar 3 13:40:58.313611 locksmithd[1595]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 3 13:40:58.320682 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 3 13:40:58.426096 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 3 13:40:58.453238 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 3 13:40:58.481889 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 3 13:40:58.500974 systemd[1]: Reached target getty.target - Login Prompts. Mar 3 13:40:58.615229 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 48570 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI Mar 3 13:40:58.618880 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:40:58.653172 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 3 13:40:58.675455 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 3 13:40:58.717915 systemd-logind[1547]: New session 1 of user core. Mar 3 13:40:58.838613 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 3 13:40:58.865964 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 3 13:40:58.915167 (systemd)[1636]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 3 13:40:58.931622 systemd-logind[1547]: New session c1 of user core. Mar 3 13:41:00.490079 systemd[1636]: Queued start job for default target default.target. 
Mar 3 13:41:00.500911 containerd[1556]: time="2026-03-03T13:41:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 3 13:41:00.506679 containerd[1556]: time="2026-03-03T13:41:00.505184907Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 3 13:41:00.506604 systemd[1636]: Created slice app.slice - User Application Slice.
Mar 3 13:41:00.506639 systemd[1636]: Reached target paths.target - Paths.
Mar 3 13:41:00.506998 systemd[1636]: Reached target timers.target - Timers.
Mar 3 13:41:00.511997 systemd[1636]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 3 13:41:00.825535 systemd[1636]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 3 13:41:00.825957 systemd[1636]: Reached target sockets.target - Sockets.
Mar 3 13:41:00.826045 systemd[1636]: Reached target basic.target - Basic System.
Mar 3 13:41:00.826116 systemd[1636]: Reached target default.target - Main User Target.
Mar 3 13:41:00.826170 systemd[1636]: Startup finished in 1.826s.
Mar 3 13:41:00.826636 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 3 13:41:00.857038 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 3 13:41:00.875017 containerd[1556]: time="2026-03-03T13:41:00.872155569Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="398.875µs"
Mar 3 13:41:00.875017 containerd[1556]: time="2026-03-03T13:41:00.872515992Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 3 13:41:00.875017 containerd[1556]: time="2026-03-03T13:41:00.872635445Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 3 13:41:00.877577 containerd[1556]: time="2026-03-03T13:41:00.876150475Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 3 13:41:00.877577 containerd[1556]: time="2026-03-03T13:41:00.876187334Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 3 13:41:00.877577 containerd[1556]: time="2026-03-03T13:41:00.876662812Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 3 13:41:00.877577 containerd[1556]: time="2026-03-03T13:41:00.877091101Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 3 13:41:00.877577 containerd[1556]: time="2026-03-03T13:41:00.877114064Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 3 13:41:00.882100 containerd[1556]: time="2026-03-03T13:41:00.882061118Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 3 13:41:00.882698 containerd[1556]: time="2026-03-03T13:41:00.882460684Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 3 13:41:00.882698 containerd[1556]: time="2026-03-03T13:41:00.882492343Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 3 13:41:00.882698 containerd[1556]: time="2026-03-03T13:41:00.882506279Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 3 13:41:00.884013 containerd[1556]: time="2026-03-03T13:41:00.883984949Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 3 13:41:00.884984 containerd[1556]: time="2026-03-03T13:41:00.884956483Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 3 13:41:00.885121 containerd[1556]: time="2026-03-03T13:41:00.885097535Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 3 13:41:00.885195 containerd[1556]: time="2026-03-03T13:41:00.885174690Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 3 13:41:00.885937 containerd[1556]: time="2026-03-03T13:41:00.885912216Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 3 13:41:00.896968 containerd[1556]: time="2026-03-03T13:41:00.896914783Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 3 13:41:00.899034 containerd[1556]: time="2026-03-03T13:41:00.899001828Z" level=info msg="metadata content store policy set" policy=shared
Mar 3 13:41:00.917476 containerd[1556]: time="2026-03-03T13:41:00.915981441Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 3 13:41:00.921530 containerd[1556]: time="2026-03-03T13:41:00.918061314Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 3 13:41:00.922638 containerd[1556]: time="2026-03-03T13:41:00.922104315Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 3 13:41:00.922638 containerd[1556]: time="2026-03-03T13:41:00.922231833Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 3 13:41:00.922638 containerd[1556]: time="2026-03-03T13:41:00.922252361Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 3 13:41:00.926950 containerd[1556]: time="2026-03-03T13:41:00.924652887Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 3 13:41:00.926950 containerd[1556]: time="2026-03-03T13:41:00.924688223Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 3 13:41:00.926950 containerd[1556]: time="2026-03-03T13:41:00.924706627Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 3 13:41:00.926950 containerd[1556]: time="2026-03-03T13:41:00.924725482Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 3 13:41:00.926950 containerd[1556]: time="2026-03-03T13:41:00.924742905Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 3 13:41:00.926950 containerd[1556]: time="2026-03-03T13:41:00.924756560Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 3 13:41:00.926950 containerd[1556]: time="2026-03-03T13:41:00.924773602Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 3 13:41:00.926950 containerd[1556]: time="2026-03-03T13:41:00.925084603Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 3 13:41:00.926950 containerd[1556]: time="2026-03-03T13:41:00.925450846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 3 13:41:00.926950 containerd[1556]: time="2026-03-03T13:41:00.925477426Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 3 13:41:00.926950 containerd[1556]: time="2026-03-03T13:41:00.925492404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 3 13:41:00.926950 containerd[1556]: time="2026-03-03T13:41:00.925506600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 3 13:41:00.927645 containerd[1556]: time="2026-03-03T13:41:00.927619003Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 3 13:41:00.927748 containerd[1556]: time="2026-03-03T13:41:00.927726804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 3 13:41:00.927948 containerd[1556]: time="2026-03-03T13:41:00.927926768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 3 13:41:00.928129 containerd[1556]: time="2026-03-03T13:41:00.928107765Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 3 13:41:00.928207 containerd[1556]: time="2026-03-03T13:41:00.928189428Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 3 13:41:00.928482 containerd[1556]: time="2026-03-03T13:41:00.928458219Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 3 13:41:00.929128 containerd[1556]: time="2026-03-03T13:41:00.929022734Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 3 13:41:00.929203 containerd[1556]: time="2026-03-03T13:41:00.929189495Z" level=info msg="Start snapshots syncer"
Mar 3 13:41:00.929505 containerd[1556]: time="2026-03-03T13:41:00.929480428Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 3 13:41:00.931182 systemd[1]: Started sshd@1-10.0.0.57:22-10.0.0.1:57586.service - OpenSSH per-connection server daemon (10.0.0.1:57586).
Mar 3 13:41:00.948025 containerd[1556]: time="2026-03-03T13:41:00.945602615Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 3 13:41:00.948025 containerd[1556]: time="2026-03-03T13:41:00.945922603Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 3 13:41:00.949066 containerd[1556]: time="2026-03-03T13:41:00.946117807Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 3 13:41:00.949066 containerd[1556]: time="2026-03-03T13:41:00.946505060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 3 13:41:00.949066 containerd[1556]: time="2026-03-03T13:41:00.946539193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 3 13:41:00.949066 containerd[1556]: time="2026-03-03T13:41:00.946558609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 3 13:41:00.949066 containerd[1556]: time="2026-03-03T13:41:00.946572846Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 3 13:41:00.949066 containerd[1556]: time="2026-03-03T13:41:00.946592984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 3 13:41:00.949066 containerd[1556]: time="2026-03-03T13:41:00.946607080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 3 13:41:00.949066 containerd[1556]: time="2026-03-03T13:41:00.946623150Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 3 13:41:00.949066 containerd[1556]: time="2026-03-03T13:41:00.946658727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 3 13:41:00.949066 containerd[1556]: time="2026-03-03T13:41:00.946675568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 3 13:41:00.949066 containerd[1556]: time="2026-03-03T13:41:00.946703710Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 3 13:41:00.949066 containerd[1556]: time="2026-03-03T13:41:00.946990386Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 13:41:00.949066 containerd[1556]: time="2026-03-03T13:41:00.947016354Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 13:41:00.949066 containerd[1556]: time="2026-03-03T13:41:00.947031452Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 13:41:00.949610 containerd[1556]: time="2026-03-03T13:41:00.947045338Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 13:41:00.949610 containerd[1556]: time="2026-03-03T13:41:00.947058924Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 3 13:41:00.949610 containerd[1556]: time="2026-03-03T13:41:00.947070225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 3 13:41:00.949610 containerd[1556]: time="2026-03-03T13:41:00.947090682Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 3 13:41:00.949610 containerd[1556]: time="2026-03-03T13:41:00.947608519Z" level=info msg="runtime interface created"
Mar 3 13:41:00.949610 containerd[1556]: time="2026-03-03T13:41:00.947622195Z" level=info msg="created NRI interface"
Mar 3 13:41:00.949610 containerd[1556]: time="2026-03-03T13:41:00.947636822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 3 13:41:00.949610 containerd[1556]: time="2026-03-03T13:41:00.947662830Z" level=info msg="Connect containerd service"
Mar 3 13:41:00.949610 containerd[1556]: time="2026-03-03T13:41:00.947695872Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 3 13:41:00.950252 containerd[1556]: time="2026-03-03T13:41:00.950126098Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 3 13:41:01.090522 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 57586 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:41:01.093736 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:41:01.213661 systemd-logind[1547]: New session 2 of user core.
Mar 3 13:41:01.222925 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 3 13:41:01.325425 sshd[1661]: Connection closed by 10.0.0.1 port 57586
Mar 3 13:41:01.328070 sshd-session[1652]: pam_unix(sshd:session): session closed for user core
Mar 3 13:41:01.351048 systemd[1]: sshd@1-10.0.0.57:22-10.0.0.1:57586.service: Deactivated successfully.
Mar 3 13:41:01.358130 systemd[1]: session-2.scope: Deactivated successfully.
Mar 3 13:41:01.363554 systemd-logind[1547]: Session 2 logged out. Waiting for processes to exit.
Mar 3 13:41:01.374780 systemd[1]: Started sshd@2-10.0.0.57:22-10.0.0.1:57600.service - OpenSSH per-connection server daemon (10.0.0.1:57600).
Mar 3 13:41:01.398723 systemd-logind[1547]: Removed session 2.
Mar 3 13:41:02.020169 tar[1554]: linux-amd64/README.md
Mar 3 13:41:02.017747 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:41:02.021458 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 57600 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:41:02.036471 systemd-logind[1547]: New session 3 of user core.
Mar 3 13:41:02.048749 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 3 13:41:02.086031 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 3 13:41:02.588441 sshd[1679]: Connection closed by 10.0.0.1 port 57600
Mar 3 13:41:02.589012 containerd[1556]: time="2026-03-03T13:41:02.588483250Z" level=info msg="Start subscribing containerd event"
Mar 3 13:41:02.589591 containerd[1556]: time="2026-03-03T13:41:02.588923322Z" level=info msg="Start recovering state"
Mar 3 13:41:02.589924 containerd[1556]: time="2026-03-03T13:41:02.589063810Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 3 13:41:02.590187 containerd[1556]: time="2026-03-03T13:41:02.590062575Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 3 13:41:02.590778 containerd[1556]: time="2026-03-03T13:41:02.590749790Z" level=info msg="Start event monitor"
Mar 3 13:41:02.591087 containerd[1556]: time="2026-03-03T13:41:02.591064288Z" level=info msg="Start cni network conf syncer for default"
Mar 3 13:41:02.591164 containerd[1556]: time="2026-03-03T13:41:02.591148825Z" level=info msg="Start streaming server"
Mar 3 13:41:02.592135 containerd[1556]: time="2026-03-03T13:41:02.592111031Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 3 13:41:02.592582 containerd[1556]: time="2026-03-03T13:41:02.592560350Z" level=info msg="runtime interface starting up..."
Mar 3 13:41:02.592739 containerd[1556]: time="2026-03-03T13:41:02.592718495Z" level=info msg="starting plugins..."
Mar 3 13:41:02.593200 containerd[1556]: time="2026-03-03T13:41:02.593177352Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 3 13:41:02.595219 sshd-session[1669]: pam_unix(sshd:session): session closed for user core
Mar 3 13:41:02.596533 systemd[1]: Started containerd.service - containerd container runtime.
Mar 3 13:41:02.597007 containerd[1556]: time="2026-03-03T13:41:02.596986271Z" level=info msg="containerd successfully booted in 2.098252s"
Mar 3 13:41:02.610100 systemd[1]: sshd@2-10.0.0.57:22-10.0.0.1:57600.service: Deactivated successfully.
Mar 3 13:41:02.615533 systemd[1]: session-3.scope: Deactivated successfully.
Mar 3 13:41:02.624712 systemd-logind[1547]: Session 3 logged out. Waiting for processes to exit.
Mar 3 13:41:02.630203 systemd-logind[1547]: Removed session 3.
Mar 3 13:41:06.760093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:41:06.761441 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 3 13:41:06.764049 systemd[1]: Startup finished in 9.137s (kernel) + 21.231s (initrd) + 21.784s (userspace) = 52.152s.
Mar 3 13:41:06.794218 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:41:09.432736 kubelet[1694]: E0303 13:41:09.431732 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:41:09.439847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:41:09.440715 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:41:09.441785 systemd[1]: kubelet.service: Consumed 8.880s CPU time, 259.2M memory peak.
Mar 3 13:41:12.691871 systemd[1]: Started sshd@3-10.0.0.57:22-10.0.0.1:42482.service - OpenSSH per-connection server daemon (10.0.0.1:42482).
Mar 3 13:41:12.985628 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 42482 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:41:12.988662 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:41:13.006206 systemd-logind[1547]: New session 4 of user core.
Mar 3 13:41:13.023848 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 3 13:41:13.064104 sshd[1706]: Connection closed by 10.0.0.1 port 42482
Mar 3 13:41:13.064856 sshd-session[1703]: pam_unix(sshd:session): session closed for user core
Mar 3 13:41:13.081681 systemd[1]: sshd@3-10.0.0.57:22-10.0.0.1:42482.service: Deactivated successfully.
Mar 3 13:41:13.085058 systemd[1]: session-4.scope: Deactivated successfully.
Mar 3 13:41:13.088186 systemd-logind[1547]: Session 4 logged out. Waiting for processes to exit.
Mar 3 13:41:13.092652 systemd[1]: Started sshd@4-10.0.0.57:22-10.0.0.1:42498.service - OpenSSH per-connection server daemon (10.0.0.1:42498).
Mar 3 13:41:13.096131 systemd-logind[1547]: Removed session 4.
Mar 3 13:41:13.188576 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 42498 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:41:13.190994 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:41:13.202689 systemd-logind[1547]: New session 5 of user core.
Mar 3 13:41:13.220767 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 3 13:41:13.247097 sshd[1716]: Connection closed by 10.0.0.1 port 42498
Mar 3 13:41:13.247226 sshd-session[1712]: pam_unix(sshd:session): session closed for user core
Mar 3 13:41:13.256988 systemd[1]: sshd@4-10.0.0.57:22-10.0.0.1:42498.service: Deactivated successfully.
Mar 3 13:41:13.260246 systemd[1]: session-5.scope: Deactivated successfully.
Mar 3 13:41:13.262795 systemd-logind[1547]: Session 5 logged out. Waiting for processes to exit.
Mar 3 13:41:13.266185 systemd[1]: Started sshd@5-10.0.0.57:22-10.0.0.1:42510.service - OpenSSH per-connection server daemon (10.0.0.1:42510).
Mar 3 13:41:13.269722 systemd-logind[1547]: Removed session 5.
Mar 3 13:41:13.381011 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 42510 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:41:13.384568 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:41:13.398255 systemd-logind[1547]: New session 6 of user core.
Mar 3 13:41:13.415588 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 3 13:41:13.453636 sshd[1725]: Connection closed by 10.0.0.1 port 42510
Mar 3 13:41:13.454042 sshd-session[1722]: pam_unix(sshd:session): session closed for user core
Mar 3 13:41:13.469821 systemd[1]: sshd@5-10.0.0.57:22-10.0.0.1:42510.service: Deactivated successfully.
Mar 3 13:41:13.472665 systemd[1]: session-6.scope: Deactivated successfully.
Mar 3 13:41:13.475033 systemd-logind[1547]: Session 6 logged out. Waiting for processes to exit.
Mar 3 13:41:13.479058 systemd[1]: Started sshd@6-10.0.0.57:22-10.0.0.1:42522.service - OpenSSH per-connection server daemon (10.0.0.1:42522).
Mar 3 13:41:13.483989 systemd-logind[1547]: Removed session 6.
Mar 3 13:41:13.581446 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 42522 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:41:13.584137 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:41:13.599681 systemd-logind[1547]: New session 7 of user core.
Mar 3 13:41:13.606843 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 3 13:41:13.687848 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 3 13:41:13.688690 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:41:13.735766 sudo[1736]: pam_unix(sudo:session): session closed for user root
Mar 3 13:41:13.744529 sshd[1735]: Connection closed by 10.0.0.1 port 42522
Mar 3 13:41:13.745461 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
Mar 3 13:41:13.782123 systemd[1]: sshd@6-10.0.0.57:22-10.0.0.1:42522.service: Deactivated successfully.
Mar 3 13:41:13.788833 systemd[1]: session-7.scope: Deactivated successfully.
Mar 3 13:41:13.791745 systemd-logind[1547]: Session 7 logged out. Waiting for processes to exit.
Mar 3 13:41:13.818723 systemd[1]: Started sshd@7-10.0.0.57:22-10.0.0.1:42530.service - OpenSSH per-connection server daemon (10.0.0.1:42530).
Mar 3 13:41:13.823188 systemd-logind[1547]: Removed session 7.
Mar 3 13:41:14.060745 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 42530 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:41:14.068088 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:41:14.087842 systemd-logind[1547]: New session 8 of user core.
Mar 3 13:41:14.110785 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 3 13:41:14.259020 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 3 13:41:14.262116 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:41:14.321505 sudo[1747]: pam_unix(sudo:session): session closed for user root
Mar 3 13:41:14.432985 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 3 13:41:14.435771 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:41:14.500443 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 3 13:41:15.302404 augenrules[1769]: No rules
Mar 3 13:41:15.312464 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 3 13:41:15.314644 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 3 13:41:15.325722 sudo[1746]: pam_unix(sudo:session): session closed for user root
Mar 3 13:41:15.346852 sshd[1745]: Connection closed by 10.0.0.1 port 42530
Mar 3 13:41:15.349876 sshd-session[1742]: pam_unix(sshd:session): session closed for user core
Mar 3 13:41:15.375855 systemd[1]: Started sshd@8-10.0.0.57:22-10.0.0.1:42546.service - OpenSSH per-connection server daemon (10.0.0.1:42546).
Mar 3 13:41:15.387574 systemd[1]: sshd@7-10.0.0.57:22-10.0.0.1:42530.service: Deactivated successfully.
Mar 3 13:41:15.425091 systemd[1]: session-8.scope: Deactivated successfully.
Mar 3 13:41:15.497483 systemd-logind[1547]: Session 8 logged out. Waiting for processes to exit.
Mar 3 13:41:15.565732 systemd-logind[1547]: Removed session 8.
Mar 3 13:41:15.742882 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 42546 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:41:15.750670 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:41:15.879519 systemd-logind[1547]: New session 9 of user core.
Mar 3 13:41:15.919238 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 3 13:41:16.045837 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 3 13:41:16.052080 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:41:19.447148 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 3 13:41:19.461652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:41:25.659827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:41:25.786627 (kubelet)[1810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:41:27.853676 kubelet[1810]: E0303 13:41:27.852174 1810 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:41:27.870101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:41:27.870986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:41:27.873261 systemd[1]: kubelet.service: Consumed 6.902s CPU time, 110.4M memory peak.
Mar 3 13:41:29.375767 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 3 13:41:29.470164 (dockerd)[1820]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 3 13:41:35.222191 dockerd[1820]: time="2026-03-03T13:41:35.220825640Z" level=info msg="Starting up"
Mar 3 13:41:35.231989 dockerd[1820]: time="2026-03-03T13:41:35.231862253Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 3 13:41:35.519846 dockerd[1820]: time="2026-03-03T13:41:35.518904502Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 3 13:41:35.697137 dockerd[1820]: time="2026-03-03T13:41:35.696887797Z" level=info msg="Loading containers: start."
Mar 3 13:41:35.730229 kernel: Initializing XFRM netlink socket
Mar 3 13:41:37.977679 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 3 13:41:38.040100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:41:38.804223 systemd-networkd[1442]: docker0: Link UP
Mar 3 13:41:38.887199 dockerd[1820]: time="2026-03-03T13:41:38.886195520Z" level=info msg="Loading containers: done."
Mar 3 13:41:39.146610 dockerd[1820]: time="2026-03-03T13:41:39.145969193Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 3 13:41:39.147856 dockerd[1820]: time="2026-03-03T13:41:39.147462274Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 3 13:41:39.148204 dockerd[1820]: time="2026-03-03T13:41:39.147981273Z" level=info msg="Initializing buildkit"
Mar 3 13:41:39.290178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:41:39.329996 (kubelet)[2027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:41:39.342484 dockerd[1820]: time="2026-03-03T13:41:39.341908938Z" level=info msg="Completed buildkit initialization"
Mar 3 13:41:39.379243 dockerd[1820]: time="2026-03-03T13:41:39.379185492Z" level=info msg="Daemon has completed initialization"
Mar 3 13:41:39.380599 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 3 13:41:39.381507 dockerd[1820]: time="2026-03-03T13:41:39.380700020Z" level=info msg="API listen on /run/docker.sock"
Mar 3 13:41:40.905908 kubelet[2027]: E0303 13:41:40.904776 2027 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:41:40.913091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:41:40.913855 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:41:40.915155 systemd[1]: kubelet.service: Consumed 2.681s CPU time, 113.6M memory peak.
Mar 3 13:41:42.652705 update_engine[1548]: I20260303 13:41:42.634253 1548 update_attempter.cc:509] Updating boot flags...
Mar 3 13:41:46.732597 containerd[1556]: time="2026-03-03T13:41:46.732423214Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 3 13:41:48.115076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1055393519.mount: Deactivated successfully.
Mar 3 13:41:51.037703 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 3 13:41:51.042943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:41:51.720376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:41:51.745063 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:41:52.483939 kubelet[2141]: E0303 13:41:52.483358 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:41:52.493742 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:41:52.494037 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:41:52.495343 systemd[1]: kubelet.service: Consumed 1.076s CPU time, 112.3M memory peak.
Mar 3 13:41:53.755457 containerd[1556]: time="2026-03-03T13:41:53.754891690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:41:53.762908 containerd[1556]: time="2026-03-03T13:41:53.756535991Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497"
Mar 3 13:41:53.774541 containerd[1556]: time="2026-03-03T13:41:53.774457252Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:41:53.782380 containerd[1556]: time="2026-03-03T13:41:53.782074832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:41:53.784027 containerd[1556]: time="2026-03-03T13:41:53.783921465Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 7.051409468s"
Mar 3 13:41:53.784027 containerd[1556]: time="2026-03-03T13:41:53.783957997Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\""
Mar 3 13:41:53.790721 containerd[1556]: time="2026-03-03T13:41:53.790583394Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 3 13:41:58.265113 containerd[1556]: time="2026-03-03T13:41:58.264541318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:41:58.267576 containerd[1556]: time="2026-03-03T13:41:58.267207014Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823"
Mar 3 13:41:58.273404 containerd[1556]: time="2026-03-03T13:41:58.272837015Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:41:58.279528 containerd[1556]: time="2026-03-03T13:41:58.279262506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:41:58.280872 containerd[1556]: time="2026-03-03T13:41:58.280722566Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 4.490048197s"
Mar 3 13:41:58.280872 containerd[1556]: time="2026-03-03T13:41:58.280816916Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\""
Mar 3 13:41:58.284412 containerd[1556]: time="2026-03-03T13:41:58.283691684Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 3 13:42:02.708643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 3 13:42:02.753021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:42:02.854146 containerd[1556]: time="2026-03-03T13:42:02.852956956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:42:02.871764 containerd[1556]: time="2026-03-03T13:42:02.862791293Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824"
Mar 3 13:42:02.899649 containerd[1556]: time="2026-03-03T13:42:02.897830214Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:42:03.017642 containerd[1556]: time="2026-03-03T13:42:03.016935337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:42:03.023226 containerd[1556]: time="2026-03-03T13:42:03.023132613Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 4.739403589s"
Mar 3 13:42:03.023483 containerd[1556]: time="2026-03-03T13:42:03.023410371Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\""
Mar 3 13:42:03.027486 containerd[1556]: time="2026-03-03T13:42:03.027421677Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 3 13:42:03.978872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:42:03.999124 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:42:04.280558 kubelet[2166]: E0303 13:42:04.279719 2166 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:42:04.287153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:42:04.287825 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:42:04.289447 systemd[1]: kubelet.service: Consumed 1.213s CPU time, 109.1M memory peak.
Mar 3 13:42:07.115619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3149883064.mount: Deactivated successfully.
Mar 3 13:42:09.448603 containerd[1556]: time="2026-03-03T13:42:09.447710412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:42:09.448603 containerd[1556]: time="2026-03-03T13:42:09.448766256Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770"
Mar 3 13:42:09.452957 containerd[1556]: time="2026-03-03T13:42:09.452811992Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:42:09.483748 containerd[1556]: time="2026-03-03T13:42:09.483468076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:42:09.485148 containerd[1556]: time="2026-03-03T13:42:09.484838158Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 6.457372436s"
Mar 3 13:42:09.485148 containerd[1556]: time="2026-03-03T13:42:09.484932666Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\""
Mar 3 13:42:09.489062 containerd[1556]: time="2026-03-03T13:42:09.488829972Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 3 13:42:10.072916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1731411621.mount: Deactivated successfully.
Mar 3 13:42:14.478659 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 3 13:42:14.498855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:42:15.391440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:42:15.402976 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:42:15.619543 kubelet[2243]: E0303 13:42:15.619473 2243 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:42:15.627874 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:42:15.628134 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:42:15.628825 systemd[1]: kubelet.service: Consumed 841ms CPU time, 110.2M memory peak.
Mar 3 13:42:16.427517 containerd[1556]: time="2026-03-03T13:42:16.426861743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:42:16.428947 containerd[1556]: time="2026-03-03T13:42:16.427708132Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Mar 3 13:42:16.434962 containerd[1556]: time="2026-03-03T13:42:16.434787336Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:42:16.444001 containerd[1556]: time="2026-03-03T13:42:16.443906420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:42:16.446019 containerd[1556]: time="2026-03-03T13:42:16.445830162Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 6.956919829s"
Mar 3 13:42:16.446019 containerd[1556]: time="2026-03-03T13:42:16.445897749Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Mar 3 13:42:16.449052 containerd[1556]: time="2026-03-03T13:42:16.448959508Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 3 13:42:17.324043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2862097733.mount: Deactivated successfully.
Mar 3 13:42:17.335163 containerd[1556]: time="2026-03-03T13:42:17.335012526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:42:17.337536 containerd[1556]: time="2026-03-03T13:42:17.337193842Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 3 13:42:17.339725 containerd[1556]: time="2026-03-03T13:42:17.339560491Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:42:17.342948 containerd[1556]: time="2026-03-03T13:42:17.342781041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:42:17.343457 containerd[1556]: time="2026-03-03T13:42:17.343248114Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 894.221249ms"
Mar 3 13:42:17.343457 containerd[1556]: time="2026-03-03T13:42:17.343402894Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 3 13:42:17.347425 containerd[1556]: time="2026-03-03T13:42:17.347043559Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 3 13:42:18.151453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2158041179.mount: Deactivated successfully.
Mar 3 13:42:21.770243 containerd[1556]: time="2026-03-03T13:42:21.769738420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:42:21.772236 containerd[1556]: time="2026-03-03T13:42:21.771870382Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674"
Mar 3 13:42:21.773946 containerd[1556]: time="2026-03-03T13:42:21.773850888Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:42:21.779813 containerd[1556]: time="2026-03-03T13:42:21.779776433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:42:21.782424 containerd[1556]: time="2026-03-03T13:42:21.782087292Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 4.434958262s"
Mar 3 13:42:21.782424 containerd[1556]: time="2026-03-03T13:42:21.782219481Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 3 13:42:25.707908 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 3 13:42:25.717612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:42:26.418035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:42:26.456735 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:42:27.259568 kubelet[2351]: E0303 13:42:27.253637 2351 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:42:27.292435 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:42:27.294026 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:42:27.299696 systemd[1]: kubelet.service: Consumed 1.364s CPU time, 110.1M memory peak.
Mar 3 13:42:29.116051 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:42:29.116687 systemd[1]: kubelet.service: Consumed 1.364s CPU time, 110.1M memory peak.
Mar 3 13:42:29.123595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:42:29.204161 systemd[1]: Reload requested from client PID 2367 ('systemctl') (unit session-9.scope)...
Mar 3 13:42:29.204184 systemd[1]: Reloading...
Mar 3 13:42:29.378460 zram_generator::config[2413]: No configuration found.
Mar 3 13:42:29.724225 systemd[1]: Reloading finished in 519 ms.
Mar 3 13:42:29.838698 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 3 13:42:29.838996 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 3 13:42:29.839794 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:42:29.839940 systemd[1]: kubelet.service: Consumed 272ms CPU time, 98.4M memory peak.
Mar 3 13:42:29.843522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:42:30.498527 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:42:30.519079 (kubelet)[2458]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 3 13:42:30.690910 kubelet[2458]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 3 13:42:30.690910 kubelet[2458]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 3 13:42:30.691583 kubelet[2458]: I0303 13:42:30.691047 2458 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 3 13:42:31.367803 kubelet[2458]: I0303 13:42:31.367704 2458 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 3 13:42:31.367803 kubelet[2458]: I0303 13:42:31.367777 2458 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 3 13:42:31.367803 kubelet[2458]: I0303 13:42:31.367806 2458 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 3 13:42:31.368030 kubelet[2458]: I0303 13:42:31.367873 2458 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 3 13:42:31.368154 kubelet[2458]: I0303 13:42:31.368063 2458 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 3 13:42:31.524225 kubelet[2458]: E0303 13:42:31.523669 2458 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 3 13:42:31.525640 kubelet[2458]: I0303 13:42:31.524909 2458 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 3 13:42:31.974254 kubelet[2458]: I0303 13:42:31.961768 2458 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 3 13:42:32.029948 kubelet[2458]: I0303 13:42:32.029233 2458 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 3 13:42:32.033429 kubelet[2458]: I0303 13:42:32.033123 2458 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 3 13:42:32.033637 kubelet[2458]: I0303 13:42:32.033219 2458 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 3 13:42:32.033637 kubelet[2458]: I0303 13:42:32.033634 2458 topology_manager.go:138] "Creating topology manager with none policy"
Mar 3 13:42:32.034524 kubelet[2458]: I0303 13:42:32.033648 2458 container_manager_linux.go:306] "Creating device plugin manager"
Mar 3 13:42:32.034524 kubelet[2458]: I0303 13:42:32.034256 2458 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 3 13:42:32.040029 kubelet[2458]: I0303 13:42:32.039927 2458 state_mem.go:36] "Initialized new in-memory state store"
Mar 3 13:42:32.040693 kubelet[2458]: I0303 13:42:32.040613 2458 kubelet.go:475] "Attempting to sync node with API server"
Mar 3 13:42:32.040957 kubelet[2458]: I0303 13:42:32.040772 2458 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 3 13:42:32.041233 kubelet[2458]: I0303 13:42:32.041144 2458 kubelet.go:387] "Adding apiserver pod source"
Mar 3 13:42:32.041233 kubelet[2458]: I0303 13:42:32.041218 2458 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 3 13:42:32.045439 kubelet[2458]: E0303 13:42:32.044924 2458 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 3 13:42:32.045439 kubelet[2458]: E0303 13:42:32.044924 2458 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 3 13:42:32.052431 kubelet[2458]: I0303 13:42:32.050757 2458 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 3 13:42:32.052431 kubelet[2458]: I0303 13:42:32.051933 2458 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 3 13:42:32.052431 kubelet[2458]: I0303 13:42:32.051967 2458 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 3 13:42:32.052431 kubelet[2458]: W0303 13:42:32.052028 2458 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 3 13:42:32.059250 kubelet[2458]: I0303 13:42:32.059150 2458 server.go:1262] "Started kubelet"
Mar 3 13:42:32.061026 kubelet[2458]: I0303 13:42:32.059639 2458 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 3 13:42:32.061026 kubelet[2458]: I0303 13:42:32.059703 2458 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 3 13:42:32.061026 kubelet[2458]: I0303 13:42:32.059759 2458 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 3 13:42:32.061026 kubelet[2458]: I0303 13:42:32.060180 2458 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 3 13:42:32.064169 kubelet[2458]: I0303 13:42:32.063121 2458 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 3 13:42:32.072085 kubelet[2458]: I0303 13:42:32.071983 2458 server.go:310] "Adding debug handlers to kubelet server"
Mar 3 13:42:32.076025 kubelet[2458]: I0303 13:42:32.075190 2458 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 3 13:42:32.077163 kubelet[2458]: E0303 13:42:32.074519 2458 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.57:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.57:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1899589b39c001c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-03 13:42:32.059060673 +0000 UTC m=+1.502059596,LastTimestamp:2026-03-03 13:42:32.059060673 +0000 UTC m=+1.502059596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 3 13:42:32.078744 kubelet[2458]: E0303 13:42:32.078622 2458 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 3 13:42:32.078744 kubelet[2458]: I0303 13:42:32.078721 2458 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 3 13:42:32.080255 kubelet[2458]: I0303 13:42:32.078966 2458 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 3 13:42:32.080255 kubelet[2458]: I0303 13:42:32.079164 2458 reconciler.go:29] "Reconciler: start to sync state"
Mar 3 13:42:32.081928 kubelet[2458]: E0303 13:42:32.081778 2458 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="200ms"
Mar 3 13:42:32.082944 kubelet[2458]: E0303 13:42:32.082739 2458 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 3 13:42:32.084552 kubelet[2458]: I0303 13:42:32.084471 2458 factory.go:223] Registration of the systemd container factory successfully
Mar 3 13:42:32.084623 kubelet[2458]: I0303 13:42:32.084597 2458 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 3 13:42:32.085759 kubelet[2458]: E0303 13:42:32.085626 2458 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 3 13:42:32.089112 kubelet[2458]: I0303 13:42:32.088743 2458 factory.go:223] Registration of the containerd container factory successfully
Mar 3 13:42:32.124142 kubelet[2458]: I0303 13:42:32.123814 2458 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 3 13:42:32.124142 kubelet[2458]: I0303 13:42:32.123982 2458 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 3 13:42:32.124142 kubelet[2458]: I0303 13:42:32.124005 2458 state_mem.go:36] "Initialized new in-memory state store"
Mar 3 13:42:32.128535 kubelet[2458]: I0303 13:42:32.128475 2458 policy_none.go:49] "None policy: Start"
Mar 3 13:42:32.128535 kubelet[2458]: I0303 13:42:32.128498 2458 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 3 13:42:32.128535 kubelet[2458]: I0303 13:42:32.128515 2458 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 3 13:42:32.131034 kubelet[2458]: I0303 13:42:32.130766 2458 policy_none.go:47] "Start"
Mar 3 13:42:32.145414 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 3 13:42:32.146581 kubelet[2458]: I0303 13:42:32.146482 2458 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 3 13:42:32.152666 kubelet[2458]: I0303 13:42:32.152634 2458 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 3 13:42:32.153719 kubelet[2458]: I0303 13:42:32.153245 2458 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 3 13:42:32.153719 kubelet[2458]: I0303 13:42:32.153473 2458 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 3 13:42:32.153719 kubelet[2458]: E0303 13:42:32.153536 2458 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 3 13:42:32.154067 kubelet[2458]: E0303 13:42:32.154040 2458 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 3 13:42:32.162728 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 3 13:42:32.179232 kubelet[2458]: E0303 13:42:32.178897 2458 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 3 13:42:32.179553 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 3 13:42:32.200708 kubelet[2458]: E0303 13:42:32.200678 2458 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 3 13:42:32.201951 kubelet[2458]: I0303 13:42:32.201929 2458 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 3 13:42:32.202090 kubelet[2458]: I0303 13:42:32.202047 2458 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 3 13:42:32.204070 kubelet[2458]: E0303 13:42:32.203923 2458 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 3 13:42:32.204070 kubelet[2458]: E0303 13:42:32.203978 2458 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 3 13:42:32.204735 kubelet[2458]: I0303 13:42:32.204632 2458 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 3 13:42:32.279966 kubelet[2458]: I0303 13:42:32.279763 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 3 13:42:32.285079 kubelet[2458]: I0303 13:42:32.282461 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c510be2292c7dc746f7ba53a3e68ba5b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c510be2292c7dc746f7ba53a3e68ba5b\") " pod="kube-system/kube-apiserver-localhost"
Mar 3 13:42:32.285079 kubelet[2458]: I0303 13:42:32.282591 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c510be2292c7dc746f7ba53a3e68ba5b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c510be2292c7dc746f7ba53a3e68ba5b\") " pod="kube-system/kube-apiserver-localhost"
Mar 3 13:42:32.285079 kubelet[2458]: I0303 13:42:32.282625 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:42:32.285079 kubelet[2458]: I0303 13:42:32.282944 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:42:32.285079 kubelet[2458]: I0303 13:42:32.282973 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c510be2292c7dc746f7ba53a3e68ba5b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c510be2292c7dc746f7ba53a3e68ba5b\") " pod="kube-system/kube-apiserver-localhost"
Mar 3 13:42:32.285512 kubelet[2458]: I0303 13:42:32.282994 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:42:32.285512 kubelet[2458]: I0303 13:42:32.283240 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:42:32.285512 kubelet[2458]: I0303 13:42:32.283411 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:42:32.285512 kubelet[2458]:
E0303 13:42:32.284624 2458 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="400ms" Mar 3 13:42:32.290486 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. Mar 3 13:42:32.302182 kubelet[2458]: E0303 13:42:32.301917 2458 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:42:32.306778 systemd[1]: Created slice kubepods-burstable-podc510be2292c7dc746f7ba53a3e68ba5b.slice - libcontainer container kubepods-burstable-podc510be2292c7dc746f7ba53a3e68ba5b.slice. Mar 3 13:42:32.308117 kubelet[2458]: I0303 13:42:32.306998 2458 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:42:32.308117 kubelet[2458]: E0303 13:42:32.307255 2458 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Mar 3 13:42:32.311693 kubelet[2458]: E0303 13:42:32.311613 2458 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:42:32.316895 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. 
Mar 3 13:42:32.322024 kubelet[2458]: E0303 13:42:32.321811 2458 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:42:32.511221 kubelet[2458]: I0303 13:42:32.510959 2458 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:42:32.512080 kubelet[2458]: E0303 13:42:32.511765 2458 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Mar 3 13:42:32.614725 kubelet[2458]: E0303 13:42:32.613255 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:32.616578 containerd[1556]: time="2026-03-03T13:42:32.615921527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 3 13:42:32.621096 kubelet[2458]: E0303 13:42:32.620662 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:32.622112 containerd[1556]: time="2026-03-03T13:42:32.621707291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c510be2292c7dc746f7ba53a3e68ba5b,Namespace:kube-system,Attempt:0,}" Mar 3 13:42:32.629820 kubelet[2458]: E0303 13:42:32.629782 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:32.631198 containerd[1556]: time="2026-03-03T13:42:32.631052687Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 3 13:42:32.686112 kubelet[2458]: E0303 13:42:32.685750 2458 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="800ms" Mar 3 13:42:32.914901 kubelet[2458]: I0303 13:42:32.914505 2458 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:42:32.915037 kubelet[2458]: E0303 13:42:32.914968 2458 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Mar 3 13:42:33.115612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2742243744.mount: Deactivated successfully. Mar 3 13:42:33.166050 containerd[1556]: time="2026-03-03T13:42:33.165739119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:42:33.170582 kubelet[2458]: E0303 13:42:33.170158 2458 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 3 13:42:33.171945 containerd[1556]: time="2026-03-03T13:42:33.171811224Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 3 13:42:33.176116 containerd[1556]: time="2026-03-03T13:42:33.175918544Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:42:33.181245 containerd[1556]: time="2026-03-03T13:42:33.180748682Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:42:33.183027 containerd[1556]: time="2026-03-03T13:42:33.182974029Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:42:33.184576 containerd[1556]: time="2026-03-03T13:42:33.184392639Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 3 13:42:33.186688 containerd[1556]: time="2026-03-03T13:42:33.186522259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 3 13:42:33.188573 containerd[1556]: time="2026-03-03T13:42:33.188430586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:42:33.193900 containerd[1556]: time="2026-03-03T13:42:33.193696104Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 573.46709ms" Mar 3 13:42:33.198165 containerd[1556]: time="2026-03-03T13:42:33.197108347Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 561.311997ms" Mar 3 13:42:33.199779 containerd[1556]: time="2026-03-03T13:42:33.199640433Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 568.694459ms" Mar 3 13:42:33.202905 kubelet[2458]: E0303 13:42:33.202736 2458 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 3 13:42:33.355445 containerd[1556]: time="2026-03-03T13:42:33.354947290Z" level=info msg="connecting to shim 8f0c8dded7e168349d163645487241449028ae51483a42e465a184d4eb86c3ec" address="unix:///run/containerd/s/b2673d31ff430dd3feb1b0b8542f91e563484f174ca144c5daada99f53aa8d92" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:42:33.355445 containerd[1556]: time="2026-03-03T13:42:33.353254697Z" level=info msg="connecting to shim 215d020b35ad27683d504e414cce471227202edb73b42f1b2e0dfdc9776b6664" address="unix:///run/containerd/s/4e5c6c3ad22e9cb59a2431902eb2d42e51e9ec8934537f9ee904e577c9d63af6" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:42:33.371156 kubelet[2458]: E0303 13:42:33.370999 2458 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Mar 3 13:42:33.483213 containerd[1556]: time="2026-03-03T13:42:33.481479645Z" level=info msg="connecting to shim 7cba8fec01987baa9de9904b9baa73d738fcb507f344d41d78dc67d6d85d6c9e" address="unix:///run/containerd/s/27b2ca8b4d63c9214378e2ac688e51630f4e53452ec9f795ee35cd6e8fad5b9c" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:42:33.504819 kubelet[2458]: E0303 13:42:33.501994 2458 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="1.6s" Mar 3 13:42:33.595735 kubelet[2458]: E0303 13:42:33.593966 2458 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 3 13:42:33.653203 kubelet[2458]: E0303 13:42:33.653130 2458 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 3 13:42:33.708431 systemd[1]: Started cri-containerd-8f0c8dded7e168349d163645487241449028ae51483a42e465a184d4eb86c3ec.scope - libcontainer container 8f0c8dded7e168349d163645487241449028ae51483a42e465a184d4eb86c3ec. 
Mar 3 13:42:33.728427 kubelet[2458]: I0303 13:42:33.727821 2458 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:42:33.733611 kubelet[2458]: E0303 13:42:33.733174 2458 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Mar 3 13:42:33.783647 systemd[1]: Started cri-containerd-215d020b35ad27683d504e414cce471227202edb73b42f1b2e0dfdc9776b6664.scope - libcontainer container 215d020b35ad27683d504e414cce471227202edb73b42f1b2e0dfdc9776b6664. Mar 3 13:42:33.821606 systemd[1]: Started cri-containerd-7cba8fec01987baa9de9904b9baa73d738fcb507f344d41d78dc67d6d85d6c9e.scope - libcontainer container 7cba8fec01987baa9de9904b9baa73d738fcb507f344d41d78dc67d6d85d6c9e. Mar 3 13:42:34.202745 containerd[1556]: time="2026-03-03T13:42:34.202541700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"215d020b35ad27683d504e414cce471227202edb73b42f1b2e0dfdc9776b6664\"" Mar 3 13:42:34.211107 containerd[1556]: time="2026-03-03T13:42:34.211065888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c510be2292c7dc746f7ba53a3e68ba5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cba8fec01987baa9de9904b9baa73d738fcb507f344d41d78dc67d6d85d6c9e\"" Mar 3 13:42:34.216625 kubelet[2458]: E0303 13:42:34.215656 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:34.219965 kubelet[2458]: E0303 13:42:34.219707 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:34.248143 containerd[1556]: 
time="2026-03-03T13:42:34.248091643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f0c8dded7e168349d163645487241449028ae51483a42e465a184d4eb86c3ec\"" Mar 3 13:42:34.251161 kubelet[2458]: E0303 13:42:34.251129 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:34.261769 containerd[1556]: time="2026-03-03T13:42:34.259825686Z" level=info msg="CreateContainer within sandbox \"215d020b35ad27683d504e414cce471227202edb73b42f1b2e0dfdc9776b6664\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 3 13:42:34.266761 containerd[1556]: time="2026-03-03T13:42:34.266405979Z" level=info msg="CreateContainer within sandbox \"7cba8fec01987baa9de9904b9baa73d738fcb507f344d41d78dc67d6d85d6c9e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 3 13:42:34.272348 containerd[1556]: time="2026-03-03T13:42:34.272199824Z" level=info msg="CreateContainer within sandbox \"8f0c8dded7e168349d163645487241449028ae51483a42e465a184d4eb86c3ec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 3 13:42:34.311016 containerd[1556]: time="2026-03-03T13:42:34.310804458Z" level=info msg="Container 615bf8e4847f6ad1ab8906adb6bf0b10046534849d1df941acc08c8b66becf14: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:42:34.311157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1803291973.mount: Deactivated successfully. 
Mar 3 13:42:34.321021 containerd[1556]: time="2026-03-03T13:42:34.320756302Z" level=info msg="Container 5918187958a91ca423d38b4f7ebe5c6f343142100aff71d450177f6f489f8814: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:42:34.327260 containerd[1556]: time="2026-03-03T13:42:34.327221340Z" level=info msg="Container d325a1ca2bdccd655390caa116252ba2fc584c85b4281447b5615e395d24eb99: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:42:34.336793 containerd[1556]: time="2026-03-03T13:42:34.336607425Z" level=info msg="CreateContainer within sandbox \"215d020b35ad27683d504e414cce471227202edb73b42f1b2e0dfdc9776b6664\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"615bf8e4847f6ad1ab8906adb6bf0b10046534849d1df941acc08c8b66becf14\"" Mar 3 13:42:34.338637 containerd[1556]: time="2026-03-03T13:42:34.338465046Z" level=info msg="StartContainer for \"615bf8e4847f6ad1ab8906adb6bf0b10046534849d1df941acc08c8b66becf14\"" Mar 3 13:42:34.342391 containerd[1556]: time="2026-03-03T13:42:34.342226253Z" level=info msg="connecting to shim 615bf8e4847f6ad1ab8906adb6bf0b10046534849d1df941acc08c8b66becf14" address="unix:///run/containerd/s/4e5c6c3ad22e9cb59a2431902eb2d42e51e9ec8934537f9ee904e577c9d63af6" protocol=ttrpc version=3 Mar 3 13:42:34.343954 containerd[1556]: time="2026-03-03T13:42:34.343549976Z" level=info msg="CreateContainer within sandbox \"7cba8fec01987baa9de9904b9baa73d738fcb507f344d41d78dc67d6d85d6c9e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5918187958a91ca423d38b4f7ebe5c6f343142100aff71d450177f6f489f8814\"" Mar 3 13:42:34.347029 containerd[1556]: time="2026-03-03T13:42:34.346996064Z" level=info msg="StartContainer for \"5918187958a91ca423d38b4f7ebe5c6f343142100aff71d450177f6f489f8814\"" Mar 3 13:42:34.349528 containerd[1556]: time="2026-03-03T13:42:34.349498700Z" level=info msg="connecting to shim 5918187958a91ca423d38b4f7ebe5c6f343142100aff71d450177f6f489f8814" 
address="unix:///run/containerd/s/27b2ca8b4d63c9214378e2ac688e51630f4e53452ec9f795ee35cd6e8fad5b9c" protocol=ttrpc version=3 Mar 3 13:42:34.364751 containerd[1556]: time="2026-03-03T13:42:34.364109462Z" level=info msg="CreateContainer within sandbox \"8f0c8dded7e168349d163645487241449028ae51483a42e465a184d4eb86c3ec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d325a1ca2bdccd655390caa116252ba2fc584c85b4281447b5615e395d24eb99\"" Mar 3 13:42:34.366214 containerd[1556]: time="2026-03-03T13:42:34.365171867Z" level=info msg="StartContainer for \"d325a1ca2bdccd655390caa116252ba2fc584c85b4281447b5615e395d24eb99\"" Mar 3 13:42:34.367441 containerd[1556]: time="2026-03-03T13:42:34.367159771Z" level=info msg="connecting to shim d325a1ca2bdccd655390caa116252ba2fc584c85b4281447b5615e395d24eb99" address="unix:///run/containerd/s/b2673d31ff430dd3feb1b0b8542f91e563484f174ca144c5daada99f53aa8d92" protocol=ttrpc version=3 Mar 3 13:42:34.401787 systemd[1]: Started cri-containerd-5918187958a91ca423d38b4f7ebe5c6f343142100aff71d450177f6f489f8814.scope - libcontainer container 5918187958a91ca423d38b4f7ebe5c6f343142100aff71d450177f6f489f8814. 
Mar 3 13:42:34.431490 kubelet[2458]: E0303 13:42:34.429391 2458 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.57:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.57:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1899589b39c001c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-03 13:42:32.059060673 +0000 UTC m=+1.502059596,LastTimestamp:2026-03-03 13:42:32.059060673 +0000 UTC m=+1.502059596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 3 13:42:34.442550 systemd[1]: Started cri-containerd-615bf8e4847f6ad1ab8906adb6bf0b10046534849d1df941acc08c8b66becf14.scope - libcontainer container 615bf8e4847f6ad1ab8906adb6bf0b10046534849d1df941acc08c8b66becf14. Mar 3 13:42:34.490003 systemd[1]: Started cri-containerd-d325a1ca2bdccd655390caa116252ba2fc584c85b4281447b5615e395d24eb99.scope - libcontainer container d325a1ca2bdccd655390caa116252ba2fc584c85b4281447b5615e395d24eb99. 
Mar 3 13:42:34.662439 containerd[1556]: time="2026-03-03T13:42:34.662068855Z" level=info msg="StartContainer for \"615bf8e4847f6ad1ab8906adb6bf0b10046534849d1df941acc08c8b66becf14\" returns successfully" Mar 3 13:42:34.684226 containerd[1556]: time="2026-03-03T13:42:34.684187866Z" level=info msg="StartContainer for \"5918187958a91ca423d38b4f7ebe5c6f343142100aff71d450177f6f489f8814\" returns successfully" Mar 3 13:42:34.715047 containerd[1556]: time="2026-03-03T13:42:34.714577420Z" level=info msg="StartContainer for \"d325a1ca2bdccd655390caa116252ba2fc584c85b4281447b5615e395d24eb99\" returns successfully" Mar 3 13:42:35.052161 kubelet[2458]: E0303 13:42:35.051978 2458 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 3 13:42:35.104235 kubelet[2458]: E0303 13:42:35.104109 2458 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="3.2s" Mar 3 13:42:35.208350 kubelet[2458]: E0303 13:42:35.207543 2458 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:42:35.208350 kubelet[2458]: E0303 13:42:35.207704 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:35.213527 kubelet[2458]: E0303 13:42:35.213073 2458 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 
13:42:35.213527 kubelet[2458]: E0303 13:42:35.213250 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:35.224723 kubelet[2458]: E0303 13:42:35.222147 2458 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:42:35.226122 kubelet[2458]: E0303 13:42:35.226031 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:35.340661 kubelet[2458]: I0303 13:42:35.339399 2458 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:42:36.244236 kubelet[2458]: E0303 13:42:36.244198 2458 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:42:36.248254 kubelet[2458]: E0303 13:42:36.248149 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:36.251418 kubelet[2458]: E0303 13:42:36.250737 2458 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:42:36.254402 kubelet[2458]: E0303 13:42:36.253755 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:37.262027 kubelet[2458]: E0303 13:42:37.257777 2458 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:42:37.295175 kubelet[2458]: E0303 13:42:37.290087 2458 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:40.630680 kubelet[2458]: E0303 13:42:40.630540 2458 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:42:40.630680 kubelet[2458]: E0303 13:42:40.630781 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:41.473516 kubelet[2458]: E0303 13:42:41.471575 2458 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:42:41.473516 kubelet[2458]: E0303 13:42:41.471845 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:42.210086 kubelet[2458]: E0303 13:42:42.204737 2458 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 3 13:42:45.235707 kubelet[2458]: E0303 13:42:45.233603 2458 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 3 13:42:45.320146 kubelet[2458]: E0303 13:42:45.320068 2458 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" 
Mar 3 13:42:45.342785 kubelet[2458]: E0303 13:42:45.342713 2458 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Mar 3 13:42:45.565853 kubelet[2458]: E0303 13:42:45.565631 2458 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 3 13:42:46.457246 kubelet[2458]: E0303 13:42:46.456735 2458 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 3 13:42:46.580484 kubelet[2458]: E0303 13:42:46.579151 2458 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1899589b39c001c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-03 13:42:32.059060673 +0000 UTC m=+1.502059596,LastTimestamp:2026-03-03 13:42:32.059060673 +0000 UTC m=+1.502059596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 3 13:42:46.695176 kubelet[2458]: E0303 13:42:46.674240 2458 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1899589b3b551fb7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-03 13:42:32.085610423 +0000 UTC m=+1.528609337,LastTimestamp:2026-03-03 13:42:32.085610423 +0000 UTC m=+1.528609337,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 3 13:42:46.697147 kubelet[2458]: E0303 13:42:46.697108 2458 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:42:46.721505 kubelet[2458]: E0303 13:42:46.715212 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:46.905560 kubelet[2458]: E0303 13:42:46.905506 2458 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 3 13:42:47.559925 kubelet[2458]: E0303 13:42:47.555699 2458 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 3 13:42:48.388701 kubelet[2458]: E0303 13:42:48.385780 2458 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 3 13:42:48.569516 kubelet[2458]: I0303 13:42:48.569404 2458 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:42:48.642198 kubelet[2458]: I0303 13:42:48.641894 2458 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 3 13:42:48.642913 kubelet[2458]: E0303 
13:42:48.642642 2458 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 3 13:42:48.871150 kubelet[2458]: E0303 13:42:48.870046 2458 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 3 13:42:48.882397 kubelet[2458]: I0303 13:42:48.880482 2458 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 3 13:42:48.997434 kubelet[2458]: I0303 13:42:48.997256 2458 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 3 13:42:49.048385 kubelet[2458]: I0303 13:42:49.046940 2458 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 3 13:42:49.538498 kubelet[2458]: I0303 13:42:49.532953 2458 apiserver.go:52] "Watching apiserver" Mar 3 13:42:49.548503 kubelet[2458]: E0303 13:42:49.548235 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:49.552454 kubelet[2458]: E0303 13:42:49.549831 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:49.552454 kubelet[2458]: E0303 13:42:49.552093 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:49.580542 kubelet[2458]: I0303 13:42:49.579540 2458 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 3 13:42:50.674789 kubelet[2458]: E0303 13:42:50.674747 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:50.872628 kubelet[2458]: I0303 13:42:50.866827 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.866806472 podStartE2EDuration="2.866806472s" podCreationTimestamp="2026-03-03 13:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:42:50.856835146 +0000 UTC m=+20.299834069" watchObservedRunningTime="2026-03-03 13:42:50.866806472 +0000 UTC m=+20.309805385" Mar 3 13:42:51.010080 kubelet[2458]: I0303 13:42:51.007260 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.007215051 podStartE2EDuration="2.007215051s" podCreationTimestamp="2026-03-03 13:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:42:50.927622554 +0000 UTC m=+20.370621477" watchObservedRunningTime="2026-03-03 13:42:51.007215051 +0000 UTC m=+20.450213965" Mar 3 13:42:54.091202 systemd[1]: Reload requested from client PID 2751 ('systemctl') (unit session-9.scope)... Mar 3 13:42:54.091512 systemd[1]: Reloading... Mar 3 13:42:54.724452 zram_generator::config[2794]: No configuration found. Mar 3 13:42:55.806676 systemd[1]: Reloading finished in 1712 ms. Mar 3 13:42:55.923956 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:42:55.967919 systemd[1]: kubelet.service: Deactivated successfully. Mar 3 13:42:55.968673 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:42:55.968753 systemd[1]: kubelet.service: Consumed 4.471s CPU time, 131M memory peak. Mar 3 13:42:55.997495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 3 13:42:56.980658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:42:57.041245 (kubelet)[2840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 3 13:42:57.390492 kubelet[2840]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 3 13:42:57.390492 kubelet[2840]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 3 13:42:57.393413 kubelet[2840]: I0303 13:42:57.391784 2840 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 3 13:42:57.421722 kubelet[2840]: I0303 13:42:57.421670 2840 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 3 13:42:57.421907 kubelet[2840]: I0303 13:42:57.421889 2840 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 3 13:42:57.422142 kubelet[2840]: I0303 13:42:57.422032 2840 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 3 13:42:57.422258 kubelet[2840]: I0303 13:42:57.422238 2840 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 3 13:42:57.442234 kubelet[2840]: I0303 13:42:57.441633 2840 server.go:956] "Client rotation is on, will bootstrap in background" Mar 3 13:42:57.474742 kubelet[2840]: I0303 13:42:57.473429 2840 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 3 13:42:57.488426 kubelet[2840]: I0303 13:42:57.484433 2840 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 3 13:42:57.551423 kubelet[2840]: I0303 13:42:57.547773 2840 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 3 13:42:57.593510 kubelet[2840]: I0303 13:42:57.586040 2840 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 3 13:42:57.593510 kubelet[2840]: I0303 13:42:57.591515 2840 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 3 13:42:57.593510 kubelet[2840]: I0303 13:42:57.591558 2840 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 3 13:42:57.593510 kubelet[2840]: I0303 13:42:57.591781 2840 topology_manager.go:138] "Creating topology manager with none policy" Mar 3 13:42:57.593998 kubelet[2840]: I0303 13:42:57.591797 2840 container_manager_linux.go:306] "Creating device plugin manager" Mar 3 13:42:57.593998 kubelet[2840]: I0303 13:42:57.591851 2840 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 3 13:42:57.593998 kubelet[2840]: I0303 13:42:57.592504 2840 state_mem.go:36] 
"Initialized new in-memory state store" Mar 3 13:42:57.593998 kubelet[2840]: I0303 13:42:57.592947 2840 kubelet.go:475] "Attempting to sync node with API server" Mar 3 13:42:57.593998 kubelet[2840]: I0303 13:42:57.592980 2840 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 3 13:42:57.598242 kubelet[2840]: I0303 13:42:57.595843 2840 kubelet.go:387] "Adding apiserver pod source" Mar 3 13:42:57.598242 kubelet[2840]: I0303 13:42:57.595887 2840 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 3 13:42:57.689244 kubelet[2840]: I0303 13:42:57.688490 2840 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 3 13:42:57.694826 kubelet[2840]: I0303 13:42:57.694466 2840 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 3 13:42:57.694826 kubelet[2840]: I0303 13:42:57.694518 2840 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 3 13:42:57.778551 kubelet[2840]: I0303 13:42:57.777874 2840 server.go:1262] "Started kubelet" Mar 3 13:42:57.778797 kubelet[2840]: I0303 13:42:57.778762 2840 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 3 13:42:57.836691 kubelet[2840]: I0303 13:42:57.787576 2840 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 3 13:42:57.858476 kubelet[2840]: I0303 13:42:57.787716 2840 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 3 13:42:57.858476 kubelet[2840]: I0303 13:42:57.852518 2840 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 3 13:42:57.858476 kubelet[2840]: I0303 13:42:57.805031 2840 ratelimit.go:56] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Mar 3 13:42:57.858476 kubelet[2840]: I0303 13:42:57.852719 2840 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 3 13:42:57.861370 kubelet[2840]: I0303 13:42:57.860983 2840 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 3 13:42:57.867954 kubelet[2840]: I0303 13:42:57.861606 2840 reconciler.go:29] "Reconciler: start to sync state" Mar 3 13:42:57.879986 kubelet[2840]: I0303 13:42:57.876536 2840 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 3 13:42:57.879986 kubelet[2840]: I0303 13:42:57.877610 2840 factory.go:223] Registration of the systemd container factory successfully Mar 3 13:42:57.879986 kubelet[2840]: I0303 13:42:57.877733 2840 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 3 13:42:57.880477 kubelet[2840]: I0303 13:42:57.880002 2840 server.go:310] "Adding debug handlers to kubelet server" Mar 3 13:42:57.899395 kubelet[2840]: I0303 13:42:57.898889 2840 factory.go:223] Registration of the containerd container factory successfully Mar 3 13:42:57.902856 kubelet[2840]: E0303 13:42:57.901018 2840 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 3 13:42:58.082431 kubelet[2840]: I0303 13:42:58.081639 2840 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 3 13:42:58.132800 kubelet[2840]: I0303 13:42:58.131205 2840 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 3 13:42:58.132800 kubelet[2840]: I0303 13:42:58.131245 2840 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 3 13:42:58.132800 kubelet[2840]: I0303 13:42:58.131484 2840 kubelet.go:2428] "Starting kubelet main sync loop" Mar 3 13:42:58.132800 kubelet[2840]: E0303 13:42:58.131557 2840 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 3 13:42:58.234219 kubelet[2840]: E0303 13:42:58.232595 2840 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 3 13:42:58.379758 kubelet[2840]: I0303 13:42:58.379568 2840 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 3 13:42:58.379758 kubelet[2840]: I0303 13:42:58.379655 2840 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 3 13:42:58.379758 kubelet[2840]: I0303 13:42:58.379702 2840 state_mem.go:36] "Initialized new in-memory state store" Mar 3 13:42:58.379956 kubelet[2840]: I0303 13:42:58.379877 2840 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 3 13:42:58.379956 kubelet[2840]: I0303 13:42:58.379889 2840 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 3 13:42:58.379956 kubelet[2840]: I0303 13:42:58.379911 2840 policy_none.go:49] "None policy: Start" Mar 3 13:42:58.379956 kubelet[2840]: I0303 13:42:58.379930 2840 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 3 13:42:58.379956 kubelet[2840]: I0303 13:42:58.379944 2840 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 3 13:42:58.380198 kubelet[2840]: I0303 13:42:58.380142 2840 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 3 13:42:58.380198 kubelet[2840]: I0303 13:42:58.380161 2840 policy_none.go:47] "Start" Mar 3 13:42:58.433530 kubelet[2840]: E0303 13:42:58.432883 2840 kubelet.go:2452] "Skipping 
pod synchronization" err="container runtime status check may not have completed yet" Mar 3 13:42:58.469223 kubelet[2840]: E0303 13:42:58.468518 2840 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 3 13:42:58.478539 kubelet[2840]: I0303 13:42:58.474885 2840 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 3 13:42:58.478539 kubelet[2840]: I0303 13:42:58.474920 2840 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 3 13:42:58.478539 kubelet[2840]: I0303 13:42:58.475741 2840 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 3 13:42:58.520745 kubelet[2840]: E0303 13:42:58.500546 2840 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 3 13:42:58.685779 kubelet[2840]: I0303 13:42:58.649231 2840 apiserver.go:52] "Watching apiserver" Mar 3 13:42:58.840635 kubelet[2840]: I0303 13:42:58.840593 2840 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 3 13:42:58.849496 kubelet[2840]: I0303 13:42:58.842706 2840 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 3 13:42:58.849496 kubelet[2840]: I0303 13:42:58.842886 2840 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 3 13:42:58.866728 kubelet[2840]: I0303 13:42:58.866680 2840 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 3 13:42:58.895570 kubelet[2840]: I0303 13:42:58.893714 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c510be2292c7dc746f7ba53a3e68ba5b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"c510be2292c7dc746f7ba53a3e68ba5b\") " pod="kube-system/kube-apiserver-localhost" Mar 3 13:42:58.897026 kubelet[2840]: I0303 13:42:58.896779 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c510be2292c7dc746f7ba53a3e68ba5b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c510be2292c7dc746f7ba53a3e68ba5b\") " pod="kube-system/kube-apiserver-localhost" Mar 3 13:42:58.900566 kubelet[2840]: I0303 13:42:58.900477 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:42:58.902219 kubelet[2840]: I0303 13:42:58.901013 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 3 13:42:58.902491 kubelet[2840]: I0303 13:42:58.902460 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c510be2292c7dc746f7ba53a3e68ba5b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c510be2292c7dc746f7ba53a3e68ba5b\") " pod="kube-system/kube-apiserver-localhost" Mar 3 13:42:58.902646 kubelet[2840]: I0303 13:42:58.902622 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:42:58.907235 kubelet[2840]: I0303 13:42:58.904025 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:42:58.923716 kubelet[2840]: I0303 13:42:58.919754 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:42:58.924188 kubelet[2840]: I0303 13:42:58.923990 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:42:58.926260 kubelet[2840]: E0303 13:42:58.924717 2840 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 3 13:42:58.930969 kubelet[2840]: E0303 13:42:58.930243 2840 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 3 13:42:58.938983 kubelet[2840]: E0303 13:42:58.937696 2840 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 3 13:42:59.085783 kubelet[2840]: I0303 
13:42:59.083843 2840 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:42:59.174669 kubelet[2840]: I0303 13:42:59.173630 2840 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 3 13:42:59.175654 kubelet[2840]: I0303 13:42:59.175626 2840 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 3 13:42:59.227531 kubelet[2840]: E0303 13:42:59.227480 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:59.277691 kubelet[2840]: E0303 13:42:59.277499 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:59.281170 kubelet[2840]: E0303 13:42:59.279944 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:59.328802 kubelet[2840]: E0303 13:42:59.328657 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:42:59.781419 kubelet[2840]: I0303 13:42:59.780684 2840 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 3 13:42:59.794389 containerd[1556]: time="2026-03-03T13:42:59.794022751Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 3 13:42:59.797020 kubelet[2840]: I0303 13:42:59.796992 2840 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 3 13:43:00.364748 kubelet[2840]: E0303 13:43:00.361852 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:43:00.372408 kubelet[2840]: E0303 13:43:00.368046 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:43:01.398420 kubelet[2840]: E0303 13:43:01.398012 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:43:01.403461 kubelet[2840]: E0303 13:43:01.402953 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:43:02.103034 systemd[1]: Created slice kubepods-besteffort-pod8eb7a3a6_df6f_4b3f_b147_3396c4325840.slice - libcontainer container kubepods-besteffort-pod8eb7a3a6_df6f_4b3f_b147_3396c4325840.slice. 
Mar 3 13:43:02.110211 kubelet[2840]: I0303 13:43:02.109441 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6zq7\" (UniqueName: \"kubernetes.io/projected/8eb7a3a6-df6f-4b3f-b147-3396c4325840-kube-api-access-g6zq7\") pod \"kube-proxy-lr7bq\" (UID: \"8eb7a3a6-df6f-4b3f-b147-3396c4325840\") " pod="kube-system/kube-proxy-lr7bq" Mar 3 13:43:02.110211 kubelet[2840]: I0303 13:43:02.109496 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8eb7a3a6-df6f-4b3f-b147-3396c4325840-kube-proxy\") pod \"kube-proxy-lr7bq\" (UID: \"8eb7a3a6-df6f-4b3f-b147-3396c4325840\") " pod="kube-system/kube-proxy-lr7bq" Mar 3 13:43:02.110211 kubelet[2840]: I0303 13:43:02.109527 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8eb7a3a6-df6f-4b3f-b147-3396c4325840-xtables-lock\") pod \"kube-proxy-lr7bq\" (UID: \"8eb7a3a6-df6f-4b3f-b147-3396c4325840\") " pod="kube-system/kube-proxy-lr7bq" Mar 3 13:43:02.110211 kubelet[2840]: I0303 13:43:02.109544 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8eb7a3a6-df6f-4b3f-b147-3396c4325840-lib-modules\") pod \"kube-proxy-lr7bq\" (UID: \"8eb7a3a6-df6f-4b3f-b147-3396c4325840\") " pod="kube-system/kube-proxy-lr7bq" Mar 3 13:43:02.417415 kubelet[2840]: E0303 13:43:02.413720 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:43:02.480417 kubelet[2840]: E0303 13:43:02.470035 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 
3 13:43:02.480593 containerd[1556]: time="2026-03-03T13:43:02.473961573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lr7bq,Uid:8eb7a3a6-df6f-4b3f-b147-3396c4325840,Namespace:kube-system,Attempt:0,}" Mar 3 13:43:02.493488 kubelet[2840]: E0303 13:43:02.491959 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:43:03.988844 containerd[1556]: time="2026-03-03T13:43:03.889927516Z" level=info msg="connecting to shim 1c2df333a13ec6ccacd68d47f6bde0bf906e665c35b1a53d40d2f148ab6afd72" address="unix:///run/containerd/s/bd8c99967333c2d0afde23785de4ad2fd89a6d3093709c6ac1ce1514a65eda10" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:43:04.021403 kubelet[2840]: E0303 13:43:04.019671 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:43:04.041570 kubelet[2840]: E0303 13:43:04.041426 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:43:04.895671 systemd[1]: Started cri-containerd-1c2df333a13ec6ccacd68d47f6bde0bf906e665c35b1a53d40d2f148ab6afd72.scope - libcontainer container 1c2df333a13ec6ccacd68d47f6bde0bf906e665c35b1a53d40d2f148ab6afd72. 
Mar 3 13:43:05.650216 containerd[1556]: time="2026-03-03T13:43:05.648762343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lr7bq,Uid:8eb7a3a6-df6f-4b3f-b147-3396c4325840,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c2df333a13ec6ccacd68d47f6bde0bf906e665c35b1a53d40d2f148ab6afd72\"" Mar 3 13:43:05.655373 kubelet[2840]: E0303 13:43:05.651070 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:43:05.749817 containerd[1556]: time="2026-03-03T13:43:05.742938279Z" level=info msg="CreateContainer within sandbox \"1c2df333a13ec6ccacd68d47f6bde0bf906e665c35b1a53d40d2f148ab6afd72\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 3 13:43:05.761662 kubelet[2840]: E0303 13:43:05.760582 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:43:05.854079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3458123338.mount: Deactivated successfully. Mar 3 13:43:05.947665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4136402379.mount: Deactivated successfully. 
Mar 3 13:43:06.239238 containerd[1556]: time="2026-03-03T13:43:06.228899147Z" level=info msg="Container 0c85a9ed78f32d2a6783a2254d53cd28e61ceeea71a2f3ff0ce415d6e1c0d1a9: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:43:06.270853 kubelet[2840]: E0303 13:43:06.270809 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:43:06.349567 containerd[1556]: time="2026-03-03T13:43:06.347443866Z" level=info msg="CreateContainer within sandbox \"1c2df333a13ec6ccacd68d47f6bde0bf906e665c35b1a53d40d2f148ab6afd72\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0c85a9ed78f32d2a6783a2254d53cd28e61ceeea71a2f3ff0ce415d6e1c0d1a9\"" Mar 3 13:43:06.388439 containerd[1556]: time="2026-03-03T13:43:06.386402293Z" level=info msg="StartContainer for \"0c85a9ed78f32d2a6783a2254d53cd28e61ceeea71a2f3ff0ce415d6e1c0d1a9\"" Mar 3 13:43:06.402910 containerd[1556]: time="2026-03-03T13:43:06.402860616Z" level=info msg="connecting to shim 0c85a9ed78f32d2a6783a2254d53cd28e61ceeea71a2f3ff0ce415d6e1c0d1a9" address="unix:///run/containerd/s/bd8c99967333c2d0afde23785de4ad2fd89a6d3093709c6ac1ce1514a65eda10" protocol=ttrpc version=3 Mar 3 13:43:06.636226 systemd[1]: Started cri-containerd-0c85a9ed78f32d2a6783a2254d53cd28e61ceeea71a2f3ff0ce415d6e1c0d1a9.scope - libcontainer container 0c85a9ed78f32d2a6783a2254d53cd28e61ceeea71a2f3ff0ce415d6e1c0d1a9. 
Mar 3 13:43:08.254059 containerd[1556]: time="2026-03-03T13:43:08.251722308Z" level=info msg="StartContainer for \"0c85a9ed78f32d2a6783a2254d53cd28e61ceeea71a2f3ff0ce415d6e1c0d1a9\" returns successfully" Mar 3 13:43:09.190029 kubelet[2840]: E0303 13:43:09.188582 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:43:09.197533 systemd[1]: Created slice kubepods-besteffort-pod84dd6d91_6bda_47d4_b3cb_97c435dca123.slice - libcontainer container kubepods-besteffort-pod84dd6d91_6bda_47d4_b3cb_97c435dca123.slice. Mar 3 13:43:09.311584 kubelet[2840]: I0303 13:43:09.304975 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw75h\" (UniqueName: \"kubernetes.io/projected/84dd6d91-6bda-47d4-b3cb-97c435dca123-kube-api-access-mw75h\") pod \"tigera-operator-5588576f44-59h9h\" (UID: \"84dd6d91-6bda-47d4-b3cb-97c435dca123\") " pod="tigera-operator/tigera-operator-5588576f44-59h9h" Mar 3 13:43:09.311584 kubelet[2840]: I0303 13:43:09.305116 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/84dd6d91-6bda-47d4-b3cb-97c435dca123-var-lib-calico\") pod \"tigera-operator-5588576f44-59h9h\" (UID: \"84dd6d91-6bda-47d4-b3cb-97c435dca123\") " pod="tigera-operator/tigera-operator-5588576f44-59h9h" Mar 3 13:43:09.519450 containerd[1556]: time="2026-03-03T13:43:09.517948715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-59h9h,Uid:84dd6d91-6bda-47d4-b3cb-97c435dca123,Namespace:tigera-operator,Attempt:0,}" Mar 3 13:43:09.643029 containerd[1556]: time="2026-03-03T13:43:09.642651300Z" level=info msg="connecting to shim 6a845f6859fe8077f19edcaed78fd2025407f9107151290b535c1b201babbd40" 
address="unix:///run/containerd/s/42907ae0e61bae36ddd95247c9ece6f38471def3791711e6e4e3f5a9e476869f" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:43:09.830873 systemd[1]: Started cri-containerd-6a845f6859fe8077f19edcaed78fd2025407f9107151290b535c1b201babbd40.scope - libcontainer container 6a845f6859fe8077f19edcaed78fd2025407f9107151290b535c1b201babbd40. Mar 3 13:43:10.434631 kubelet[2840]: E0303 13:43:10.431094 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:43:10.648625 containerd[1556]: time="2026-03-03T13:43:10.645690978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-59h9h,Uid:84dd6d91-6bda-47d4-b3cb-97c435dca123,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6a845f6859fe8077f19edcaed78fd2025407f9107151290b535c1b201babbd40\"" Mar 3 13:43:10.695760 containerd[1556]: time="2026-03-03T13:43:10.689754503Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 3 13:43:12.971735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257642123.mount: Deactivated successfully. 
Mar 3 13:43:19.631783 kubelet[2840]: E0303 13:43:19.631599 2840 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.463s" Mar 3 13:43:36.475135 containerd[1556]: time="2026-03-03T13:43:36.474875562Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:43:36.480245 containerd[1556]: time="2026-03-03T13:43:36.480136434Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 3 13:43:36.487815 containerd[1556]: time="2026-03-03T13:43:36.487721454Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:43:36.498611 containerd[1556]: time="2026-03-03T13:43:36.498107145Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:43:36.504865 containerd[1556]: time="2026-03-03T13:43:36.504647002Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 25.814760313s" Mar 3 13:43:36.504865 containerd[1556]: time="2026-03-03T13:43:36.504721371Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 3 13:43:36.539552 containerd[1556]: time="2026-03-03T13:43:36.538942650Z" level=info msg="CreateContainer within sandbox \"6a845f6859fe8077f19edcaed78fd2025407f9107151290b535c1b201babbd40\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 3 13:43:36.579972 containerd[1556]: time="2026-03-03T13:43:36.579913331Z" level=info msg="Container 1fcbffa333366de6dc81a946acd38b3bcdc1892cc2d7f1a5d64deae728a0fcff: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:43:36.611495 containerd[1556]: time="2026-03-03T13:43:36.611053378Z" level=info msg="CreateContainer within sandbox \"6a845f6859fe8077f19edcaed78fd2025407f9107151290b535c1b201babbd40\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1fcbffa333366de6dc81a946acd38b3bcdc1892cc2d7f1a5d64deae728a0fcff\"" Mar 3 13:43:36.613039 containerd[1556]: time="2026-03-03T13:43:36.612923279Z" level=info msg="StartContainer for \"1fcbffa333366de6dc81a946acd38b3bcdc1892cc2d7f1a5d64deae728a0fcff\"" Mar 3 13:43:36.616978 containerd[1556]: time="2026-03-03T13:43:36.616928526Z" level=info msg="connecting to shim 1fcbffa333366de6dc81a946acd38b3bcdc1892cc2d7f1a5d64deae728a0fcff" address="unix:///run/containerd/s/42907ae0e61bae36ddd95247c9ece6f38471def3791711e6e4e3f5a9e476869f" protocol=ttrpc version=3 Mar 3 13:43:36.870240 systemd[1]: Started cri-containerd-1fcbffa333366de6dc81a946acd38b3bcdc1892cc2d7f1a5d64deae728a0fcff.scope - libcontainer container 1fcbffa333366de6dc81a946acd38b3bcdc1892cc2d7f1a5d64deae728a0fcff. 
Mar 3 13:43:37.484715 containerd[1556]: time="2026-03-03T13:43:37.484648845Z" level=info msg="StartContainer for \"1fcbffa333366de6dc81a946acd38b3bcdc1892cc2d7f1a5d64deae728a0fcff\" returns successfully" Mar 3 13:43:38.221946 kubelet[2840]: I0303 13:43:38.221683 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lr7bq" podStartSLOduration=37.221655489 podStartE2EDuration="37.221655489s" podCreationTimestamp="2026-03-03 13:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:43:09.445992132 +0000 UTC m=+12.377578174" watchObservedRunningTime="2026-03-03 13:43:38.221655489 +0000 UTC m=+41.153241512" Mar 3 13:43:38.221946 kubelet[2840]: I0303 13:43:38.221943 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-59h9h" podStartSLOduration=4.394745762 podStartE2EDuration="30.221930743s" podCreationTimestamp="2026-03-03 13:43:08 +0000 UTC" firstStartedPulling="2026-03-03 13:43:10.682791542 +0000 UTC m=+13.614377564" lastFinishedPulling="2026-03-03 13:43:36.509976523 +0000 UTC m=+39.441562545" observedRunningTime="2026-03-03 13:43:38.203599139 +0000 UTC m=+41.135185182" watchObservedRunningTime="2026-03-03 13:43:38.221930743 +0000 UTC m=+41.153516776" Mar 3 13:43:52.598943 kubelet[2840]: E0303 13:43:52.598811 2840 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.996s" Mar 3 13:43:56.857656 sudo[1782]: pam_unix(sudo:session): session closed for user root Mar 3 13:43:56.877449 sshd[1781]: Connection closed by 10.0.0.1 port 42546 Mar 3 13:43:56.883132 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Mar 3 13:43:56.894122 systemd[1]: sshd@8-10.0.0.57:22-10.0.0.1:42546.service: Deactivated successfully. 
Mar 3 13:43:56.909976 systemd[1]: session-9.scope: Deactivated successfully. Mar 3 13:43:56.910743 systemd[1]: session-9.scope: Consumed 35.412s CPU time, 238M memory peak. Mar 3 13:43:56.914867 systemd-logind[1547]: Session 9 logged out. Waiting for processes to exit. Mar 3 13:43:56.925702 systemd-logind[1547]: Removed session 9. Mar 3 13:44:12.002166 kubelet[2840]: I0303 13:44:11.998141 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1eb7090-4045-4996-a467-1b5608b65a98-tigera-ca-bundle\") pod \"calico-typha-864669b94f-4kmm2\" (UID: \"c1eb7090-4045-4996-a467-1b5608b65a98\") " pod="calico-system/calico-typha-864669b94f-4kmm2" Mar 3 13:44:12.002166 kubelet[2840]: I0303 13:44:11.998204 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jv5b\" (UniqueName: \"kubernetes.io/projected/c1eb7090-4045-4996-a467-1b5608b65a98-kube-api-access-9jv5b\") pod \"calico-typha-864669b94f-4kmm2\" (UID: \"c1eb7090-4045-4996-a467-1b5608b65a98\") " pod="calico-system/calico-typha-864669b94f-4kmm2" Mar 3 13:44:12.002166 kubelet[2840]: I0303 13:44:11.998235 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c1eb7090-4045-4996-a467-1b5608b65a98-typha-certs\") pod \"calico-typha-864669b94f-4kmm2\" (UID: \"c1eb7090-4045-4996-a467-1b5608b65a98\") " pod="calico-system/calico-typha-864669b94f-4kmm2" Mar 3 13:44:12.038354 systemd[1]: Created slice kubepods-besteffort-podc1eb7090_4045_4996_a467_1b5608b65a98.slice - libcontainer container kubepods-besteffort-podc1eb7090_4045_4996_a467_1b5608b65a98.slice. 
Mar 3 13:44:12.137927 kubelet[2840]: E0303 13:44:12.137232 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:44:12.314074 systemd[1]: Created slice kubepods-besteffort-pod93ff466a_debd_470d_b9d8_3b9d1199c4f1.slice - libcontainer container kubepods-besteffort-pod93ff466a_debd_470d_b9d8_3b9d1199c4f1.slice. Mar 3 13:44:12.406078 kubelet[2840]: E0303 13:44:12.405889 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:44:12.411928 kubelet[2840]: I0303 13:44:12.410612 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/93ff466a-debd-470d-b9d8-3b9d1199c4f1-cni-log-dir\") pod \"calico-node-6lxgg\" (UID: \"93ff466a-debd-470d-b9d8-3b9d1199c4f1\") " pod="calico-system/calico-node-6lxgg" Mar 3 13:44:12.413610 kubelet[2840]: I0303 13:44:12.413360 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/93ff466a-debd-470d-b9d8-3b9d1199c4f1-node-certs\") pod \"calico-node-6lxgg\" (UID: \"93ff466a-debd-470d-b9d8-3b9d1199c4f1\") " pod="calico-system/calico-node-6lxgg" Mar 3 13:44:12.416507 containerd[1556]: time="2026-03-03T13:44:12.414642118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-864669b94f-4kmm2,Uid:c1eb7090-4045-4996-a467-1b5608b65a98,Namespace:calico-system,Attempt:0,}" Mar 3 13:44:12.430681 kubelet[2840]: I0303 13:44:12.428858 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93ff466a-debd-470d-b9d8-3b9d1199c4f1-xtables-lock\") pod \"calico-node-6lxgg\" (UID: 
\"93ff466a-debd-470d-b9d8-3b9d1199c4f1\") " pod="calico-system/calico-node-6lxgg" Mar 3 13:44:12.439344 kubelet[2840]: I0303 13:44:12.438941 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/93ff466a-debd-470d-b9d8-3b9d1199c4f1-policysync\") pod \"calico-node-6lxgg\" (UID: \"93ff466a-debd-470d-b9d8-3b9d1199c4f1\") " pod="calico-system/calico-node-6lxgg" Mar 3 13:44:12.439344 kubelet[2840]: I0303 13:44:12.439141 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/93ff466a-debd-470d-b9d8-3b9d1199c4f1-var-lib-calico\") pod \"calico-node-6lxgg\" (UID: \"93ff466a-debd-470d-b9d8-3b9d1199c4f1\") " pod="calico-system/calico-node-6lxgg" Mar 3 13:44:12.439344 kubelet[2840]: I0303 13:44:12.439189 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/93ff466a-debd-470d-b9d8-3b9d1199c4f1-var-run-calico\") pod \"calico-node-6lxgg\" (UID: \"93ff466a-debd-470d-b9d8-3b9d1199c4f1\") " pod="calico-system/calico-node-6lxgg" Mar 3 13:44:12.439344 kubelet[2840]: I0303 13:44:12.439221 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/93ff466a-debd-470d-b9d8-3b9d1199c4f1-nodeproc\") pod \"calico-node-6lxgg\" (UID: \"93ff466a-debd-470d-b9d8-3b9d1199c4f1\") " pod="calico-system/calico-node-6lxgg" Mar 3 13:44:12.439344 kubelet[2840]: I0303 13:44:12.439250 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93ff466a-debd-470d-b9d8-3b9d1199c4f1-tigera-ca-bundle\") pod \"calico-node-6lxgg\" (UID: \"93ff466a-debd-470d-b9d8-3b9d1199c4f1\") " pod="calico-system/calico-node-6lxgg" Mar 
3 13:44:12.440093 kubelet[2840]: I0303 13:44:12.439386 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/93ff466a-debd-470d-b9d8-3b9d1199c4f1-cni-bin-dir\") pod \"calico-node-6lxgg\" (UID: \"93ff466a-debd-470d-b9d8-3b9d1199c4f1\") " pod="calico-system/calico-node-6lxgg" Mar 3 13:44:12.440093 kubelet[2840]: I0303 13:44:12.439416 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/93ff466a-debd-470d-b9d8-3b9d1199c4f1-cni-net-dir\") pod \"calico-node-6lxgg\" (UID: \"93ff466a-debd-470d-b9d8-3b9d1199c4f1\") " pod="calico-system/calico-node-6lxgg" Mar 3 13:44:12.440093 kubelet[2840]: I0303 13:44:12.439446 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93ff466a-debd-470d-b9d8-3b9d1199c4f1-lib-modules\") pod \"calico-node-6lxgg\" (UID: \"93ff466a-debd-470d-b9d8-3b9d1199c4f1\") " pod="calico-system/calico-node-6lxgg" Mar 3 13:44:12.440093 kubelet[2840]: I0303 13:44:12.439477 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/93ff466a-debd-470d-b9d8-3b9d1199c4f1-flexvol-driver-host\") pod \"calico-node-6lxgg\" (UID: \"93ff466a-debd-470d-b9d8-3b9d1199c4f1\") " pod="calico-system/calico-node-6lxgg" Mar 3 13:44:12.440093 kubelet[2840]: I0303 13:44:12.439504 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/93ff466a-debd-470d-b9d8-3b9d1199c4f1-sys-fs\") pod \"calico-node-6lxgg\" (UID: \"93ff466a-debd-470d-b9d8-3b9d1199c4f1\") " pod="calico-system/calico-node-6lxgg" Mar 3 13:44:12.440491 kubelet[2840]: I0303 13:44:12.439527 2840 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb5xh\" (UniqueName: \"kubernetes.io/projected/93ff466a-debd-470d-b9d8-3b9d1199c4f1-kube-api-access-qb5xh\") pod \"calico-node-6lxgg\" (UID: \"93ff466a-debd-470d-b9d8-3b9d1199c4f1\") " pod="calico-system/calico-node-6lxgg" Mar 3 13:44:12.440491 kubelet[2840]: I0303 13:44:12.439555 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/93ff466a-debd-470d-b9d8-3b9d1199c4f1-bpffs\") pod \"calico-node-6lxgg\" (UID: \"93ff466a-debd-470d-b9d8-3b9d1199c4f1\") " pod="calico-system/calico-node-6lxgg" Mar 3 13:44:12.460481 kubelet[2840]: E0303 13:44:12.460422 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:12.543990 kubelet[2840]: I0303 13:44:12.542633 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1d197286-6496-40b9-b42f-7c09e055ab02-registration-dir\") pod \"csi-node-driver-468gm\" (UID: \"1d197286-6496-40b9-b42f-7c09e055ab02\") " pod="calico-system/csi-node-driver-468gm" Mar 3 13:44:12.548230 kubelet[2840]: I0303 13:44:12.547937 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1d197286-6496-40b9-b42f-7c09e055ab02-varrun\") pod \"csi-node-driver-468gm\" (UID: \"1d197286-6496-40b9-b42f-7c09e055ab02\") " pod="calico-system/csi-node-driver-468gm" Mar 3 13:44:12.549566 kubelet[2840]: I0303 13:44:12.549114 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pkv4x\" (UniqueName: \"kubernetes.io/projected/1d197286-6496-40b9-b42f-7c09e055ab02-kube-api-access-pkv4x\") pod \"csi-node-driver-468gm\" (UID: \"1d197286-6496-40b9-b42f-7c09e055ab02\") " pod="calico-system/csi-node-driver-468gm" Mar 3 13:44:12.550920 kubelet[2840]: I0303 13:44:12.550455 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d197286-6496-40b9-b42f-7c09e055ab02-kubelet-dir\") pod \"csi-node-driver-468gm\" (UID: \"1d197286-6496-40b9-b42f-7c09e055ab02\") " pod="calico-system/csi-node-driver-468gm" Mar 3 13:44:12.552917 kubelet[2840]: I0303 13:44:12.551179 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1d197286-6496-40b9-b42f-7c09e055ab02-socket-dir\") pod \"csi-node-driver-468gm\" (UID: \"1d197286-6496-40b9-b42f-7c09e055ab02\") " pod="calico-system/csi-node-driver-468gm" Mar 3 13:44:12.574046 kubelet[2840]: E0303 13:44:12.572794 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.574046 kubelet[2840]: W0303 13:44:12.572882 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.574046 kubelet[2840]: E0303 13:44:12.572915 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:12.619695 kubelet[2840]: E0303 13:44:12.619628 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.621113 kubelet[2840]: W0303 13:44:12.620963 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.621113 kubelet[2840]: E0303 13:44:12.621007 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:12.630407 containerd[1556]: time="2026-03-03T13:44:12.630159105Z" level=info msg="connecting to shim 420711797c04fd5e92f2ae0d7ea65b73d0857745d74aab7e84a158894e9230e6" address="unix:///run/containerd/s/3dd7fb8f23b97ca67cecec8c0a33461fce220915c579595dd61d777ec9ac0558" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:44:12.658606 kubelet[2840]: E0303 13:44:12.658523 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.658870 kubelet[2840]: W0303 13:44:12.658577 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.658870 kubelet[2840]: E0303 13:44:12.658668 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:12.666941 kubelet[2840]: E0303 13:44:12.664639 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.666941 kubelet[2840]: W0303 13:44:12.664680 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.666941 kubelet[2840]: E0303 13:44:12.664713 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:12.672153 kubelet[2840]: E0303 13:44:12.671901 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.673249 kubelet[2840]: W0303 13:44:12.672512 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.675205 kubelet[2840]: E0303 13:44:12.673671 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:12.684563 kubelet[2840]: E0303 13:44:12.681607 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.684563 kubelet[2840]: W0303 13:44:12.687394 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.684563 kubelet[2840]: E0303 13:44:12.687448 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:12.691422 kubelet[2840]: E0303 13:44:12.691254 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.706218 kubelet[2840]: W0303 13:44:12.691703 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.706218 kubelet[2840]: E0303 13:44:12.706113 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:12.737716 kubelet[2840]: E0303 13:44:12.737672 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.737716 kubelet[2840]: W0303 13:44:12.745384 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.737716 kubelet[2840]: E0303 13:44:12.745438 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:12.758359 kubelet[2840]: E0303 13:44:12.758210 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.758359 kubelet[2840]: W0303 13:44:12.758248 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.762371 kubelet[2840]: E0303 13:44:12.758486 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:12.779536 kubelet[2840]: E0303 13:44:12.777825 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.779536 kubelet[2840]: W0303 13:44:12.777911 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.779536 kubelet[2840]: E0303 13:44:12.777943 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:12.788418 kubelet[2840]: E0303 13:44:12.783577 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.788418 kubelet[2840]: W0303 13:44:12.783626 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.788418 kubelet[2840]: E0303 13:44:12.783658 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:12.788987 kubelet[2840]: E0303 13:44:12.788489 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.788987 kubelet[2840]: W0303 13:44:12.788520 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.788987 kubelet[2840]: E0303 13:44:12.788551 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:12.790966 kubelet[2840]: E0303 13:44:12.790463 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.790966 kubelet[2840]: W0303 13:44:12.790540 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.790966 kubelet[2840]: E0303 13:44:12.790564 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:12.794168 kubelet[2840]: E0303 13:44:12.794083 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.794168 kubelet[2840]: W0303 13:44:12.794157 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.794419 kubelet[2840]: E0303 13:44:12.794182 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:12.797388 kubelet[2840]: E0303 13:44:12.797232 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.797477 kubelet[2840]: W0303 13:44:12.797421 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.797477 kubelet[2840]: E0303 13:44:12.797448 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:12.798682 kubelet[2840]: E0303 13:44:12.798590 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.798682 kubelet[2840]: W0303 13:44:12.798669 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.798845 kubelet[2840]: E0303 13:44:12.798691 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:12.799775 kubelet[2840]: E0303 13:44:12.799641 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.799908 kubelet[2840]: W0303 13:44:12.799718 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.799908 kubelet[2840]: E0303 13:44:12.799805 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:12.802792 kubelet[2840]: E0303 13:44:12.802198 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.802792 kubelet[2840]: W0303 13:44:12.802226 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.802792 kubelet[2840]: E0303 13:44:12.802243 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:12.804614 kubelet[2840]: E0303 13:44:12.804581 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.804614 kubelet[2840]: W0303 13:44:12.804611 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.806955 kubelet[2840]: E0303 13:44:12.804629 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:12.812890 kubelet[2840]: E0303 13:44:12.808138 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.812890 kubelet[2840]: W0303 13:44:12.808222 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.812890 kubelet[2840]: E0303 13:44:12.808246 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:12.812890 kubelet[2840]: E0303 13:44:12.810674 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.812890 kubelet[2840]: W0303 13:44:12.810694 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.812890 kubelet[2840]: E0303 13:44:12.810713 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:12.817013 kubelet[2840]: E0303 13:44:12.816234 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.817013 kubelet[2840]: W0303 13:44:12.816391 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.817013 kubelet[2840]: E0303 13:44:12.816423 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:12.829638 kubelet[2840]: E0303 13:44:12.826070 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.829638 kubelet[2840]: W0303 13:44:12.826114 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.829638 kubelet[2840]: E0303 13:44:12.826146 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:12.829638 kubelet[2840]: E0303 13:44:12.827636 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.829638 kubelet[2840]: W0303 13:44:12.827654 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.829638 kubelet[2840]: E0303 13:44:12.827676 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:12.833995 kubelet[2840]: E0303 13:44:12.833569 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.833995 kubelet[2840]: W0303 13:44:12.833607 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.833995 kubelet[2840]: E0303 13:44:12.833633 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:12.838236 kubelet[2840]: E0303 13:44:12.838156 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.838236 kubelet[2840]: W0303 13:44:12.838181 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.838236 kubelet[2840]: E0303 13:44:12.838202 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:12.846255 kubelet[2840]: E0303 13:44:12.845143 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.846255 kubelet[2840]: W0303 13:44:12.845168 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.846255 kubelet[2840]: E0303 13:44:12.845186 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:12.852397 kubelet[2840]: E0303 13:44:12.850816 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.852397 kubelet[2840]: W0303 13:44:12.852244 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.853403 kubelet[2840]: E0303 13:44:12.852813 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:12.917845 kubelet[2840]: E0303 13:44:12.913543 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:12.918256 kubelet[2840]: W0303 13:44:12.918136 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:12.918256 kubelet[2840]: E0303 13:44:12.918191 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:12.961134 containerd[1556]: time="2026-03-03T13:44:12.960854524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6lxgg,Uid:93ff466a-debd-470d-b9d8-3b9d1199c4f1,Namespace:calico-system,Attempt:0,}" Mar 3 13:44:13.038048 systemd[1]: Started cri-containerd-420711797c04fd5e92f2ae0d7ea65b73d0857745d74aab7e84a158894e9230e6.scope - libcontainer container 420711797c04fd5e92f2ae0d7ea65b73d0857745d74aab7e84a158894e9230e6. 
Mar 3 13:44:13.150375 containerd[1556]: time="2026-03-03T13:44:13.149949528Z" level=info msg="connecting to shim 90dc62cba0af49f124547ba732661077f420dac8fabde2e1616d41681f760ac9" address="unix:///run/containerd/s/4feed5962c880f8a52a45d0459be1ea180e92f4ff73a919adff3876af5e98c25" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:44:13.403232 systemd[1]: Started cri-containerd-90dc62cba0af49f124547ba732661077f420dac8fabde2e1616d41681f760ac9.scope - libcontainer container 90dc62cba0af49f124547ba732661077f420dac8fabde2e1616d41681f760ac9. Mar 3 13:44:13.479791 containerd[1556]: time="2026-03-03T13:44:13.479682196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-864669b94f-4kmm2,Uid:c1eb7090-4045-4996-a467-1b5608b65a98,Namespace:calico-system,Attempt:0,} returns sandbox id \"420711797c04fd5e92f2ae0d7ea65b73d0857745d74aab7e84a158894e9230e6\"" Mar 3 13:44:13.515029 kubelet[2840]: E0303 13:44:13.514896 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:44:13.567386 containerd[1556]: time="2026-03-03T13:44:13.564906203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 3 13:44:13.967109 containerd[1556]: time="2026-03-03T13:44:13.965728863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6lxgg,Uid:93ff466a-debd-470d-b9d8-3b9d1199c4f1,Namespace:calico-system,Attempt:0,} returns sandbox id \"90dc62cba0af49f124547ba732661077f420dac8fabde2e1616d41681f760ac9\"" Mar 3 13:44:14.136947 kubelet[2840]: E0303 13:44:14.136171 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:15.132553 kubelet[2840]: E0303 
13:44:15.132213 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:44:15.146235 kubelet[2840]: E0303 13:44:15.146079 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.146235 kubelet[2840]: W0303 13:44:15.146114 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.146518 kubelet[2840]: E0303 13:44:15.146393 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:15.153038 kubelet[2840]: E0303 13:44:15.152692 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.153038 kubelet[2840]: W0303 13:44:15.152807 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.153038 kubelet[2840]: E0303 13:44:15.152838 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:15.158412 kubelet[2840]: E0303 13:44:15.156202 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.158412 kubelet[2840]: W0303 13:44:15.156234 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.158412 kubelet[2840]: E0303 13:44:15.156385 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:15.158412 kubelet[2840]: E0303 13:44:15.156837 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.158412 kubelet[2840]: W0303 13:44:15.156851 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.158412 kubelet[2840]: E0303 13:44:15.156866 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:15.158412 kubelet[2840]: E0303 13:44:15.157141 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.158412 kubelet[2840]: W0303 13:44:15.157153 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.158412 kubelet[2840]: E0303 13:44:15.157172 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:15.158412 kubelet[2840]: E0303 13:44:15.157576 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.158840 kubelet[2840]: W0303 13:44:15.157588 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.158840 kubelet[2840]: E0303 13:44:15.157605 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:15.183135 kubelet[2840]: E0303 13:44:15.182656 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.183135 kubelet[2840]: W0303 13:44:15.182820 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.183135 kubelet[2840]: E0303 13:44:15.182858 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:15.192989 kubelet[2840]: E0303 13:44:15.190569 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.196182 kubelet[2840]: W0303 13:44:15.193242 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.196182 kubelet[2840]: E0303 13:44:15.193417 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:15.199922 kubelet[2840]: E0303 13:44:15.197174 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.199922 kubelet[2840]: W0303 13:44:15.197194 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.199922 kubelet[2840]: E0303 13:44:15.197221 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:15.202828 kubelet[2840]: E0303 13:44:15.202466 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.202828 kubelet[2840]: W0303 13:44:15.202552 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.202828 kubelet[2840]: E0303 13:44:15.202586 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:15.204009 kubelet[2840]: E0303 13:44:15.203608 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.204009 kubelet[2840]: W0303 13:44:15.203621 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.204009 kubelet[2840]: E0303 13:44:15.203639 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:15.204009 kubelet[2840]: E0303 13:44:15.204003 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.204381 kubelet[2840]: W0303 13:44:15.204016 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.204381 kubelet[2840]: E0303 13:44:15.204032 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:15.206533 kubelet[2840]: E0303 13:44:15.206448 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.206533 kubelet[2840]: W0303 13:44:15.206521 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.206879 kubelet[2840]: E0303 13:44:15.206543 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:15.210406 kubelet[2840]: E0303 13:44:15.209094 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.210406 kubelet[2840]: W0303 13:44:15.209166 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.210406 kubelet[2840]: E0303 13:44:15.209189 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:15.212589 kubelet[2840]: E0303 13:44:15.212443 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.212589 kubelet[2840]: W0303 13:44:15.212470 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.212589 kubelet[2840]: E0303 13:44:15.212497 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:15.213207 kubelet[2840]: E0303 13:44:15.213082 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.213207 kubelet[2840]: W0303 13:44:15.213101 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.213207 kubelet[2840]: E0303 13:44:15.213117 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:15.220413 kubelet[2840]: E0303 13:44:15.219895 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.220413 kubelet[2840]: W0303 13:44:15.219936 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.220413 kubelet[2840]: E0303 13:44:15.219968 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:15.221525 kubelet[2840]: E0303 13:44:15.221042 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.221525 kubelet[2840]: W0303 13:44:15.221060 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.221525 kubelet[2840]: E0303 13:44:15.221078 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:15.222047 kubelet[2840]: E0303 13:44:15.221948 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.222047 kubelet[2840]: W0303 13:44:15.221968 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.222047 kubelet[2840]: E0303 13:44:15.221982 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:15.224241 kubelet[2840]: E0303 13:44:15.224104 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.224241 kubelet[2840]: W0303 13:44:15.224128 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.224241 kubelet[2840]: E0303 13:44:15.224146 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:15.314566 kubelet[2840]: E0303 13:44:15.314464 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.314566 kubelet[2840]: W0303 13:44:15.314557 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.317607 kubelet[2840]: E0303 13:44:15.314591 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:15.319261 kubelet[2840]: E0303 13:44:15.319230 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.323445 kubelet[2840]: W0303 13:44:15.319581 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.323445 kubelet[2840]: E0303 13:44:15.319622 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:15.327832 kubelet[2840]: E0303 13:44:15.327697 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.327975 kubelet[2840]: W0303 13:44:15.327834 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.327975 kubelet[2840]: E0303 13:44:15.327869 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:15.334809 kubelet[2840]: E0303 13:44:15.333629 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.334809 kubelet[2840]: W0303 13:44:15.333710 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.338062 kubelet[2840]: E0303 13:44:15.333740 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:15.352154 kubelet[2840]: E0303 13:44:15.351421 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.352154 kubelet[2840]: W0303 13:44:15.351460 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.352154 kubelet[2840]: E0303 13:44:15.351491 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:15.357833 kubelet[2840]: E0303 13:44:15.357178 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.357833 kubelet[2840]: W0303 13:44:15.357369 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.357833 kubelet[2840]: E0303 13:44:15.357406 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:15.363570 kubelet[2840]: E0303 13:44:15.360856 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.363570 kubelet[2840]: W0303 13:44:15.360886 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.363570 kubelet[2840]: E0303 13:44:15.360915 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:15.367999 kubelet[2840]: E0303 13:44:15.367660 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.367999 kubelet[2840]: W0303 13:44:15.367806 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.367999 kubelet[2840]: E0303 13:44:15.367843 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:15.374988 kubelet[2840]: E0303 13:44:15.374074 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.374988 kubelet[2840]: W0303 13:44:15.374150 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.374988 kubelet[2840]: E0303 13:44:15.374179 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:15.376463 kubelet[2840]: E0303 13:44:15.376433 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.377420 kubelet[2840]: W0303 13:44:15.376655 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.377420 kubelet[2840]: E0303 13:44:15.376692 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:15.382919 kubelet[2840]: E0303 13:44:15.381029 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.382919 kubelet[2840]: W0303 13:44:15.381456 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.382919 kubelet[2840]: E0303 13:44:15.381619 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:15.390086 kubelet[2840]: E0303 13:44:15.389647 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:15.390086 kubelet[2840]: W0303 13:44:15.389680 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:15.390086 kubelet[2840]: E0303 13:44:15.389710 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:15.432559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3024521898.mount: Deactivated successfully. 
Mar 3 13:44:16.138087 kubelet[2840]: E0303 13:44:16.134822 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:18.136385 kubelet[2840]: E0303 13:44:18.136212 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:44:18.143162 kubelet[2840]: E0303 13:44:18.142627 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:18.215954 kubelet[2840]: E0303 13:44:18.214975 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:18.215954 kubelet[2840]: W0303 13:44:18.215235 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:18.215954 kubelet[2840]: E0303 13:44:18.215499 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:18.224008 kubelet[2840]: E0303 13:44:18.221793 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:18.224008 kubelet[2840]: W0303 13:44:18.221824 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:18.224008 kubelet[2840]: E0303 13:44:18.221857 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:18.230164 kubelet[2840]: E0303 13:44:18.228642 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:18.230164 kubelet[2840]: W0303 13:44:18.228742 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:18.230164 kubelet[2840]: E0303 13:44:18.228779 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 3 13:44:18.230164 kubelet[2840]: E0303 13:44:18.229150 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.230164 kubelet[2840]: W0303 13:44:18.229162 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.230164 kubelet[2840]: E0303 13:44:18.229174 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.230164 kubelet[2840]: E0303 13:44:18.229616 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.230164 kubelet[2840]: W0303 13:44:18.229626 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.230164 kubelet[2840]: E0303 13:44:18.229637 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.230164 kubelet[2840]: E0303 13:44:18.229898 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.232585 kubelet[2840]: W0303 13:44:18.229907 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.232585 kubelet[2840]: E0303 13:44:18.229918 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.232585 kubelet[2840]: E0303 13:44:18.230111 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.232585 kubelet[2840]: W0303 13:44:18.230123 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.232585 kubelet[2840]: E0303 13:44:18.230133 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.234796 kubelet[2840]: E0303 13:44:18.234627 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.234796 kubelet[2840]: W0303 13:44:18.234758 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.234796 kubelet[2840]: E0303 13:44:18.234782 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.235535 kubelet[2840]: E0303 13:44:18.235514 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.235535 kubelet[2840]: W0303 13:44:18.235530 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.236670 kubelet[2840]: E0303 13:44:18.235546 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.246996 kubelet[2840]: E0303 13:44:18.246823 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.246996 kubelet[2840]: W0303 13:44:18.246861 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.246996 kubelet[2840]: E0303 13:44:18.246891 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.249907 kubelet[2840]: E0303 13:44:18.249868 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.250115 kubelet[2840]: W0303 13:44:18.250009 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.250115 kubelet[2840]: E0303 13:44:18.250036 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.257224 kubelet[2840]: E0303 13:44:18.257176 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.257790 kubelet[2840]: W0303 13:44:18.257556 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.257790 kubelet[2840]: E0303 13:44:18.257598 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.260118 kubelet[2840]: E0303 13:44:18.260090 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.260235 kubelet[2840]: W0303 13:44:18.260216 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.260821 kubelet[2840]: E0303 13:44:18.260442 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.265034 kubelet[2840]: E0303 13:44:18.265000 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.265207 kubelet[2840]: W0303 13:44:18.265184 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.265408 kubelet[2840]: E0303 13:44:18.265384 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.266401 kubelet[2840]: E0303 13:44:18.266250 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.266498 kubelet[2840]: W0303 13:44:18.266481 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.266679 kubelet[2840]: E0303 13:44:18.266560 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.268367 kubelet[2840]: E0303 13:44:18.268347 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.268463 kubelet[2840]: W0303 13:44:18.268448 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.268538 kubelet[2840]: E0303 13:44:18.268524 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.269222 kubelet[2840]: E0303 13:44:18.269137 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.269222 kubelet[2840]: W0303 13:44:18.269154 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.269222 kubelet[2840]: E0303 13:44:18.269167 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.271938 kubelet[2840]: E0303 13:44:18.271817 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.271938 kubelet[2840]: W0303 13:44:18.271841 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.271938 kubelet[2840]: E0303 13:44:18.271861 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.279597 kubelet[2840]: E0303 13:44:18.275408 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.279597 kubelet[2840]: W0303 13:44:18.275429 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.279597 kubelet[2840]: E0303 13:44:18.275449 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.279597 kubelet[2840]: E0303 13:44:18.279497 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.279597 kubelet[2840]: W0303 13:44:18.279517 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.279597 kubelet[2840]: E0303 13:44:18.279539 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.286081 kubelet[2840]: E0303 13:44:18.285890 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.286081 kubelet[2840]: W0303 13:44:18.286069 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.287458 kubelet[2840]: E0303 13:44:18.286107 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.294007 kubelet[2840]: E0303 13:44:18.292028 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.294007 kubelet[2840]: W0303 13:44:18.292057 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.294007 kubelet[2840]: E0303 13:44:18.292087 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.307999 kubelet[2840]: E0303 13:44:18.305800 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.307999 kubelet[2840]: W0303 13:44:18.306039 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.307999 kubelet[2840]: E0303 13:44:18.306385 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.312913 kubelet[2840]: E0303 13:44:18.312789 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.312913 kubelet[2840]: W0303 13:44:18.312870 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.312913 kubelet[2840]: E0303 13:44:18.312899 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:18.318156 kubelet[2840]: E0303 13:44:18.313653 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:18.318156 kubelet[2840]: W0303 13:44:18.313668 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:18.318454 kubelet[2840]: E0303 13:44:18.318426 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 3 13:44:20.136387 kubelet[2840]: E0303 13:44:20.133845 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:44:21.868476 containerd[1556]: time="2026-03-03T13:44:21.867973831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:44:21.875413 containerd[1556]: time="2026-03-03T13:44:21.874096306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Mar 3 13:44:21.880046 containerd[1556]: time="2026-03-03T13:44:21.877982535Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:44:21.899933 containerd[1556]: time="2026-03-03T13:44:21.897884094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:44:21.899933 containerd[1556]: time="2026-03-03T13:44:21.899683516Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 8.334715037s"
Mar 3 13:44:21.899933 containerd[1556]: time="2026-03-03T13:44:21.899935246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Mar 3 13:44:21.907479 containerd[1556]: time="2026-03-03T13:44:21.907093114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 3 13:44:21.982547 containerd[1556]: time="2026-03-03T13:44:21.981240855Z" level=info msg="CreateContainer within sandbox \"420711797c04fd5e92f2ae0d7ea65b73d0857745d74aab7e84a158894e9230e6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 3 13:44:22.029096 containerd[1556]: time="2026-03-03T13:44:22.028910002Z" level=info msg="Container cd3af1c21cee9d2c8eb7d5bf7189349fe970b4f5e4418a16f62c9374740ebef8: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:44:22.070214 containerd[1556]: time="2026-03-03T13:44:22.069438631Z" level=info msg="CreateContainer within sandbox \"420711797c04fd5e92f2ae0d7ea65b73d0857745d74aab7e84a158894e9230e6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cd3af1c21cee9d2c8eb7d5bf7189349fe970b4f5e4418a16f62c9374740ebef8\""
Mar 3 13:44:22.073822 containerd[1556]: time="2026-03-03T13:44:22.073685690Z" level=info msg="StartContainer for \"cd3af1c21cee9d2c8eb7d5bf7189349fe970b4f5e4418a16f62c9374740ebef8\""
Mar 3 13:44:22.091091 containerd[1556]: time="2026-03-03T13:44:22.090846987Z" level=info msg="connecting to shim cd3af1c21cee9d2c8eb7d5bf7189349fe970b4f5e4418a16f62c9374740ebef8" address="unix:///run/containerd/s/3dd7fb8f23b97ca67cecec8c0a33461fce220915c579595dd61d777ec9ac0558" protocol=ttrpc version=3
Mar 3 13:44:22.134541 kubelet[2840]: E0303 13:44:22.133639 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:44:22.215882 systemd[1]: Started cri-containerd-cd3af1c21cee9d2c8eb7d5bf7189349fe970b4f5e4418a16f62c9374740ebef8.scope - libcontainer container cd3af1c21cee9d2c8eb7d5bf7189349fe970b4f5e4418a16f62c9374740ebef8.
Mar 3 13:44:22.641536 containerd[1556]: time="2026-03-03T13:44:22.638515989Z" level=info msg="StartContainer for \"cd3af1c21cee9d2c8eb7d5bf7189349fe970b4f5e4418a16f62c9374740ebef8\" returns successfully"
Mar 3 13:44:23.201461 kubelet[2840]: E0303 13:44:23.201255 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:44:23.300241 kubelet[2840]: E0303 13:44:23.296004 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.300241 kubelet[2840]: W0303 13:44:23.300225 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.300574 kubelet[2840]: E0303 13:44:23.300354 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.304372 kubelet[2840]: E0303 13:44:23.300886 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.304372 kubelet[2840]: W0303 13:44:23.300909 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.304372 kubelet[2840]: E0303 13:44:23.300933 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 3 13:44:23.304372 kubelet[2840]: E0303 13:44:23.301233 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.304372 kubelet[2840]: W0303 13:44:23.301245 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.304372 kubelet[2840]: E0303 13:44:23.301262 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.304372 kubelet[2840]: E0303 13:44:23.304123 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.304372 kubelet[2840]: W0303 13:44:23.304141 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.304372 kubelet[2840]: E0303 13:44:23.304164 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.309919 kubelet[2840]: E0303 13:44:23.307000 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.309919 kubelet[2840]: W0303 13:44:23.307077 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.309919 kubelet[2840]: E0303 13:44:23.307101 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.309919 kubelet[2840]: E0303 13:44:23.309168 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.309919 kubelet[2840]: W0303 13:44:23.309184 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.309919 kubelet[2840]: E0303 13:44:23.309206 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.319204 kubelet[2840]: E0303 13:44:23.313688 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.319204 kubelet[2840]: W0303 13:44:23.313714 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.319204 kubelet[2840]: E0303 13:44:23.316413 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.319204 kubelet[2840]: E0303 13:44:23.317658 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.319204 kubelet[2840]: W0303 13:44:23.317674 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.319204 kubelet[2840]: E0303 13:44:23.317700 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.319204 kubelet[2840]: E0303 13:44:23.318421 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.319204 kubelet[2840]: W0303 13:44:23.318437 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.319204 kubelet[2840]: E0303 13:44:23.318455 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.321451 kubelet[2840]: E0303 13:44:23.321217 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.321451 kubelet[2840]: W0303 13:44:23.321402 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.321451 kubelet[2840]: E0303 13:44:23.321432 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.322036 kubelet[2840]: E0303 13:44:23.321843 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.322036 kubelet[2840]: W0303 13:44:23.321908 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.322036 kubelet[2840]: E0303 13:44:23.321928 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.322439 kubelet[2840]: E0303 13:44:23.322236 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.322439 kubelet[2840]: W0303 13:44:23.322402 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.322439 kubelet[2840]: E0303 13:44:23.322421 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.328979 kubelet[2840]: E0303 13:44:23.326024 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.328979 kubelet[2840]: W0303 13:44:23.326119 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.328979 kubelet[2840]: E0303 13:44:23.326147 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.328979 kubelet[2840]: E0303 13:44:23.328082 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.328979 kubelet[2840]: W0303 13:44:23.328104 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.328979 kubelet[2840]: E0303 13:44:23.328130 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.330378 kubelet[2840]: E0303 13:44:23.330084 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.330378 kubelet[2840]: W0303 13:44:23.330159 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.330378 kubelet[2840]: E0303 13:44:23.330181 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.412380 kubelet[2840]: E0303 13:44:23.412195 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.412380 kubelet[2840]: W0303 13:44:23.412233 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.412663 kubelet[2840]: E0303 13:44:23.412634 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.414469 kubelet[2840]: E0303 13:44:23.414400 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.414469 kubelet[2840]: W0303 13:44:23.414423 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.414469 kubelet[2840]: E0303 13:44:23.414445 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.416936 kubelet[2840]: E0303 13:44:23.416861 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.416936 kubelet[2840]: W0303 13:44:23.416886 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.416936 kubelet[2840]: E0303 13:44:23.416911 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.423808 kubelet[2840]: E0303 13:44:23.423255 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.423808 kubelet[2840]: W0303 13:44:23.423460 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.423808 kubelet[2840]: E0303 13:44:23.423494 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.426396 kubelet[2840]: E0303 13:44:23.425681 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.426396 kubelet[2840]: W0303 13:44:23.425703 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.426396 kubelet[2840]: E0303 13:44:23.425789 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.431641 kubelet[2840]: E0303 13:44:23.431526 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.431641 kubelet[2840]: W0303 13:44:23.431554 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.431641 kubelet[2840]: E0303 13:44:23.431578 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.434418 kubelet[2840]: E0303 13:44:23.434224 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.434496 kubelet[2840]: W0303 13:44:23.434427 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.434496 kubelet[2840]: E0303 13:44:23.434451 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.437616 kubelet[2840]: E0303 13:44:23.435964 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.437616 kubelet[2840]: W0303 13:44:23.436028 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.437616 kubelet[2840]: E0303 13:44:23.436046 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.441966 kubelet[2840]: E0303 13:44:23.440206 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.441966 kubelet[2840]: W0303 13:44:23.440363 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.441966 kubelet[2840]: E0303 13:44:23.440389 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.441966 kubelet[2840]: E0303 13:44:23.440972 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.441966 kubelet[2840]: W0303 13:44:23.440984 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.441966 kubelet[2840]: E0303 13:44:23.440996 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.441966 kubelet[2840]: E0303 13:44:23.441629 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.441966 kubelet[2840]: W0303 13:44:23.441644 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.441966 kubelet[2840]: E0303 13:44:23.441657 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.445712 kubelet[2840]: E0303 13:44:23.445615 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.445712 kubelet[2840]: W0303 13:44:23.445688 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.445712 kubelet[2840]: E0303 13:44:23.445710 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 3 13:44:23.448952 kubelet[2840]: E0303 13:44:23.448864 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 3 13:44:23.448952 kubelet[2840]: W0303 13:44:23.448938 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 3 13:44:23.449127 kubelet[2840]: E0303 13:44:23.448958 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:23.452541 kubelet[2840]: E0303 13:44:23.452178 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:23.452541 kubelet[2840]: W0303 13:44:23.452252 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:23.452541 kubelet[2840]: E0303 13:44:23.452386 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:23.459694 kubelet[2840]: E0303 13:44:23.459518 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:23.459694 kubelet[2840]: W0303 13:44:23.459608 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:23.459694 kubelet[2840]: E0303 13:44:23.459641 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:23.464245 kubelet[2840]: E0303 13:44:23.464128 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:23.464245 kubelet[2840]: W0303 13:44:23.464225 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:23.464245 kubelet[2840]: E0303 13:44:23.464254 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:23.467050 kubelet[2840]: E0303 13:44:23.465666 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:23.467148 kubelet[2840]: W0303 13:44:23.467081 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:23.467148 kubelet[2840]: E0303 13:44:23.467113 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:23.482410 kubelet[2840]: E0303 13:44:23.481773 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:23.482410 kubelet[2840]: W0303 13:44:23.481822 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:23.482410 kubelet[2840]: E0303 13:44:23.481857 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:24.138508 kubelet[2840]: E0303 13:44:24.138241 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:24.219227 kubelet[2840]: E0303 13:44:24.219042 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:44:24.274487 kubelet[2840]: E0303 13:44:24.273677 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.274487 kubelet[2840]: W0303 13:44:24.273710 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.274812 kubelet[2840]: E0303 13:44:24.274660 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:24.278708 kubelet[2840]: E0303 13:44:24.278465 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.278708 kubelet[2840]: W0303 13:44:24.278496 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.278708 kubelet[2840]: E0303 13:44:24.278525 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:24.285550 kubelet[2840]: E0303 13:44:24.283605 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.285550 kubelet[2840]: W0303 13:44:24.283631 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.285550 kubelet[2840]: E0303 13:44:24.283655 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:24.287669 kubelet[2840]: E0303 13:44:24.286972 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.287669 kubelet[2840]: W0303 13:44:24.286992 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.287669 kubelet[2840]: E0303 13:44:24.287017 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:24.292371 kubelet[2840]: E0303 13:44:24.290365 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.292371 kubelet[2840]: W0303 13:44:24.290397 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.292371 kubelet[2840]: E0303 13:44:24.290430 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:24.292371 kubelet[2840]: E0303 13:44:24.291821 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.292371 kubelet[2840]: W0303 13:44:24.291837 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.292371 kubelet[2840]: E0303 13:44:24.291861 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:24.292371 kubelet[2840]: E0303 13:44:24.292121 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.292371 kubelet[2840]: W0303 13:44:24.292134 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.292371 kubelet[2840]: E0303 13:44:24.292151 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:24.294088 kubelet[2840]: E0303 13:44:24.292558 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.294088 kubelet[2840]: W0303 13:44:24.292571 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.294088 kubelet[2840]: E0303 13:44:24.292588 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:24.296000 kubelet[2840]: E0303 13:44:24.295718 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.296000 kubelet[2840]: W0303 13:44:24.295833 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.296000 kubelet[2840]: E0303 13:44:24.295855 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:24.297258 kubelet[2840]: E0303 13:44:24.296972 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.297258 kubelet[2840]: W0303 13:44:24.297031 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.297258 kubelet[2840]: E0303 13:44:24.297048 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:24.303392 kubelet[2840]: E0303 13:44:24.302261 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.303392 kubelet[2840]: W0303 13:44:24.302405 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.303392 kubelet[2840]: E0303 13:44:24.302429 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:24.308470 kubelet[2840]: I0303 13:44:24.308200 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-864669b94f-4kmm2" podStartSLOduration=4.965491203 podStartE2EDuration="13.308180964s" podCreationTimestamp="2026-03-03 13:44:11 +0000 UTC" firstStartedPulling="2026-03-03 13:44:13.561918716 +0000 UTC m=+76.493504739" lastFinishedPulling="2026-03-03 13:44:21.904608467 +0000 UTC m=+84.836194500" observedRunningTime="2026-03-03 13:44:23.380182745 +0000 UTC m=+86.311768828" watchObservedRunningTime="2026-03-03 13:44:24.308180964 +0000 UTC m=+87.239766997" Mar 3 13:44:24.315628 kubelet[2840]: E0303 13:44:24.315537 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.315628 kubelet[2840]: W0303 13:44:24.315625 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.315881 kubelet[2840]: E0303 13:44:24.315658 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:24.326578 kubelet[2840]: E0303 13:44:24.326489 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.326578 kubelet[2840]: W0303 13:44:24.326529 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.326578 kubelet[2840]: E0303 13:44:24.326556 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:24.331384 kubelet[2840]: E0303 13:44:24.331144 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.331384 kubelet[2840]: W0303 13:44:24.331223 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.331384 kubelet[2840]: E0303 13:44:24.331258 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:24.335431 kubelet[2840]: E0303 13:44:24.333258 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.335431 kubelet[2840]: W0303 13:44:24.333380 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.335431 kubelet[2840]: E0303 13:44:24.333405 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:24.371188 kubelet[2840]: E0303 13:44:24.370476 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.371188 kubelet[2840]: W0303 13:44:24.370500 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.371188 kubelet[2840]: E0303 13:44:24.370524 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:24.377302 kubelet[2840]: E0303 13:44:24.374617 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.377302 kubelet[2840]: W0303 13:44:24.374879 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.377302 kubelet[2840]: E0303 13:44:24.375042 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:24.386852 kubelet[2840]: E0303 13:44:24.384417 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.386852 kubelet[2840]: W0303 13:44:24.384457 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.386852 kubelet[2840]: E0303 13:44:24.384488 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:24.396252 kubelet[2840]: E0303 13:44:24.389940 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.396252 kubelet[2840]: W0303 13:44:24.390012 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.396252 kubelet[2840]: E0303 13:44:24.390043 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:24.403838 kubelet[2840]: E0303 13:44:24.402066 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.403838 kubelet[2840]: W0303 13:44:24.402095 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.403838 kubelet[2840]: E0303 13:44:24.402128 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:24.408122 kubelet[2840]: E0303 13:44:24.407852 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.408122 kubelet[2840]: W0303 13:44:24.407938 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.408122 kubelet[2840]: E0303 13:44:24.407971 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:24.409422 kubelet[2840]: E0303 13:44:24.409106 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.409422 kubelet[2840]: W0303 13:44:24.409172 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.409422 kubelet[2840]: E0303 13:44:24.409194 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:24.409878 kubelet[2840]: E0303 13:44:24.409802 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.409878 kubelet[2840]: W0303 13:44:24.409868 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.409960 kubelet[2840]: E0303 13:44:24.409891 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:24.412972 kubelet[2840]: E0303 13:44:24.412711 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.412972 kubelet[2840]: W0303 13:44:24.412802 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.412972 kubelet[2840]: E0303 13:44:24.412825 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:24.423558 kubelet[2840]: E0303 13:44:24.418183 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.423558 kubelet[2840]: W0303 13:44:24.418257 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.423558 kubelet[2840]: E0303 13:44:24.418373 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:24.423558 kubelet[2840]: E0303 13:44:24.420969 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.423558 kubelet[2840]: W0303 13:44:24.420987 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.423558 kubelet[2840]: E0303 13:44:24.421008 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:24.425429 kubelet[2840]: E0303 13:44:24.425184 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.425429 kubelet[2840]: W0303 13:44:24.425361 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.425429 kubelet[2840]: E0303 13:44:24.425395 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:24.431869 kubelet[2840]: E0303 13:44:24.426012 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.431869 kubelet[2840]: W0303 13:44:24.426086 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.431869 kubelet[2840]: E0303 13:44:24.426104 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:24.431869 kubelet[2840]: E0303 13:44:24.426615 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.431869 kubelet[2840]: W0303 13:44:24.426627 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.431869 kubelet[2840]: E0303 13:44:24.426640 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:24.431869 kubelet[2840]: E0303 13:44:24.427217 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.431869 kubelet[2840]: W0303 13:44:24.427231 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.431869 kubelet[2840]: E0303 13:44:24.427244 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:24.434238 kubelet[2840]: E0303 13:44:24.434136 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.434238 kubelet[2840]: W0303 13:44:24.434218 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.434449 kubelet[2840]: E0303 13:44:24.434249 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:24.455126 kubelet[2840]: E0303 13:44:24.453183 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.455126 kubelet[2840]: W0303 13:44:24.453256 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.455126 kubelet[2840]: E0303 13:44:24.453380 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:24.455126 kubelet[2840]: E0303 13:44:24.454614 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:24.455126 kubelet[2840]: W0303 13:44:24.454629 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:24.455126 kubelet[2840]: E0303 13:44:24.454647 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:25.172403 containerd[1556]: time="2026-03-03T13:44:25.171954841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:44:25.176714 containerd[1556]: time="2026-03-03T13:44:25.176662416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 3 13:44:25.190892 containerd[1556]: time="2026-03-03T13:44:25.183702081Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:44:25.194829 containerd[1556]: time="2026-03-03T13:44:25.194711250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:44:25.197002 containerd[1556]: time="2026-03-03T13:44:25.196523065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 3.289332098s" Mar 3 13:44:25.197002 containerd[1556]: time="2026-03-03T13:44:25.196569251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 3 13:44:25.235074 containerd[1556]: time="2026-03-03T13:44:25.231096356Z" level=info msg="CreateContainer within sandbox \"90dc62cba0af49f124547ba732661077f420dac8fabde2e1616d41681f760ac9\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 3 13:44:25.235221 kubelet[2840]: E0303 13:44:25.234808 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:44:25.252428 kubelet[2840]: E0303 13:44:25.252391 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.256139 kubelet[2840]: W0303 13:44:25.255814 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.256139 kubelet[2840]: E0303 13:44:25.255857 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:25.265426 kubelet[2840]: E0303 13:44:25.261189 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.265426 kubelet[2840]: W0303 13:44:25.261392 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.265426 kubelet[2840]: E0303 13:44:25.261423 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:25.265426 kubelet[2840]: E0303 13:44:25.265042 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.265426 kubelet[2840]: W0303 13:44:25.265062 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.265426 kubelet[2840]: E0303 13:44:25.265085 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:25.267718 kubelet[2840]: E0303 13:44:25.267195 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.267718 kubelet[2840]: W0303 13:44:25.267262 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.267718 kubelet[2840]: E0303 13:44:25.267379 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:25.271919 kubelet[2840]: E0303 13:44:25.270437 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.271919 kubelet[2840]: W0303 13:44:25.270461 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.271919 kubelet[2840]: E0303 13:44:25.270484 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:25.271919 kubelet[2840]: E0303 13:44:25.270870 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.271919 kubelet[2840]: W0303 13:44:25.270882 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.271919 kubelet[2840]: E0303 13:44:25.270896 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:25.271919 kubelet[2840]: E0303 13:44:25.271190 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.271919 kubelet[2840]: W0303 13:44:25.271206 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.271919 kubelet[2840]: E0303 13:44:25.271220 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:25.271919 kubelet[2840]: E0303 13:44:25.271650 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.273486 kubelet[2840]: W0303 13:44:25.271662 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.273486 kubelet[2840]: E0303 13:44:25.271675 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:25.279445 kubelet[2840]: E0303 13:44:25.275196 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.279445 kubelet[2840]: W0303 13:44:25.275217 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.279445 kubelet[2840]: E0303 13:44:25.275237 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:25.282990 kubelet[2840]: E0303 13:44:25.280539 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.282990 kubelet[2840]: W0303 13:44:25.280564 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.282990 kubelet[2840]: E0303 13:44:25.280590 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:25.289912 kubelet[2840]: E0303 13:44:25.286421 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.289912 kubelet[2840]: W0303 13:44:25.286456 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.289912 kubelet[2840]: E0303 13:44:25.286483 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:25.293420 kubelet[2840]: E0303 13:44:25.292187 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.293420 kubelet[2840]: W0303 13:44:25.292219 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.293420 kubelet[2840]: E0303 13:44:25.292249 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:25.295500 kubelet[2840]: E0303 13:44:25.295176 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.299121 kubelet[2840]: W0303 13:44:25.295700 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.299121 kubelet[2840]: E0303 13:44:25.297174 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:25.299121 kubelet[2840]: E0303 13:44:25.298480 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.299121 kubelet[2840]: W0303 13:44:25.298497 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.299121 kubelet[2840]: E0303 13:44:25.298516 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:25.302000 kubelet[2840]: E0303 13:44:25.301028 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.302000 kubelet[2840]: W0303 13:44:25.301099 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.302000 kubelet[2840]: E0303 13:44:25.301122 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:25.316913 containerd[1556]: time="2026-03-03T13:44:25.315651986Z" level=info msg="Container 89534a8122ddabb385bc95f64adc39c711c9c66895511cfd906135c2c5bec5a8: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:44:25.320508 kubelet[2840]: E0303 13:44:25.320026 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.320508 kubelet[2840]: W0303 13:44:25.320437 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.320508 kubelet[2840]: E0303 13:44:25.320470 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:25.325162 kubelet[2840]: E0303 13:44:25.325089 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.325162 kubelet[2840]: W0303 13:44:25.325115 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.325162 kubelet[2840]: E0303 13:44:25.325140 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:25.330432 kubelet[2840]: E0303 13:44:25.327480 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.330432 kubelet[2840]: W0303 13:44:25.327553 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.330432 kubelet[2840]: E0303 13:44:25.327573 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:25.330432 kubelet[2840]: E0303 13:44:25.328252 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.330432 kubelet[2840]: W0303 13:44:25.328418 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.330432 kubelet[2840]: E0303 13:44:25.328438 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:25.331262 kubelet[2840]: E0303 13:44:25.331155 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.331262 kubelet[2840]: W0303 13:44:25.331231 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.331262 kubelet[2840]: E0303 13:44:25.331256 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:25.333975 kubelet[2840]: E0303 13:44:25.332496 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.333975 kubelet[2840]: W0303 13:44:25.332557 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.333975 kubelet[2840]: E0303 13:44:25.332574 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:25.334824 kubelet[2840]: E0303 13:44:25.334399 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.334824 kubelet[2840]: W0303 13:44:25.334459 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.334824 kubelet[2840]: E0303 13:44:25.334476 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:25.335523 kubelet[2840]: E0303 13:44:25.335491 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.335523 kubelet[2840]: W0303 13:44:25.335508 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.335523 kubelet[2840]: E0303 13:44:25.335524 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:25.338852 kubelet[2840]: E0303 13:44:25.338117 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.338852 kubelet[2840]: W0303 13:44:25.338136 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.338852 kubelet[2840]: E0303 13:44:25.338155 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:25.339424 kubelet[2840]: E0303 13:44:25.338975 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.339424 kubelet[2840]: W0303 13:44:25.338995 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.339424 kubelet[2840]: E0303 13:44:25.339009 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:25.342378 kubelet[2840]: E0303 13:44:25.342085 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.342378 kubelet[2840]: W0303 13:44:25.342149 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.342378 kubelet[2840]: E0303 13:44:25.342169 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:25.343391 kubelet[2840]: E0303 13:44:25.343166 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.344189 kubelet[2840]: W0303 13:44:25.343497 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.344189 kubelet[2840]: E0303 13:44:25.343566 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:25.347921 kubelet[2840]: E0303 13:44:25.346125 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.347921 kubelet[2840]: W0303 13:44:25.346145 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.347921 kubelet[2840]: E0303 13:44:25.346166 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:25.347921 kubelet[2840]: E0303 13:44:25.347155 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.347921 kubelet[2840]: W0303 13:44:25.347171 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.347921 kubelet[2840]: E0303 13:44:25.347185 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:25.351359 kubelet[2840]: E0303 13:44:25.351123 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.351359 kubelet[2840]: W0303 13:44:25.351199 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.351359 kubelet[2840]: E0303 13:44:25.351225 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:25.355035 kubelet[2840]: E0303 13:44:25.351865 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.355035 kubelet[2840]: W0303 13:44:25.351929 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.355035 kubelet[2840]: E0303 13:44:25.351946 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:25.358226 kubelet[2840]: E0303 13:44:25.358136 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.358226 kubelet[2840]: W0303 13:44:25.358217 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.358462 kubelet[2840]: E0303 13:44:25.358245 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:44:25.359516 containerd[1556]: time="2026-03-03T13:44:25.359116625Z" level=info msg="CreateContainer within sandbox \"90dc62cba0af49f124547ba732661077f420dac8fabde2e1616d41681f760ac9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"89534a8122ddabb385bc95f64adc39c711c9c66895511cfd906135c2c5bec5a8\"" Mar 3 13:44:25.365785 containerd[1556]: time="2026-03-03T13:44:25.360580747Z" level=info msg="StartContainer for \"89534a8122ddabb385bc95f64adc39c711c9c66895511cfd906135c2c5bec5a8\"" Mar 3 13:44:25.366348 kubelet[2840]: E0303 13:44:25.366219 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:44:25.366348 kubelet[2840]: W0303 13:44:25.366246 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:44:25.366472 kubelet[2840]: E0303 13:44:25.366455 2840 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:44:25.383966 containerd[1556]: time="2026-03-03T13:44:25.382402344Z" level=info msg="connecting to shim 89534a8122ddabb385bc95f64adc39c711c9c66895511cfd906135c2c5bec5a8" address="unix:///run/containerd/s/4feed5962c880f8a52a45d0459be1ea180e92f4ff73a919adff3876af5e98c25" protocol=ttrpc version=3 Mar 3 13:44:25.504811 systemd[1]: Started cri-containerd-89534a8122ddabb385bc95f64adc39c711c9c66895511cfd906135c2c5bec5a8.scope - libcontainer container 89534a8122ddabb385bc95f64adc39c711c9c66895511cfd906135c2c5bec5a8. 
Mar 3 13:44:25.924405 containerd[1556]: time="2026-03-03T13:44:25.924060210Z" level=info msg="StartContainer for \"89534a8122ddabb385bc95f64adc39c711c9c66895511cfd906135c2c5bec5a8\" returns successfully" Mar 3 13:44:25.984965 systemd[1]: cri-containerd-89534a8122ddabb385bc95f64adc39c711c9c66895511cfd906135c2c5bec5a8.scope: Deactivated successfully. Mar 3 13:44:26.000476 containerd[1556]: time="2026-03-03T13:44:25.999544738Z" level=info msg="received container exit event container_id:\"89534a8122ddabb385bc95f64adc39c711c9c66895511cfd906135c2c5bec5a8\" id:\"89534a8122ddabb385bc95f64adc39c711c9c66895511cfd906135c2c5bec5a8\" pid:3640 exited_at:{seconds:1772545465 nanos:997597789}" Mar 3 13:44:26.140041 kubelet[2840]: E0303 13:44:26.133256 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:26.199945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89534a8122ddabb385bc95f64adc39c711c9c66895511cfd906135c2c5bec5a8-rootfs.mount: Deactivated successfully. 
Mar 3 13:44:26.253845 kubelet[2840]: E0303 13:44:26.246231 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:44:27.280872 containerd[1556]: time="2026-03-03T13:44:27.280584883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 3 13:44:28.140386 kubelet[2840]: E0303 13:44:28.140064 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:30.137950 kubelet[2840]: E0303 13:44:30.137618 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:31.135673 kubelet[2840]: E0303 13:44:31.135618 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:44:32.146914 kubelet[2840]: E0303 13:44:32.140145 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:34.174545 kubelet[2840]: E0303 13:44:34.174456 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:36.144893 kubelet[2840]: E0303 13:44:36.144663 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:38.135538 kubelet[2840]: E0303 13:44:38.135483 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:40.140112 kubelet[2840]: E0303 13:44:40.140056 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:42.134796 kubelet[2840]: E0303 13:44:42.134706 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:44.137088 kubelet[2840]: E0303 13:44:44.136911 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:46.135122 kubelet[2840]: E0303 13:44:46.135050 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:48.159032 kubelet[2840]: E0303 13:44:48.156583 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:50.153239 kubelet[2840]: E0303 13:44:50.149254 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:52.161485 kubelet[2840]: E0303 13:44:52.160530 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 13:44:54.142234 kubelet[2840]: E0303 13:44:54.141939 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02" Mar 3 
13:44:56.134463 kubelet[2840]: E0303 13:44:56.133456 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:44:57.952205 kubelet[2840]: E0303 13:44:57.952035 2840 kubelet_node_status.go:398] "Node not becoming ready in time after startup"
Mar 3 13:44:58.143120 kubelet[2840]: E0303 13:44:58.142399 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:00.148441 kubelet[2840]: E0303 13:45:00.139538 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:02.179196 kubelet[2840]: E0303 13:45:02.176406 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:02.666825 kubelet[2840]: E0303 13:45:02.666464 2840 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 3 13:45:02.932222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2595744815.mount: Deactivated successfully.
Mar 3 13:45:03.082030 containerd[1556]: time="2026-03-03T13:45:03.081572667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:45:03.087005 containerd[1556]: time="2026-03-03T13:45:03.086540858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 3 13:45:03.098227 containerd[1556]: time="2026-03-03T13:45:03.098109122Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:45:03.140947 containerd[1556]: time="2026-03-03T13:45:03.139874355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:45:03.144726 containerd[1556]: time="2026-03-03T13:45:03.144223816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 35.863496847s"
Mar 3 13:45:03.144726 containerd[1556]: time="2026-03-03T13:45:03.144478787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 3 13:45:03.208143 containerd[1556]: time="2026-03-03T13:45:03.207938253Z" level=info msg="CreateContainer within sandbox \"90dc62cba0af49f124547ba732661077f420dac8fabde2e1616d41681f760ac9\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 3 13:45:03.267812 containerd[1556]: time="2026-03-03T13:45:03.267749685Z" level=info msg="Container ee6c4ca650f2d2a56b1cb21b0d3ebfe4db9ba010e4739f9f76ef582a7dc49c40: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:45:04.100894 containerd[1556]: time="2026-03-03T13:45:04.099493629Z" level=info msg="CreateContainer within sandbox \"90dc62cba0af49f124547ba732661077f420dac8fabde2e1616d41681f760ac9\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"ee6c4ca650f2d2a56b1cb21b0d3ebfe4db9ba010e4739f9f76ef582a7dc49c40\""
Mar 3 13:45:04.108230 containerd[1556]: time="2026-03-03T13:45:04.107786913Z" level=info msg="StartContainer for \"ee6c4ca650f2d2a56b1cb21b0d3ebfe4db9ba010e4739f9f76ef582a7dc49c40\""
Mar 3 13:45:04.122470 containerd[1556]: time="2026-03-03T13:45:04.122206142Z" level=info msg="connecting to shim ee6c4ca650f2d2a56b1cb21b0d3ebfe4db9ba010e4739f9f76ef582a7dc49c40" address="unix:///run/containerd/s/4feed5962c880f8a52a45d0459be1ea180e92f4ff73a919adff3876af5e98c25" protocol=ttrpc version=3
Mar 3 13:45:04.146916 kubelet[2840]: E0303 13:45:04.144530 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:04.721000 systemd[1]: Started cri-containerd-ee6c4ca650f2d2a56b1cb21b0d3ebfe4db9ba010e4739f9f76ef582a7dc49c40.scope - libcontainer container ee6c4ca650f2d2a56b1cb21b0d3ebfe4db9ba010e4739f9f76ef582a7dc49c40.
Mar 3 13:45:05.721510 containerd[1556]: time="2026-03-03T13:45:05.720129900Z" level=info msg="StartContainer for \"ee6c4ca650f2d2a56b1cb21b0d3ebfe4db9ba010e4739f9f76ef582a7dc49c40\" returns successfully"
Mar 3 13:45:06.191475 kubelet[2840]: E0303 13:45:06.186145 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:06.255933 systemd[1]: cri-containerd-ee6c4ca650f2d2a56b1cb21b0d3ebfe4db9ba010e4739f9f76ef582a7dc49c40.scope: Deactivated successfully.
Mar 3 13:45:06.375689 containerd[1556]: time="2026-03-03T13:45:06.375422562Z" level=info msg="received container exit event container_id:\"ee6c4ca650f2d2a56b1cb21b0d3ebfe4db9ba010e4739f9f76ef582a7dc49c40\" id:\"ee6c4ca650f2d2a56b1cb21b0d3ebfe4db9ba010e4739f9f76ef582a7dc49c40\" pid:3701 exited_at:{seconds:1772545506 nanos:338553142}"
Mar 3 13:45:06.539980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee6c4ca650f2d2a56b1cb21b0d3ebfe4db9ba010e4739f9f76ef582a7dc49c40-rootfs.mount: Deactivated successfully.
Mar 3 13:45:06.908439 containerd[1556]: time="2026-03-03T13:45:06.906000458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 3 13:45:07.684184 kubelet[2840]: E0303 13:45:07.682227 2840 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 3 13:45:08.138941 kubelet[2840]: E0303 13:45:08.138171 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:10.519496 kubelet[2840]: E0303 13:45:10.517034 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:12.135827 kubelet[2840]: E0303 13:45:12.135660 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:12.700130 kubelet[2840]: E0303 13:45:12.693526 2840 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 3 13:45:14.136657 kubelet[2840]: E0303 13:45:14.132674 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:16.134739 kubelet[2840]: E0303 13:45:16.134675 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:17.714244 kubelet[2840]: E0303 13:45:17.714155 2840 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 3 13:45:18.134832 kubelet[2840]: E0303 13:45:18.134204 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:18.143213 kubelet[2840]: E0303 13:45:18.143175 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:45:20.141538 kubelet[2840]: E0303 13:45:20.141076 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:20.243241 containerd[1556]: time="2026-03-03T13:45:20.242792754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:45:20.247622 containerd[1556]: time="2026-03-03T13:45:20.247408196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 3 13:45:20.252783 containerd[1556]: time="2026-03-03T13:45:20.251118728Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:45:20.260852 containerd[1556]: time="2026-03-03T13:45:20.260444899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:45:20.266241 containerd[1556]: time="2026-03-03T13:45:20.262944817Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 13.356895108s"
Mar 3 13:45:20.266241 containerd[1556]: time="2026-03-03T13:45:20.262996963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 3 13:45:20.338432 containerd[1556]: time="2026-03-03T13:45:20.336136391Z" level=info msg="CreateContainer within sandbox \"90dc62cba0af49f124547ba732661077f420dac8fabde2e1616d41681f760ac9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 3 13:45:21.460878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1495222637.mount: Deactivated successfully.
Mar 3 13:45:22.111695 containerd[1556]: time="2026-03-03T13:45:21.426131250Z" level=info msg="Container 6d8b8cab2ddc0d1b6b187b3b85ee64c53e513726cb01a70704e64e99bae8115a: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:45:23.590214 kubelet[2840]: E0303 13:45:23.586870 2840 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 3 13:45:23.805685 kubelet[2840]: E0303 13:45:23.802873 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:23.812450 kubelet[2840]: E0303 13:45:23.809731 2840 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.667s"
Mar 3 13:45:23.881446 containerd[1556]: time="2026-03-03T13:45:23.879459932Z" level=info msg="CreateContainer within sandbox \"90dc62cba0af49f124547ba732661077f420dac8fabde2e1616d41681f760ac9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6d8b8cab2ddc0d1b6b187b3b85ee64c53e513726cb01a70704e64e99bae8115a\""
Mar 3 13:45:23.898449 containerd[1556]: time="2026-03-03T13:45:23.897204586Z" level=info msg="StartContainer for \"6d8b8cab2ddc0d1b6b187b3b85ee64c53e513726cb01a70704e64e99bae8115a\""
Mar 3 13:45:24.274259 containerd[1556]: time="2026-03-03T13:45:24.274201946Z" level=info msg="connecting to shim 6d8b8cab2ddc0d1b6b187b3b85ee64c53e513726cb01a70704e64e99bae8115a" address="unix:///run/containerd/s/4feed5962c880f8a52a45d0459be1ea180e92f4ff73a919adff3876af5e98c25" protocol=ttrpc version=3
Mar 3 13:45:25.101820 systemd[1]: Started cri-containerd-6d8b8cab2ddc0d1b6b187b3b85ee64c53e513726cb01a70704e64e99bae8115a.scope - libcontainer container 6d8b8cab2ddc0d1b6b187b3b85ee64c53e513726cb01a70704e64e99bae8115a.
Mar 3 13:45:25.192179 kubelet[2840]: E0303 13:45:25.191863 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:26.253859 containerd[1556]: time="2026-03-03T13:45:26.253446028Z" level=info msg="StartContainer for \"6d8b8cab2ddc0d1b6b187b3b85ee64c53e513726cb01a70704e64e99bae8115a\" returns successfully"
Mar 3 13:45:27.575661 kubelet[2840]: E0303 13:45:27.574706 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:28.610157 kubelet[2840]: E0303 13:45:28.609121 2840 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 3 13:45:29.186159 kubelet[2840]: E0303 13:45:29.138117 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:30.522782 kubelet[2840]: E0303 13:45:30.517804 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:32.229996 kubelet[2840]: E0303 13:45:32.229242 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:32.229996 kubelet[2840]: E0303 13:45:32.229596 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:45:47.395438 systemd[1]: cri-containerd-d325a1ca2bdccd655390caa116252ba2fc584c85b4281447b5615e395d24eb99.scope: Deactivated successfully.
Mar 3 13:45:47.398929 systemd[1]: cri-containerd-d325a1ca2bdccd655390caa116252ba2fc584c85b4281447b5615e395d24eb99.scope: Consumed 22.025s CPU time, 71.4M memory peak, 10.1M read from disk.
Mar 3 13:45:47.947692 kubelet[2840]: E0303 13:45:47.946920 2840 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 3 13:45:47.960606 systemd[1]: cri-containerd-615bf8e4847f6ad1ab8906adb6bf0b10046534849d1df941acc08c8b66becf14.scope: Deactivated successfully.
Mar 3 13:45:47.962024 systemd[1]: cri-containerd-615bf8e4847f6ad1ab8906adb6bf0b10046534849d1df941acc08c8b66becf14.scope: Consumed 11.022s CPU time, 28.4M memory peak, 6.4M read from disk.
Mar 3 13:45:48.038432 containerd[1556]: time="2026-03-03T13:45:48.037149699Z" level=info msg="received container exit event container_id:\"615bf8e4847f6ad1ab8906adb6bf0b10046534849d1df941acc08c8b66becf14\" id:\"615bf8e4847f6ad1ab8906adb6bf0b10046534849d1df941acc08c8b66becf14\" pid:2676 exit_status:1 exited_at:{seconds:1772545548 nanos:23180379}"
Mar 3 13:45:48.074786 containerd[1556]: time="2026-03-03T13:45:48.074653071Z" level=info msg="received container exit event container_id:\"d325a1ca2bdccd655390caa116252ba2fc584c85b4281447b5615e395d24eb99\" id:\"d325a1ca2bdccd655390caa116252ba2fc584c85b4281447b5615e395d24eb99\" pid:2690 exit_status:1 exited_at:{seconds:1772545548 nanos:53642335}"
Mar 3 13:45:48.111372 kubelet[2840]: E0303 13:45:48.110015 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:45:48.117860 kubelet[2840]: E0303 13:45:48.110080 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:45:48.118897 kubelet[2840]: E0303 13:45:48.110537 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:48.119406 kubelet[2840]: E0303 13:45:48.117188 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:45:48.291018 kubelet[2840]: E0303 13:45:48.285896 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:45:48.693099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d325a1ca2bdccd655390caa116252ba2fc584c85b4281447b5615e395d24eb99-rootfs.mount: Deactivated successfully.
Mar 3 13:45:48.784739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-615bf8e4847f6ad1ab8906adb6bf0b10046534849d1df941acc08c8b66becf14-rootfs.mount: Deactivated successfully.
Mar 3 13:45:49.513686 kubelet[2840]: I0303 13:45:49.513533 2840 scope.go:117] "RemoveContainer" containerID="d325a1ca2bdccd655390caa116252ba2fc584c85b4281447b5615e395d24eb99"
Mar 3 13:45:49.515606 kubelet[2840]: E0303 13:45:49.515581 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:45:49.527662 kubelet[2840]: E0303 13:45:49.526108 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:49.527662 kubelet[2840]: I0303 13:45:49.526763 2840 scope.go:117] "RemoveContainer" containerID="615bf8e4847f6ad1ab8906adb6bf0b10046534849d1df941acc08c8b66becf14"
Mar 3 13:45:49.527662 kubelet[2840]: E0303 13:45:49.526873 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:45:49.529969 containerd[1556]: time="2026-03-03T13:45:49.529875893Z" level=info msg="CreateContainer within sandbox \"8f0c8dded7e168349d163645487241449028ae51483a42e465a184d4eb86c3ec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 3 13:45:49.549241 containerd[1556]: time="2026-03-03T13:45:49.549188255Z" level=info msg="CreateContainer within sandbox \"215d020b35ad27683d504e414cce471227202edb73b42f1b2e0dfdc9776b6664\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 3 13:45:49.603880 containerd[1556]: time="2026-03-03T13:45:49.603818164Z" level=info msg="Container b9456c5622a9df2ebe092ae6275c44a401c4c76e49d12a9e4cc5f0f25913db61: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:45:49.643678 containerd[1556]: time="2026-03-03T13:45:49.643562464Z" level=info msg="Container 8c35265fb723d59a5d8ac049750ab144be355d2e06d7808223dea4fbfda50249: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:45:49.703853 containerd[1556]: time="2026-03-03T13:45:49.703724590Z" level=info msg="CreateContainer within sandbox \"8f0c8dded7e168349d163645487241449028ae51483a42e465a184d4eb86c3ec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b9456c5622a9df2ebe092ae6275c44a401c4c76e49d12a9e4cc5f0f25913db61\""
Mar 3 13:45:49.714394 containerd[1556]: time="2026-03-03T13:45:49.713963302Z" level=info msg="StartContainer for \"b9456c5622a9df2ebe092ae6275c44a401c4c76e49d12a9e4cc5f0f25913db61\""
Mar 3 13:45:49.739394 containerd[1556]: time="2026-03-03T13:45:49.737070887Z" level=info msg="connecting to shim b9456c5622a9df2ebe092ae6275c44a401c4c76e49d12a9e4cc5f0f25913db61" address="unix:///run/containerd/s/b2673d31ff430dd3feb1b0b8542f91e563484f174ca144c5daada99f53aa8d92" protocol=ttrpc version=3
Mar 3 13:45:49.759866 containerd[1556]: time="2026-03-03T13:45:49.759737556Z" level=info msg="CreateContainer within sandbox \"215d020b35ad27683d504e414cce471227202edb73b42f1b2e0dfdc9776b6664\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8c35265fb723d59a5d8ac049750ab144be355d2e06d7808223dea4fbfda50249\""
Mar 3 13:45:49.766588 containerd[1556]: time="2026-03-03T13:45:49.765992380Z" level=info msg="StartContainer for \"8c35265fb723d59a5d8ac049750ab144be355d2e06d7808223dea4fbfda50249\""
Mar 3 13:45:49.811696 containerd[1556]: time="2026-03-03T13:45:49.811215727Z" level=info msg="connecting to shim 8c35265fb723d59a5d8ac049750ab144be355d2e06d7808223dea4fbfda50249" address="unix:///run/containerd/s/4e5c6c3ad22e9cb59a2431902eb2d42e51e9ec8934537f9ee904e577c9d63af6" protocol=ttrpc version=3
Mar 3 13:45:49.896963 systemd[1]: Started cri-containerd-b9456c5622a9df2ebe092ae6275c44a401c4c76e49d12a9e4cc5f0f25913db61.scope - libcontainer container b9456c5622a9df2ebe092ae6275c44a401c4c76e49d12a9e4cc5f0f25913db61.
Mar 3 13:45:50.037967 systemd[1]: Started cri-containerd-8c35265fb723d59a5d8ac049750ab144be355d2e06d7808223dea4fbfda50249.scope - libcontainer container 8c35265fb723d59a5d8ac049750ab144be355d2e06d7808223dea4fbfda50249.
Mar 3 13:45:50.392051 containerd[1556]: time="2026-03-03T13:45:50.391017247Z" level=info msg="StartContainer for \"b9456c5622a9df2ebe092ae6275c44a401c4c76e49d12a9e4cc5f0f25913db61\" returns successfully"
Mar 3 13:45:50.510759 containerd[1556]: time="2026-03-03T13:45:50.509710123Z" level=info msg="StartContainer for \"8c35265fb723d59a5d8ac049750ab144be355d2e06d7808223dea4fbfda50249\" returns successfully"
Mar 3 13:45:50.565607 kubelet[2840]: E0303 13:45:50.565082 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:45:50.598555 kubelet[2840]: E0303 13:45:50.598448 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:45:51.132952 kubelet[2840]: E0303 13:45:51.132789 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:51.626001 kubelet[2840]: E0303 13:45:51.625913 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:45:52.497177 kubelet[2840]: E0303 13:45:52.495020 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:45:52.639910 kubelet[2840]: E0303 13:45:52.637168 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:45:52.959687 kubelet[2840]: E0303 13:45:52.957470 2840 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 3 13:45:53.144696 kubelet[2840]: E0303 13:45:53.140862 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:54.698720 systemd[1]: cri-containerd-6d8b8cab2ddc0d1b6b187b3b85ee64c53e513726cb01a70704e64e99bae8115a.scope: Deactivated successfully.
Mar 3 13:45:54.700155 systemd[1]: cri-containerd-6d8b8cab2ddc0d1b6b187b3b85ee64c53e513726cb01a70704e64e99bae8115a.scope: Consumed 3.837s CPU time, 183.7M memory peak, 2M read from disk, 177M written to disk.
Mar 3 13:45:54.712394 containerd[1556]: time="2026-03-03T13:45:54.711820209Z" level=info msg="received container exit event container_id:\"6d8b8cab2ddc0d1b6b187b3b85ee64c53e513726cb01a70704e64e99bae8115a\" id:\"6d8b8cab2ddc0d1b6b187b3b85ee64c53e513726cb01a70704e64e99bae8115a\" pid:3765 exited_at:{seconds:1772545554 nanos:710148607}"
Mar 3 13:45:54.866140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d8b8cab2ddc0d1b6b187b3b85ee64c53e513726cb01a70704e64e99bae8115a-rootfs.mount: Deactivated successfully.
Mar 3 13:45:55.133615 kubelet[2840]: E0303 13:45:55.132987 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:55.873379 containerd[1556]: time="2026-03-03T13:45:55.872996824Z" level=info msg="CreateContainer within sandbox \"90dc62cba0af49f124547ba732661077f420dac8fabde2e1616d41681f760ac9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 3 13:45:56.010468 containerd[1556]: time="2026-03-03T13:45:56.008895078Z" level=info msg="Container 48776208bc6eda9e754e243c896b3e9b1f084d967540f9af0c9e2e38003bf83e: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:45:56.091726 containerd[1556]: time="2026-03-03T13:45:56.090451179Z" level=info msg="CreateContainer within sandbox \"90dc62cba0af49f124547ba732661077f420dac8fabde2e1616d41681f760ac9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"48776208bc6eda9e754e243c896b3e9b1f084d967540f9af0c9e2e38003bf83e\""
Mar 3 13:45:56.095772 containerd[1556]: time="2026-03-03T13:45:56.094670593Z" level=info msg="StartContainer for \"48776208bc6eda9e754e243c896b3e9b1f084d967540f9af0c9e2e38003bf83e\""
Mar 3 13:45:56.113101 containerd[1556]: time="2026-03-03T13:45:56.106119599Z" level=info msg="connecting to shim 48776208bc6eda9e754e243c896b3e9b1f084d967540f9af0c9e2e38003bf83e" address="unix:///run/containerd/s/4feed5962c880f8a52a45d0459be1ea180e92f4ff73a919adff3876af5e98c25" protocol=ttrpc version=3
Mar 3 13:45:56.293833 systemd[1]: Started cri-containerd-48776208bc6eda9e754e243c896b3e9b1f084d967540f9af0c9e2e38003bf83e.scope - libcontainer container 48776208bc6eda9e754e243c896b3e9b1f084d967540f9af0c9e2e38003bf83e.
Mar 3 13:45:56.701843 containerd[1556]: time="2026-03-03T13:45:56.701624788Z" level=info msg="StartContainer for \"48776208bc6eda9e754e243c896b3e9b1f084d967540f9af0c9e2e38003bf83e\" returns successfully"
Mar 3 13:45:56.909828 kubelet[2840]: I0303 13:45:56.908594 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6lxgg" podStartSLOduration=38.613345945 podStartE2EDuration="1m44.908567323s" podCreationTimestamp="2026-03-03 13:44:12 +0000 UTC" firstStartedPulling="2026-03-03 13:44:13.977009243 +0000 UTC m=+76.908595266" lastFinishedPulling="2026-03-03 13:45:20.272230621 +0000 UTC m=+143.203816644" observedRunningTime="2026-03-03 13:45:56.901211885 +0000 UTC m=+179.832797938" watchObservedRunningTime="2026-03-03 13:45:56.908567323 +0000 UTC m=+179.840153347"
Mar 3 13:45:57.133470 kubelet[2840]: E0303 13:45:57.132936 2840 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-468gm" podUID="1d197286-6496-40b9-b42f-7c09e055ab02"
Mar 3 13:45:59.171960 systemd[1]: Created slice kubepods-besteffort-pod1d197286_6496_40b9_b42f_7c09e055ab02.slice - libcontainer container kubepods-besteffort-pod1d197286_6496_40b9_b42f_7c09e055ab02.slice.
Mar 3 13:45:59.195357 containerd[1556]: time="2026-03-03T13:45:59.194982287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-468gm,Uid:1d197286-6496-40b9-b42f-7c09e055ab02,Namespace:calico-system,Attempt:0,}"
Mar 3 13:46:00.231250 systemd-networkd[1442]: cali9b053f04c47: Link UP
Mar 3 13:46:00.231936 systemd-networkd[1442]: cali9b053f04c47: Gained carrier
Mar 3 13:46:00.303756 containerd[1556]: 2026-03-03 13:45:59.648 [ERROR][4003] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 3 13:46:00.303756 containerd[1556]: 2026-03-03 13:45:59.753 [INFO][4003] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--468gm-eth0 csi-node-driver- calico-system 1d197286-6496-40b9-b42f-7c09e055ab02 859 0 2026-03-03 13:44:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-468gm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9b053f04c47 [] [] }} ContainerID="b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" Namespace="calico-system" Pod="csi-node-driver-468gm" WorkloadEndpoint="localhost-k8s-csi--node--driver--468gm-"
Mar 3 13:46:00.303756 containerd[1556]: 2026-03-03 13:45:59.754 [INFO][4003] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" Namespace="calico-system" Pod="csi-node-driver-468gm" WorkloadEndpoint="localhost-k8s-csi--node--driver--468gm-eth0"
Mar 3 13:46:00.303756 containerd[1556]: 2026-03-03 13:45:59.934 [INFO][4020] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" HandleID="k8s-pod-network.b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" Workload="localhost-k8s-csi--node--driver--468gm-eth0"
Mar 3 13:46:00.304673 containerd[1556]: 2026-03-03 13:45:59.962 [INFO][4020] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" HandleID="k8s-pod-network.b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" Workload="localhost-k8s-csi--node--driver--468gm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-468gm", "timestamp":"2026-03-03 13:45:59.934934011 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000218dc0)}
Mar 3 13:46:00.304673 containerd[1556]: 2026-03-03 13:45:59.963 [INFO][4020] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 3 13:46:00.304673 containerd[1556]: 2026-03-03 13:45:59.964 [INFO][4020] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 3 13:46:00.304673 containerd[1556]: 2026-03-03 13:45:59.966 [INFO][4020] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 3 13:46:00.304673 containerd[1556]: 2026-03-03 13:45:59.978 [INFO][4020] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" host="localhost"
Mar 3 13:46:00.304673 containerd[1556]: 2026-03-03 13:46:00.028 [INFO][4020] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 3 13:46:00.304673 containerd[1556]: 2026-03-03 13:46:00.051 [INFO][4020] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 3 13:46:00.304673 containerd[1556]: 2026-03-03 13:46:00.062 [INFO][4020] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 3 13:46:00.304673 containerd[1556]: 2026-03-03 13:46:00.075 [INFO][4020] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 3 13:46:00.304673 containerd[1556]: 2026-03-03 13:46:00.075 [INFO][4020] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" host="localhost"
Mar 3 13:46:00.305237 containerd[1556]: 2026-03-03 13:46:00.084 [INFO][4020] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a
Mar 3 13:46:00.305237 containerd[1556]: 2026-03-03 13:46:00.099 [INFO][4020] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" host="localhost"
Mar 3 13:46:00.305237 containerd[1556]: 2026-03-03 13:46:00.129 [INFO][4020] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" host="localhost"
Mar 3 13:46:00.305237 containerd[1556]: 2026-03-03 13:46:00.130 [INFO][4020] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" host="localhost"
Mar 3 13:46:00.305237 containerd[1556]: 2026-03-03 13:46:00.130 [INFO][4020] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 3 13:46:00.305237 containerd[1556]: 2026-03-03 13:46:00.131 [INFO][4020] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" HandleID="k8s-pod-network.b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" Workload="localhost-k8s-csi--node--driver--468gm-eth0"
Mar 3 13:46:00.306628 containerd[1556]: 2026-03-03 13:46:00.154 [INFO][4003] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" Namespace="calico-system" Pod="csi-node-driver-468gm" WorkloadEndpoint="localhost-k8s-csi--node--driver--468gm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--468gm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1d197286-6496-40b9-b42f-7c09e055ab02", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 44, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-468gm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b053f04c47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 3 13:46:00.306864 containerd[1556]: 2026-03-03 13:46:00.154 [INFO][4003] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" Namespace="calico-system" Pod="csi-node-driver-468gm" WorkloadEndpoint="localhost-k8s-csi--node--driver--468gm-eth0"
Mar 3 13:46:00.306864 containerd[1556]: 2026-03-03 13:46:00.154 [INFO][4003] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b053f04c47 ContainerID="b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" Namespace="calico-system" Pod="csi-node-driver-468gm" WorkloadEndpoint="localhost-k8s-csi--node--driver--468gm-eth0"
Mar 3 13:46:00.306864 containerd[1556]: 2026-03-03 13:46:00.228 [INFO][4003] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" Namespace="calico-system" Pod="csi-node-driver-468gm" WorkloadEndpoint="localhost-k8s-csi--node--driver--468gm-eth0"
Mar 3 13:46:00.306973 containerd[1556]: 2026-03-03 13:46:00.230 [INFO][4003] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a"
Namespace="calico-system" Pod="csi-node-driver-468gm" WorkloadEndpoint="localhost-k8s-csi--node--driver--468gm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--468gm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1d197286-6496-40b9-b42f-7c09e055ab02", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 44, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a", Pod:"csi-node-driver-468gm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b053f04c47", MAC:"da:4e:e6:de:c0:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:46:00.307167 containerd[1556]: 2026-03-03 13:46:00.288 [INFO][4003] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" Namespace="calico-system" Pod="csi-node-driver-468gm" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--468gm-eth0" Mar 3 13:46:00.625925 containerd[1556]: time="2026-03-03T13:46:00.625630004Z" level=info msg="connecting to shim b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a" address="unix:///run/containerd/s/5eae1752bf382d3acf46300f97c6c67f21284020820bd2f0a8547c467c41bfff" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:46:00.801864 systemd[1]: Started cri-containerd-b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a.scope - libcontainer container b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a. Mar 3 13:46:00.936956 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 13:46:01.080710 containerd[1556]: time="2026-03-03T13:46:01.080554182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-468gm,Uid:1d197286-6496-40b9-b42f-7c09e055ab02,Namespace:calico-system,Attempt:0,} returns sandbox id \"b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a\"" Mar 3 13:46:01.111837 containerd[1556]: time="2026-03-03T13:46:01.110004155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 3 13:46:01.603828 systemd-networkd[1442]: cali9b053f04c47: Gained IPv6LL Mar 3 13:46:01.645149 kubelet[2840]: E0303 13:46:01.644941 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:46:01.863262 kubelet[2840]: E0303 13:46:01.861740 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:46:02.526660 kubelet[2840]: E0303 13:46:02.524923 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Mar 3 13:46:03.316434 containerd[1556]: time="2026-03-03T13:46:03.313115480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:46:03.373042 containerd[1556]: time="2026-03-03T13:46:03.357772017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 3 13:46:03.377751 containerd[1556]: time="2026-03-03T13:46:03.377258275Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:46:03.416221 containerd[1556]: time="2026-03-03T13:46:03.416012967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:46:03.428353 containerd[1556]: time="2026-03-03T13:46:03.424219106Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.314112419s" Mar 3 13:46:03.428353 containerd[1556]: time="2026-03-03T13:46:03.424412566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 3 13:46:03.474067 containerd[1556]: time="2026-03-03T13:46:03.471994586Z" level=info msg="CreateContainer within sandbox \"b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 3 13:46:03.584857 containerd[1556]: time="2026-03-03T13:46:03.582679673Z" level=info msg="Container 
fa97773d3716220cf2ce212cf69464e8b1b9041ffe04dfb04e9e1a9efa9d2fea: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:46:03.690255 containerd[1556]: time="2026-03-03T13:46:03.690123056Z" level=info msg="CreateContainer within sandbox \"b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fa97773d3716220cf2ce212cf69464e8b1b9041ffe04dfb04e9e1a9efa9d2fea\"" Mar 3 13:46:03.693734 containerd[1556]: time="2026-03-03T13:46:03.693644054Z" level=info msg="StartContainer for \"fa97773d3716220cf2ce212cf69464e8b1b9041ffe04dfb04e9e1a9efa9d2fea\"" Mar 3 13:46:03.705023 containerd[1556]: time="2026-03-03T13:46:03.704882945Z" level=info msg="connecting to shim fa97773d3716220cf2ce212cf69464e8b1b9041ffe04dfb04e9e1a9efa9d2fea" address="unix:///run/containerd/s/5eae1752bf382d3acf46300f97c6c67f21284020820bd2f0a8547c467c41bfff" protocol=ttrpc version=3 Mar 3 13:46:03.838208 systemd[1]: Started cri-containerd-fa97773d3716220cf2ce212cf69464e8b1b9041ffe04dfb04e9e1a9efa9d2fea.scope - libcontainer container fa97773d3716220cf2ce212cf69464e8b1b9041ffe04dfb04e9e1a9efa9d2fea. 
Mar 3 13:46:04.243070 containerd[1556]: time="2026-03-03T13:46:04.242881277Z" level=info msg="StartContainer for \"fa97773d3716220cf2ce212cf69464e8b1b9041ffe04dfb04e9e1a9efa9d2fea\" returns successfully" Mar 3 13:46:04.249402 containerd[1556]: time="2026-03-03T13:46:04.249079795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 3 13:46:05.255714 systemd-networkd[1442]: vxlan.calico: Link UP Mar 3 13:46:05.255775 systemd-networkd[1442]: vxlan.calico: Gained carrier Mar 3 13:46:06.465884 systemd-networkd[1442]: vxlan.calico: Gained IPv6LL Mar 3 13:46:07.456060 containerd[1556]: time="2026-03-03T13:46:07.456000051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:46:07.460162 containerd[1556]: time="2026-03-03T13:46:07.460102075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 3 13:46:07.469390 containerd[1556]: time="2026-03-03T13:46:07.469194907Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:46:07.533638 containerd[1556]: time="2026-03-03T13:46:07.533450457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:46:07.541395 containerd[1556]: time="2026-03-03T13:46:07.541124875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 3.291988707s" Mar 3 13:46:07.541395 containerd[1556]: time="2026-03-03T13:46:07.541233076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 3 13:46:07.569733 containerd[1556]: time="2026-03-03T13:46:07.569379446Z" level=info msg="CreateContainer within sandbox \"b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 3 13:46:07.618623 containerd[1556]: time="2026-03-03T13:46:07.615169886Z" level=info msg="Container b32a545db3c81e960e559ae6321100fe6b0e3958f215f7497329a8c6433512e1: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:46:07.650717 containerd[1556]: time="2026-03-03T13:46:07.649752854Z" level=info msg="CreateContainer within sandbox \"b88ce6138031b2a58e607fae54ec5528e93cf3a2860711bc9622ba63104cf78a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b32a545db3c81e960e559ae6321100fe6b0e3958f215f7497329a8c6433512e1\"" Mar 3 13:46:07.651848 containerd[1556]: time="2026-03-03T13:46:07.651397988Z" level=info msg="StartContainer for \"b32a545db3c81e960e559ae6321100fe6b0e3958f215f7497329a8c6433512e1\"" Mar 3 13:46:07.656848 containerd[1556]: time="2026-03-03T13:46:07.656758748Z" level=info msg="connecting to shim b32a545db3c81e960e559ae6321100fe6b0e3958f215f7497329a8c6433512e1" address="unix:///run/containerd/s/5eae1752bf382d3acf46300f97c6c67f21284020820bd2f0a8547c467c41bfff" protocol=ttrpc version=3 Mar 3 13:46:07.756911 systemd[1]: Started cri-containerd-b32a545db3c81e960e559ae6321100fe6b0e3958f215f7497329a8c6433512e1.scope - libcontainer container b32a545db3c81e960e559ae6321100fe6b0e3958f215f7497329a8c6433512e1. 
Mar 3 13:46:08.115805 containerd[1556]: time="2026-03-03T13:46:08.115054499Z" level=info msg="StartContainer for \"b32a545db3c81e960e559ae6321100fe6b0e3958f215f7497329a8c6433512e1\" returns successfully" Mar 3 13:46:08.534151 kubelet[2840]: I0303 13:46:08.532018 2840 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 3 13:46:08.538611 kubelet[2840]: I0303 13:46:08.537901 2840 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 3 13:46:09.161128 kubelet[2840]: I0303 13:46:09.161045 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-468gm" podStartSLOduration=110.716816027 podStartE2EDuration="1m57.161022678s" podCreationTimestamp="2026-03-03 13:44:12 +0000 UTC" firstStartedPulling="2026-03-03 13:46:01.100701828 +0000 UTC m=+184.032287850" lastFinishedPulling="2026-03-03 13:46:07.544908478 +0000 UTC m=+190.476494501" observedRunningTime="2026-03-03 13:46:09.155829623 +0000 UTC m=+192.087415686" watchObservedRunningTime="2026-03-03 13:46:09.161022678 +0000 UTC m=+192.092608721" Mar 3 13:46:12.065165 kubelet[2840]: I0303 13:46:12.065010 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4zgg\" (UniqueName: \"kubernetes.io/projected/ff00a450-c4d4-4e77-8007-51142b9dc51b-kube-api-access-g4zgg\") pod \"coredns-66bc5c9577-bnn4w\" (UID: \"ff00a450-c4d4-4e77-8007-51142b9dc51b\") " pod="kube-system/coredns-66bc5c9577-bnn4w" Mar 3 13:46:12.065165 kubelet[2840]: I0303 13:46:12.065087 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff00a450-c4d4-4e77-8007-51142b9dc51b-config-volume\") pod \"coredns-66bc5c9577-bnn4w\" (UID: 
\"ff00a450-c4d4-4e77-8007-51142b9dc51b\") " pod="kube-system/coredns-66bc5c9577-bnn4w" Mar 3 13:46:12.085176 systemd[1]: Created slice kubepods-burstable-podff00a450_c4d4_4e77_8007_51142b9dc51b.slice - libcontainer container kubepods-burstable-podff00a450_c4d4_4e77_8007_51142b9dc51b.slice. Mar 3 13:46:12.135989 systemd[1]: Created slice kubepods-besteffort-pod336ae713_c8e4_4738_a9ca_31c5a3b0440d.slice - libcontainer container kubepods-besteffort-pod336ae713_c8e4_4738_a9ca_31c5a3b0440d.slice. Mar 3 13:46:12.168021 kubelet[2840]: I0303 13:46:12.166226 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/336ae713-c8e4-4738-a9ca-31c5a3b0440d-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-hn5jc\" (UID: \"336ae713-c8e4-4738-a9ca-31c5a3b0440d\") " pod="calico-system/goldmane-cccfbd5cf-hn5jc" Mar 3 13:46:12.182060 kubelet[2840]: I0303 13:46:12.181729 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/336ae713-c8e4-4738-a9ca-31c5a3b0440d-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-hn5jc\" (UID: \"336ae713-c8e4-4738-a9ca-31c5a3b0440d\") " pod="calico-system/goldmane-cccfbd5cf-hn5jc" Mar 3 13:46:12.189188 kubelet[2840]: I0303 13:46:12.189153 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bmqn\" (UniqueName: \"kubernetes.io/projected/7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43-kube-api-access-7bmqn\") pod \"whisker-79dbfb954-6l64x\" (UID: \"7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43\") " pod="calico-system/whisker-79dbfb954-6l64x" Mar 3 13:46:12.189613 kubelet[2840]: I0303 13:46:12.189589 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43-whisker-ca-bundle\") 
pod \"whisker-79dbfb954-6l64x\" (UID: \"7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43\") " pod="calico-system/whisker-79dbfb954-6l64x" Mar 3 13:46:12.193404 kubelet[2840]: I0303 13:46:12.189732 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58cq8\" (UniqueName: \"kubernetes.io/projected/aa113212-d6fe-4f61-a3b2-6710b541c030-kube-api-access-58cq8\") pod \"calico-kube-controllers-5f5ffcc77c-mxflh\" (UID: \"aa113212-d6fe-4f61-a3b2-6710b541c030\") " pod="calico-system/calico-kube-controllers-5f5ffcc77c-mxflh" Mar 3 13:46:12.193404 kubelet[2840]: I0303 13:46:12.189759 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43-nginx-config\") pod \"whisker-79dbfb954-6l64x\" (UID: \"7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43\") " pod="calico-system/whisker-79dbfb954-6l64x" Mar 3 13:46:12.194118 kubelet[2840]: I0303 13:46:12.189786 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwrp8\" (UniqueName: \"kubernetes.io/projected/86b8c8a1-11f6-4a45-9df7-233830b8adcc-kube-api-access-gwrp8\") pod \"calico-apiserver-fc7c666f6-9x69k\" (UID: \"86b8c8a1-11f6-4a45-9df7-233830b8adcc\") " pod="calico-system/calico-apiserver-fc7c666f6-9x69k" Mar 3 13:46:12.200070 kubelet[2840]: I0303 13:46:12.197646 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/336ae713-c8e4-4738-a9ca-31c5a3b0440d-config\") pod \"goldmane-cccfbd5cf-hn5jc\" (UID: \"336ae713-c8e4-4738-a9ca-31c5a3b0440d\") " pod="calico-system/goldmane-cccfbd5cf-hn5jc" Mar 3 13:46:12.206701 kubelet[2840]: I0303 13:46:12.204247 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/aa113212-d6fe-4f61-a3b2-6710b541c030-tigera-ca-bundle\") pod \"calico-kube-controllers-5f5ffcc77c-mxflh\" (UID: \"aa113212-d6fe-4f61-a3b2-6710b541c030\") " pod="calico-system/calico-kube-controllers-5f5ffcc77c-mxflh" Mar 3 13:46:12.206701 kubelet[2840]: I0303 13:46:12.204402 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43-whisker-backend-key-pair\") pod \"whisker-79dbfb954-6l64x\" (UID: \"7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43\") " pod="calico-system/whisker-79dbfb954-6l64x" Mar 3 13:46:12.206701 kubelet[2840]: I0303 13:46:12.204834 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/86b8c8a1-11f6-4a45-9df7-233830b8adcc-calico-apiserver-certs\") pod \"calico-apiserver-fc7c666f6-9x69k\" (UID: \"86b8c8a1-11f6-4a45-9df7-233830b8adcc\") " pod="calico-system/calico-apiserver-fc7c666f6-9x69k" Mar 3 13:46:12.206701 kubelet[2840]: I0303 13:46:12.204876 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t82nj\" (UniqueName: \"kubernetes.io/projected/336ae713-c8e4-4738-a9ca-31c5a3b0440d-kube-api-access-t82nj\") pod \"goldmane-cccfbd5cf-hn5jc\" (UID: \"336ae713-c8e4-4738-a9ca-31c5a3b0440d\") " pod="calico-system/goldmane-cccfbd5cf-hn5jc" Mar 3 13:46:12.206701 kubelet[2840]: I0303 13:46:12.204901 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bcebe7da-1239-4617-90b7-eb0a49f4ab99-calico-apiserver-certs\") pod \"calico-apiserver-fc7c666f6-mtrtx\" (UID: \"bcebe7da-1239-4617-90b7-eb0a49f4ab99\") " pod="calico-system/calico-apiserver-fc7c666f6-mtrtx" Mar 3 13:46:12.207003 kubelet[2840]: I0303 13:46:12.204922 2840 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d7d4\" (UniqueName: \"kubernetes.io/projected/bcebe7da-1239-4617-90b7-eb0a49f4ab99-kube-api-access-5d7d4\") pod \"calico-apiserver-fc7c666f6-mtrtx\" (UID: \"bcebe7da-1239-4617-90b7-eb0a49f4ab99\") " pod="calico-system/calico-apiserver-fc7c666f6-mtrtx" Mar 3 13:46:12.236479 systemd[1]: Created slice kubepods-besteffort-podaa113212_d6fe_4f61_a3b2_6710b541c030.slice - libcontainer container kubepods-besteffort-podaa113212_d6fe_4f61_a3b2_6710b541c030.slice. Mar 3 13:46:12.242666 systemd[1]: Started sshd@9-10.0.0.57:22-10.0.0.1:50628.service - OpenSSH per-connection server daemon (10.0.0.1:50628). Mar 3 13:46:12.310634 kubelet[2840]: I0303 13:46:12.309803 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd9gv\" (UniqueName: \"kubernetes.io/projected/7281a48e-7aea-4dfa-a2c0-dfba1b9bafb6-kube-api-access-dd9gv\") pod \"coredns-66bc5c9577-mnk57\" (UID: \"7281a48e-7aea-4dfa-a2c0-dfba1b9bafb6\") " pod="kube-system/coredns-66bc5c9577-mnk57" Mar 3 13:46:12.320848 kubelet[2840]: I0303 13:46:12.316859 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7281a48e-7aea-4dfa-a2c0-dfba1b9bafb6-config-volume\") pod \"coredns-66bc5c9577-mnk57\" (UID: \"7281a48e-7aea-4dfa-a2c0-dfba1b9bafb6\") " pod="kube-system/coredns-66bc5c9577-mnk57" Mar 3 13:46:12.329075 systemd[1]: Created slice kubepods-besteffort-podbcebe7da_1239_4617_90b7_eb0a49f4ab99.slice - libcontainer container kubepods-besteffort-podbcebe7da_1239_4617_90b7_eb0a49f4ab99.slice. Mar 3 13:46:12.452718 systemd[1]: Created slice kubepods-besteffort-pod86b8c8a1_11f6_4a45_9df7_233830b8adcc.slice - libcontainer container kubepods-besteffort-pod86b8c8a1_11f6_4a45_9df7_233830b8adcc.slice. 
Mar 3 13:46:12.564833 kubelet[2840]: E0303 13:46:12.564646 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:46:12.640083 containerd[1556]: time="2026-03-03T13:46:12.639241110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bnn4w,Uid:ff00a450-c4d4-4e77-8007-51142b9dc51b,Namespace:kube-system,Attempt:0,}" Mar 3 13:46:12.643202 systemd[1]: Created slice kubepods-besteffort-pod7a7b4e4f_cfb0_4dd7_8b2f_1f646235fb43.slice - libcontainer container kubepods-besteffort-pod7a7b4e4f_cfb0_4dd7_8b2f_1f646235fb43.slice. Mar 3 13:46:12.757137 sshd[4399]: Accepted publickey for core from 10.0.0.1 port 50628 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI Mar 3 13:46:12.761987 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:46:12.803052 systemd-logind[1547]: New session 10 of user core. Mar 3 13:46:12.808848 containerd[1556]: time="2026-03-03T13:46:12.808725210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f5ffcc77c-mxflh,Uid:aa113212-d6fe-4f61-a3b2-6710b541c030,Namespace:calico-system,Attempt:0,}" Mar 3 13:46:12.810251 containerd[1556]: time="2026-03-03T13:46:12.810030611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79dbfb954-6l64x,Uid:7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43,Namespace:calico-system,Attempt:0,}" Mar 3 13:46:12.826878 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 3 13:46:12.836150 systemd[1]: Created slice kubepods-burstable-pod7281a48e_7aea_4dfa_a2c0_dfba1b9bafb6.slice - libcontainer container kubepods-burstable-pod7281a48e_7aea_4dfa_a2c0_dfba1b9bafb6.slice. 
Mar 3 13:46:12.979466 containerd[1556]: time="2026-03-03T13:46:12.963464438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-hn5jc,Uid:336ae713-c8e4-4738-a9ca-31c5a3b0440d,Namespace:calico-system,Attempt:0,}" Mar 3 13:46:13.019546 kubelet[2840]: E0303 13:46:13.019436 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:46:13.092187 containerd[1556]: time="2026-03-03T13:46:13.092126442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc7c666f6-9x69k,Uid:86b8c8a1-11f6-4a45-9df7-233830b8adcc,Namespace:calico-system,Attempt:0,}" Mar 3 13:46:13.115734 containerd[1556]: time="2026-03-03T13:46:13.096692377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mnk57,Uid:7281a48e-7aea-4dfa-a2c0-dfba1b9bafb6,Namespace:kube-system,Attempt:0,}" Mar 3 13:46:13.128621 containerd[1556]: time="2026-03-03T13:46:13.096937572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc7c666f6-mtrtx,Uid:bcebe7da-1239-4617-90b7-eb0a49f4ab99,Namespace:calico-system,Attempt:0,}" Mar 3 13:46:14.026039 sshd[4435]: Connection closed by 10.0.0.1 port 50628 Mar 3 13:46:14.024668 sshd-session[4399]: pam_unix(sshd:session): session closed for user core Mar 3 13:46:14.049609 systemd[1]: sshd@9-10.0.0.57:22-10.0.0.1:50628.service: Deactivated successfully. Mar 3 13:46:14.076258 systemd[1]: session-10.scope: Deactivated successfully. Mar 3 13:46:14.099574 systemd-logind[1547]: Session 10 logged out. Waiting for processes to exit. Mar 3 13:46:14.105126 systemd-logind[1547]: Removed session 10. 
Mar 3 13:46:14.755055 systemd-networkd[1442]: cali02f5c20f6a7: Link UP Mar 3 13:46:14.763574 systemd-networkd[1442]: cali02f5c20f6a7: Gained carrier Mar 3 13:46:14.882838 containerd[1556]: 2026-03-03 13:46:13.046 [INFO][4414] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--bnn4w-eth0 coredns-66bc5c9577- kube-system ff00a450-c4d4-4e77-8007-51142b9dc51b 1258 0 2026-03-03 13:43:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-bnn4w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali02f5c20f6a7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" Namespace="kube-system" Pod="coredns-66bc5c9577-bnn4w" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bnn4w-" Mar 3 13:46:14.882838 containerd[1556]: 2026-03-03 13:46:13.046 [INFO][4414] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" Namespace="kube-system" Pod="coredns-66bc5c9577-bnn4w" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bnn4w-eth0" Mar 3 13:46:14.882838 containerd[1556]: 2026-03-03 13:46:13.805 [INFO][4492] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" HandleID="k8s-pod-network.9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" Workload="localhost-k8s-coredns--66bc5c9577--bnn4w-eth0" Mar 3 13:46:14.883773 containerd[1556]: 2026-03-03 13:46:14.016 [INFO][4492] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" HandleID="k8s-pod-network.9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" Workload="localhost-k8s-coredns--66bc5c9577--bnn4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000530a70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-bnn4w", "timestamp":"2026-03-03 13:46:13.805632259 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00056c580)} Mar 3 13:46:14.883773 containerd[1556]: 2026-03-03 13:46:14.042 [INFO][4492] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:46:14.883773 containerd[1556]: 2026-03-03 13:46:14.077 [INFO][4492] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:46:14.883773 containerd[1556]: 2026-03-03 13:46:14.077 [INFO][4492] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 13:46:14.883773 containerd[1556]: 2026-03-03 13:46:14.112 [INFO][4492] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" host="localhost" Mar 3 13:46:14.883773 containerd[1556]: 2026-03-03 13:46:14.231 [INFO][4492] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 13:46:14.883773 containerd[1556]: 2026-03-03 13:46:14.423 [INFO][4492] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 13:46:14.883773 containerd[1556]: 2026-03-03 13:46:14.458 [INFO][4492] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 13:46:14.883773 containerd[1556]: 2026-03-03 13:46:14.474 [INFO][4492] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 13:46:14.883773 containerd[1556]: 2026-03-03 13:46:14.474 [INFO][4492] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" host="localhost" Mar 3 13:46:14.886372 containerd[1556]: 2026-03-03 13:46:14.513 [INFO][4492] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f Mar 3 13:46:14.886372 containerd[1556]: 2026-03-03 13:46:14.583 [INFO][4492] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" host="localhost" Mar 3 13:46:14.886372 containerd[1556]: 2026-03-03 13:46:14.621 [INFO][4492] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" host="localhost" Mar 3 13:46:14.886372 containerd[1556]: 2026-03-03 13:46:14.621 [INFO][4492] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" host="localhost" Mar 3 13:46:14.886372 containerd[1556]: 2026-03-03 13:46:14.622 [INFO][4492] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:46:14.886372 containerd[1556]: 2026-03-03 13:46:14.622 [INFO][4492] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" HandleID="k8s-pod-network.9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" Workload="localhost-k8s-coredns--66bc5c9577--bnn4w-eth0" Mar 3 13:46:14.887771 containerd[1556]: 2026-03-03 13:46:14.714 [INFO][4414] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" Namespace="kube-system" Pod="coredns-66bc5c9577-bnn4w" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bnn4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--bnn4w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ff00a450-c4d4-4e77-8007-51142b9dc51b", ResourceVersion:"1258", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 43, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-bnn4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02f5c20f6a7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:46:14.887771 containerd[1556]: 2026-03-03 13:46:14.714 [INFO][4414] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" Namespace="kube-system" Pod="coredns-66bc5c9577-bnn4w" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bnn4w-eth0" Mar 3 13:46:14.887771 containerd[1556]: 2026-03-03 13:46:14.714 [INFO][4414] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02f5c20f6a7 ContainerID="9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" Namespace="kube-system" Pod="coredns-66bc5c9577-bnn4w" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bnn4w-eth0" Mar 3 
13:46:14.887771 containerd[1556]: 2026-03-03 13:46:14.787 [INFO][4414] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" Namespace="kube-system" Pod="coredns-66bc5c9577-bnn4w" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bnn4w-eth0" Mar 3 13:46:14.887771 containerd[1556]: 2026-03-03 13:46:14.792 [INFO][4414] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" Namespace="kube-system" Pod="coredns-66bc5c9577-bnn4w" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bnn4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--bnn4w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ff00a450-c4d4-4e77-8007-51142b9dc51b", ResourceVersion:"1258", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 43, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f", Pod:"coredns-66bc5c9577-bnn4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02f5c20f6a7", MAC:"4a:4f:11:1a:d3:bd", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:46:14.887771 containerd[1556]: 2026-03-03 13:46:14.863 [INFO][4414] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" Namespace="kube-system" Pod="coredns-66bc5c9577-bnn4w" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bnn4w-eth0" Mar 3 13:46:15.147480 systemd-networkd[1442]: calic356acf92cf: Link UP Mar 3 13:46:15.149196 systemd-networkd[1442]: calic356acf92cf: Gained carrier Mar 3 13:46:15.194227 containerd[1556]: time="2026-03-03T13:46:15.194101182Z" level=info msg="connecting to shim 9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f" address="unix:///run/containerd/s/71aa612ff06ab455eca2888beea39e9e736282b465aef444320651a167ccf19e" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:14.416 [INFO][4485] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--fc7c666f6--mtrtx-eth0 calico-apiserver-fc7c666f6- calico-system bcebe7da-1239-4617-90b7-eb0a49f4ab99 1273 0 2026-03-03 13:44:10 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fc7c666f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-fc7c666f6-mtrtx eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calic356acf92cf [] [] }} ContainerID="9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" Namespace="calico-system" Pod="calico-apiserver-fc7c666f6-mtrtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc7c666f6--mtrtx-" Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:14.423 [INFO][4485] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" Namespace="calico-system" Pod="calico-apiserver-fc7c666f6-mtrtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc7c666f6--mtrtx-eth0" Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:14.725 [INFO][4571] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" HandleID="k8s-pod-network.9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" Workload="localhost-k8s-calico--apiserver--fc7c666f6--mtrtx-eth0" Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:14.790 [INFO][4571] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" HandleID="k8s-pod-network.9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" Workload="localhost-k8s-calico--apiserver--fc7c666f6--mtrtx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00010fb10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-fc7c666f6-mtrtx", "timestamp":"2026-03-03 13:46:14.725688064 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00052e000)} Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:14.790 [INFO][4571] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:14.790 [INFO][4571] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:14.792 [INFO][4571] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:14.809 [INFO][4571] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" host="localhost" Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:14.907 [INFO][4571] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:14.929 [INFO][4571] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:14.946 [INFO][4571] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:14.974 [INFO][4571] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:14.977 [INFO][4571] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" host="localhost" Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:14.986 [INFO][4571] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:15.013 [INFO][4571] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" host="localhost" Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:15.082 [INFO][4571] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" host="localhost" Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:15.088 [INFO][4571] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" host="localhost" Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:15.090 [INFO][4571] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 3 13:46:15.267411 containerd[1556]: 2026-03-03 13:46:15.096 [INFO][4571] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" HandleID="k8s-pod-network.9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" Workload="localhost-k8s-calico--apiserver--fc7c666f6--mtrtx-eth0" Mar 3 13:46:15.268691 containerd[1556]: 2026-03-03 13:46:15.116 [INFO][4485] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" Namespace="calico-system" Pod="calico-apiserver-fc7c666f6-mtrtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc7c666f6--mtrtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fc7c666f6--mtrtx-eth0", GenerateName:"calico-apiserver-fc7c666f6-", Namespace:"calico-system", SelfLink:"", UID:"bcebe7da-1239-4617-90b7-eb0a49f4ab99", ResourceVersion:"1273", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 44, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc7c666f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-fc7c666f6-mtrtx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic356acf92cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:46:15.268691 containerd[1556]: 2026-03-03 13:46:15.117 [INFO][4485] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" Namespace="calico-system" Pod="calico-apiserver-fc7c666f6-mtrtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc7c666f6--mtrtx-eth0" Mar 3 13:46:15.268691 containerd[1556]: 2026-03-03 13:46:15.117 [INFO][4485] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic356acf92cf ContainerID="9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" Namespace="calico-system" Pod="calico-apiserver-fc7c666f6-mtrtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc7c666f6--mtrtx-eth0" Mar 3 13:46:15.268691 containerd[1556]: 2026-03-03 13:46:15.148 [INFO][4485] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" Namespace="calico-system" Pod="calico-apiserver-fc7c666f6-mtrtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc7c666f6--mtrtx-eth0" Mar 3 13:46:15.268691 containerd[1556]: 2026-03-03 13:46:15.149 [INFO][4485] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" Namespace="calico-system" Pod="calico-apiserver-fc7c666f6-mtrtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc7c666f6--mtrtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fc7c666f6--mtrtx-eth0", GenerateName:"calico-apiserver-fc7c666f6-", Namespace:"calico-system", SelfLink:"", 
UID:"bcebe7da-1239-4617-90b7-eb0a49f4ab99", ResourceVersion:"1273", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 44, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc7c666f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d", Pod:"calico-apiserver-fc7c666f6-mtrtx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic356acf92cf", MAC:"16:41:09:79:b2:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:46:15.268691 containerd[1556]: 2026-03-03 13:46:15.227 [INFO][4485] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" Namespace="calico-system" Pod="calico-apiserver-fc7c666f6-mtrtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc7c666f6--mtrtx-eth0" Mar 3 13:46:15.446188 containerd[1556]: time="2026-03-03T13:46:15.446133965Z" level=info msg="connecting to shim 9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d" address="unix:///run/containerd/s/5f672ac2d07d3fbc26f90aa221367ca697e8e560eb507fe7e83a7587145cc7b9" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:46:15.532790 systemd-networkd[1442]: 
cali56a73d11870: Link UP Mar 3 13:46:15.546943 systemd-networkd[1442]: cali56a73d11870: Gained carrier Mar 3 13:46:15.555112 systemd[1]: Started cri-containerd-9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f.scope - libcontainer container 9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f. Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:14.161 [INFO][4516] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--fc7c666f6--9x69k-eth0 calico-apiserver-fc7c666f6- calico-system 86b8c8a1-11f6-4a45-9df7-233830b8adcc 1266 0 2026-03-03 13:44:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fc7c666f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-fc7c666f6-9x69k eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali56a73d11870 [] [] }} ContainerID="c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" Namespace="calico-system" Pod="calico-apiserver-fc7c666f6-9x69k" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc7c666f6--9x69k-" Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:14.205 [INFO][4516] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" Namespace="calico-system" Pod="calico-apiserver-fc7c666f6-9x69k" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc7c666f6--9x69k-eth0" Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:14.842 [INFO][4563] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" HandleID="k8s-pod-network.c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" 
Workload="localhost-k8s-calico--apiserver--fc7c666f6--9x69k-eth0" Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:14.891 [INFO][4563] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" HandleID="k8s-pod-network.c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" Workload="localhost-k8s-calico--apiserver--fc7c666f6--9x69k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000420e10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-fc7c666f6-9x69k", "timestamp":"2026-03-03 13:46:14.84245776 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000398580)} Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:14.891 [INFO][4563] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:15.093 [INFO][4563] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:15.098 [INFO][4563] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:15.115 [INFO][4563] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" host="localhost" Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:15.212 [INFO][4563] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:15.282 [INFO][4563] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:15.314 [INFO][4563] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:15.348 [INFO][4563] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:15.351 [INFO][4563] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" host="localhost" Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:15.374 [INFO][4563] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24 Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:15.432 [INFO][4563] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" host="localhost" Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:15.468 [INFO][4563] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" host="localhost" Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:15.468 [INFO][4563] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" host="localhost" Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:15.472 [INFO][4563] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:46:15.703202 containerd[1556]: 2026-03-03 13:46:15.472 [INFO][4563] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" HandleID="k8s-pod-network.c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" Workload="localhost-k8s-calico--apiserver--fc7c666f6--9x69k-eth0" Mar 3 13:46:15.706136 containerd[1556]: 2026-03-03 13:46:15.497 [INFO][4516] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" Namespace="calico-system" Pod="calico-apiserver-fc7c666f6-9x69k" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc7c666f6--9x69k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fc7c666f6--9x69k-eth0", GenerateName:"calico-apiserver-fc7c666f6-", Namespace:"calico-system", SelfLink:"", UID:"86b8c8a1-11f6-4a45-9df7-233830b8adcc", ResourceVersion:"1266", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 44, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc7c666f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-fc7c666f6-9x69k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali56a73d11870", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:46:15.706136 containerd[1556]: 2026-03-03 13:46:15.498 [INFO][4516] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" Namespace="calico-system" Pod="calico-apiserver-fc7c666f6-9x69k" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc7c666f6--9x69k-eth0" Mar 3 13:46:15.706136 containerd[1556]: 2026-03-03 13:46:15.498 [INFO][4516] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56a73d11870 ContainerID="c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" Namespace="calico-system" Pod="calico-apiserver-fc7c666f6-9x69k" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc7c666f6--9x69k-eth0" Mar 3 13:46:15.706136 containerd[1556]: 2026-03-03 13:46:15.577 [INFO][4516] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" Namespace="calico-system" Pod="calico-apiserver-fc7c666f6-9x69k" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc7c666f6--9x69k-eth0" Mar 3 13:46:15.706136 containerd[1556]: 2026-03-03 13:46:15.605 [INFO][4516] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" Namespace="calico-system" Pod="calico-apiserver-fc7c666f6-9x69k" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc7c666f6--9x69k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fc7c666f6--9x69k-eth0", GenerateName:"calico-apiserver-fc7c666f6-", Namespace:"calico-system", SelfLink:"", UID:"86b8c8a1-11f6-4a45-9df7-233830b8adcc", ResourceVersion:"1266", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 44, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc7c666f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24", Pod:"calico-apiserver-fc7c666f6-9x69k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali56a73d11870", MAC:"9a:f7:93:3a:22:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:46:15.706136 containerd[1556]: 2026-03-03 13:46:15.678 [INFO][4516] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" 
Namespace="calico-system" Pod="calico-apiserver-fc7c666f6-9x69k" WorkloadEndpoint="localhost-k8s-calico--apiserver--fc7c666f6--9x69k-eth0" Mar 3 13:46:15.718366 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 13:46:15.797555 systemd[1]: Started cri-containerd-9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d.scope - libcontainer container 9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d. Mar 3 13:46:15.904650 systemd-networkd[1442]: cali08301e548b2: Link UP Mar 3 13:46:15.912036 systemd-networkd[1442]: cali08301e548b2: Gained carrier Mar 3 13:46:15.977847 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 13:46:15.995743 containerd[1556]: time="2026-03-03T13:46:15.995203068Z" level=info msg="connecting to shim c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24" address="unix:///run/containerd/s/8bd6e6cebcb7f9761ceca9e90ec7576f869112789b0ba390675112e114072e5c" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:46:16.010403 containerd[1556]: time="2026-03-03T13:46:16.009047998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bnn4w,Uid:ff00a450-c4d4-4e77-8007-51142b9dc51b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f\"" Mar 3 13:46:16.028751 kubelet[2840]: E0303 13:46:16.027873 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:14.439 [INFO][4473] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--hn5jc-eth0 goldmane-cccfbd5cf- calico-system 336ae713-c8e4-4738-a9ca-31c5a3b0440d 1263 0 2026-03-03 
13:44:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-hn5jc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali08301e548b2 [] [] }} ContainerID="51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-hn5jc" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--hn5jc-" Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:14.439 [INFO][4473] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-hn5jc" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--hn5jc-eth0" Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:14.921 [INFO][4578] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" HandleID="k8s-pod-network.51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" Workload="localhost-k8s-goldmane--cccfbd5cf--hn5jc-eth0" Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:14.997 [INFO][4578] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" HandleID="k8s-pod-network.51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" Workload="localhost-k8s-goldmane--cccfbd5cf--hn5jc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003430e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-hn5jc", "timestamp":"2026-03-03 13:46:14.921081013 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000381600)} Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:14.998 [INFO][4578] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:15.477 [INFO][4578] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:15.477 [INFO][4578] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:15.495 [INFO][4578] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" host="localhost" Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:15.575 [INFO][4578] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:15.645 [INFO][4578] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:15.668 [INFO][4578] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:15.725 [INFO][4578] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:15.729 [INFO][4578] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" host="localhost" Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:15.745 [INFO][4578] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 
13:46:15.789 [INFO][4578] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" host="localhost" Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:15.822 [INFO][4578] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" host="localhost" Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:15.827 [INFO][4578] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" host="localhost" Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:15.829 [INFO][4578] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:46:16.044962 containerd[1556]: 2026-03-03 13:46:15.832 [INFO][4578] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" HandleID="k8s-pod-network.51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" Workload="localhost-k8s-goldmane--cccfbd5cf--hn5jc-eth0" Mar 3 13:46:16.048481 containerd[1556]: 2026-03-03 13:46:15.863 [INFO][4473] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-hn5jc" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--hn5jc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--hn5jc-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"336ae713-c8e4-4738-a9ca-31c5a3b0440d", ResourceVersion:"1263", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 44, 10, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-hn5jc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali08301e548b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:46:16.048481 containerd[1556]: 2026-03-03 13:46:15.864 [INFO][4473] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-hn5jc" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--hn5jc-eth0" Mar 3 13:46:16.048481 containerd[1556]: 2026-03-03 13:46:15.864 [INFO][4473] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08301e548b2 ContainerID="51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-hn5jc" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--hn5jc-eth0" Mar 3 13:46:16.048481 containerd[1556]: 2026-03-03 13:46:15.935 [INFO][4473] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-hn5jc" 
WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--hn5jc-eth0" Mar 3 13:46:16.048481 containerd[1556]: 2026-03-03 13:46:15.951 [INFO][4473] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-hn5jc" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--hn5jc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--hn5jc-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"336ae713-c8e4-4738-a9ca-31c5a3b0440d", ResourceVersion:"1263", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 44, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d", Pod:"goldmane-cccfbd5cf-hn5jc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali08301e548b2", MAC:"6e:c7:09:27:f1:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:46:16.048481 containerd[1556]: 2026-03-03 13:46:16.020 [INFO][4473] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-hn5jc" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--hn5jc-eth0" Mar 3 13:46:16.059787 containerd[1556]: time="2026-03-03T13:46:16.058159134Z" level=info msg="CreateContainer within sandbox \"9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 3 13:46:16.222169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2984672042.mount: Deactivated successfully. Mar 3 13:46:16.232817 containerd[1556]: time="2026-03-03T13:46:16.230938174Z" level=info msg="Container 565f09453ba404f07dae8b7ff4546caa3fe34d8ed68cb698ae7987b270ee3b55: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:46:16.313643 systemd-networkd[1442]: cali2717b4b8de5: Link UP Mar 3 13:46:16.320797 systemd-networkd[1442]: cali2717b4b8de5: Gained carrier Mar 3 13:46:16.349686 systemd[1]: Started cri-containerd-c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24.scope - libcontainer container c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24. 
Mar 3 13:46:16.397249 containerd[1556]: time="2026-03-03T13:46:16.397189960Z" level=info msg="CreateContainer within sandbox \"9aae4e284bcaa6d0211e8f79265e251678608584741578fb7288f82fc1a1783f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"565f09453ba404f07dae8b7ff4546caa3fe34d8ed68cb698ae7987b270ee3b55\"" Mar 3 13:46:16.412653 containerd[1556]: time="2026-03-03T13:46:16.409931922Z" level=info msg="connecting to shim 51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d" address="unix:///run/containerd/s/b5d5deec25060e1065d1c21832eaeaafc4ab6d31489197ddc1e934b74204d932" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:46:16.413821 containerd[1556]: time="2026-03-03T13:46:16.413173954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc7c666f6-mtrtx,Uid:bcebe7da-1239-4617-90b7-eb0a49f4ab99,Namespace:calico-system,Attempt:0,} returns sandbox id \"9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d\"" Mar 3 13:46:16.420401 containerd[1556]: time="2026-03-03T13:46:16.420090784Z" level=info msg="StartContainer for \"565f09453ba404f07dae8b7ff4546caa3fe34d8ed68cb698ae7987b270ee3b55\"" Mar 3 13:46:16.427760 containerd[1556]: time="2026-03-03T13:46:16.427615578Z" level=info msg="connecting to shim 565f09453ba404f07dae8b7ff4546caa3fe34d8ed68cb698ae7987b270ee3b55" address="unix:///run/containerd/s/71aa612ff06ab455eca2888beea39e9e736282b465aef444320651a167ccf19e" protocol=ttrpc version=3 Mar 3 13:46:16.448414 systemd-networkd[1442]: cali02f5c20f6a7: Gained IPv6LL Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:14.080 [INFO][4438] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5f5ffcc77c--mxflh-eth0 calico-kube-controllers-5f5ffcc77c- calico-system aa113212-d6fe-4f61-a3b2-6710b541c030 1271 0 2026-03-03 13:44:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers 
k8s-app:calico-kube-controllers pod-template-hash:5f5ffcc77c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5f5ffcc77c-mxflh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2717b4b8de5 [] [] }} ContainerID="63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" Namespace="calico-system" Pod="calico-kube-controllers-5f5ffcc77c-mxflh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5ffcc77c--mxflh-" Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:14.085 [INFO][4438] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" Namespace="calico-system" Pod="calico-kube-controllers-5f5ffcc77c-mxflh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5ffcc77c--mxflh-eth0" Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:14.927 [INFO][4569] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" HandleID="k8s-pod-network.63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" Workload="localhost-k8s-calico--kube--controllers--5f5ffcc77c--mxflh-eth0" Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:15.037 [INFO][4569] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" HandleID="k8s-pod-network.63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" Workload="localhost-k8s-calico--kube--controllers--5f5ffcc77c--mxflh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006846d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5f5ffcc77c-mxflh", "timestamp":"2026-03-03 13:46:14.927085906 
+0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00020c160)} Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:15.037 [INFO][4569] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:15.827 [INFO][4569] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:15.828 [INFO][4569] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:15.865 [INFO][4569] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" host="localhost" Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:15.952 [INFO][4569] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:15.983 [INFO][4569] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:16.019 [INFO][4569] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:16.029 [INFO][4569] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:16.029 [INFO][4569] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" host="localhost" Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:16.054 [INFO][4569] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:16.115 [INFO][4569] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" host="localhost" Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:16.168 [INFO][4569] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" host="localhost" Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:16.168 [INFO][4569] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" host="localhost" Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:16.168 [INFO][4569] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 3 13:46:16.469457 containerd[1556]: 2026-03-03 13:46:16.168 [INFO][4569] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" HandleID="k8s-pod-network.63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" Workload="localhost-k8s-calico--kube--controllers--5f5ffcc77c--mxflh-eth0" Mar 3 13:46:16.470759 containerd[1556]: 2026-03-03 13:46:16.244 [INFO][4438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" Namespace="calico-system" Pod="calico-kube-controllers-5f5ffcc77c-mxflh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5ffcc77c--mxflh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f5ffcc77c--mxflh-eth0", GenerateName:"calico-kube-controllers-5f5ffcc77c-", Namespace:"calico-system", SelfLink:"", UID:"aa113212-d6fe-4f61-a3b2-6710b541c030", ResourceVersion:"1271", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 44, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f5ffcc77c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5f5ffcc77c-mxflh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2717b4b8de5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:46:16.470759 containerd[1556]: 2026-03-03 13:46:16.253 [INFO][4438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" Namespace="calico-system" Pod="calico-kube-controllers-5f5ffcc77c-mxflh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5ffcc77c--mxflh-eth0" Mar 3 13:46:16.470759 containerd[1556]: 2026-03-03 13:46:16.284 [INFO][4438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2717b4b8de5 ContainerID="63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" Namespace="calico-system" Pod="calico-kube-controllers-5f5ffcc77c-mxflh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5ffcc77c--mxflh-eth0" Mar 3 13:46:16.470759 containerd[1556]: 2026-03-03 13:46:16.323 [INFO][4438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" Namespace="calico-system" Pod="calico-kube-controllers-5f5ffcc77c-mxflh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5ffcc77c--mxflh-eth0" Mar 3 13:46:16.470759 containerd[1556]: 2026-03-03 13:46:16.330 [INFO][4438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" Namespace="calico-system" Pod="calico-kube-controllers-5f5ffcc77c-mxflh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5ffcc77c--mxflh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f5ffcc77c--mxflh-eth0", GenerateName:"calico-kube-controllers-5f5ffcc77c-", Namespace:"calico-system", SelfLink:"", UID:"aa113212-d6fe-4f61-a3b2-6710b541c030", ResourceVersion:"1271", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 44, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f5ffcc77c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf", Pod:"calico-kube-controllers-5f5ffcc77c-mxflh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2717b4b8de5", MAC:"c6:87:0e:9b:57:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:46:16.470759 containerd[1556]: 2026-03-03 13:46:16.386 [INFO][4438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" Namespace="calico-system" Pod="calico-kube-controllers-5f5ffcc77c-mxflh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f5ffcc77c--mxflh-eth0" Mar 3 13:46:16.474039 containerd[1556]: time="2026-03-03T13:46:16.472253287Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 3 13:46:16.512258 systemd-networkd[1442]: calic356acf92cf: Gained IPv6LL Mar 3 13:46:16.526097 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 13:46:16.614606 systemd-networkd[1442]: califc1eab0c719: Link UP Mar 3 13:46:16.628036 systemd-networkd[1442]: califc1eab0c719: Gained carrier Mar 3 13:46:16.644776 systemd-networkd[1442]: cali56a73d11870: Gained IPv6LL Mar 3 13:46:16.666727 systemd[1]: Started cri-containerd-565f09453ba404f07dae8b7ff4546caa3fe34d8ed68cb698ae7987b270ee3b55.scope - libcontainer container 565f09453ba404f07dae8b7ff4546caa3fe34d8ed68cb698ae7987b270ee3b55. Mar 3 13:46:16.718903 systemd[1]: Started cri-containerd-51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d.scope - libcontainer container 51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d. Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:13.785 [INFO][4439] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--79dbfb954--6l64x-eth0 whisker-79dbfb954- calico-system 7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43 1268 0 2026-03-03 13:44:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79dbfb954 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-79dbfb954-6l64x eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califc1eab0c719 [] [] }} ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Namespace="calico-system" Pod="whisker-79dbfb954-6l64x" WorkloadEndpoint="localhost-k8s-whisker--79dbfb954--6l64x-" Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:13.795 [INFO][4439] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Namespace="calico-system" Pod="whisker-79dbfb954-6l64x" WorkloadEndpoint="localhost-k8s-whisker--79dbfb954--6l64x-eth0" Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:14.998 [INFO][4553] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" HandleID="k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Workload="localhost-k8s-whisker--79dbfb954--6l64x-eth0" Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:15.050 [INFO][4553] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" HandleID="k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Workload="localhost-k8s-whisker--79dbfb954--6l64x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000be700), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-79dbfb954-6l64x", "timestamp":"2026-03-03 13:46:14.998691055 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00031edc0)} Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:15.051 [INFO][4553] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:16.169 [INFO][4553] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:16.169 [INFO][4553] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:16.185 [INFO][4553] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" host="localhost" Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:16.248 [INFO][4553] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:16.299 [INFO][4553] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:16.365 [INFO][4553] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:16.395 [INFO][4553] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:16.395 [INFO][4553] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" host="localhost" Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:16.433 [INFO][4553] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:16.466 [INFO][4553] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" host="localhost" Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:16.502 [INFO][4553] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" host="localhost" Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:16.502 [INFO][4553] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" host="localhost" Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:16.507 [INFO][4553] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:46:16.821661 containerd[1556]: 2026-03-03 13:46:16.513 [INFO][4553] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" HandleID="k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Workload="localhost-k8s-whisker--79dbfb954--6l64x-eth0" Mar 3 13:46:16.822791 containerd[1556]: 2026-03-03 13:46:16.557 [INFO][4439] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Namespace="calico-system" Pod="whisker-79dbfb954-6l64x" WorkloadEndpoint="localhost-k8s-whisker--79dbfb954--6l64x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79dbfb954--6l64x-eth0", GenerateName:"whisker-79dbfb954-", Namespace:"calico-system", SelfLink:"", UID:"7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43", ResourceVersion:"1268", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 44, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79dbfb954", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-79dbfb954-6l64x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califc1eab0c719", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:46:16.822791 containerd[1556]: 2026-03-03 13:46:16.558 [INFO][4439] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Namespace="calico-system" Pod="whisker-79dbfb954-6l64x" WorkloadEndpoint="localhost-k8s-whisker--79dbfb954--6l64x-eth0" Mar 3 13:46:16.822791 containerd[1556]: 2026-03-03 13:46:16.559 [INFO][4439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc1eab0c719 ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Namespace="calico-system" Pod="whisker-79dbfb954-6l64x" WorkloadEndpoint="localhost-k8s-whisker--79dbfb954--6l64x-eth0" Mar 3 13:46:16.822791 containerd[1556]: 2026-03-03 13:46:16.629 [INFO][4439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Namespace="calico-system" Pod="whisker-79dbfb954-6l64x" WorkloadEndpoint="localhost-k8s-whisker--79dbfb954--6l64x-eth0" Mar 3 13:46:16.822791 containerd[1556]: 2026-03-03 13:46:16.673 [INFO][4439] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Namespace="calico-system" Pod="whisker-79dbfb954-6l64x" 
WorkloadEndpoint="localhost-k8s-whisker--79dbfb954--6l64x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79dbfb954--6l64x-eth0", GenerateName:"whisker-79dbfb954-", Namespace:"calico-system", SelfLink:"", UID:"7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43", ResourceVersion:"1268", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 44, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79dbfb954", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c", Pod:"whisker-79dbfb954-6l64x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califc1eab0c719", MAC:"8a:47:b2:11:58:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:46:16.822791 containerd[1556]: 2026-03-03 13:46:16.779 [INFO][4439] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Namespace="calico-system" Pod="whisker-79dbfb954-6l64x" WorkloadEndpoint="localhost-k8s-whisker--79dbfb954--6l64x-eth0" Mar 3 13:46:16.916073 containerd[1556]: time="2026-03-03T13:46:16.911025060Z" level=info msg="connecting to shim 
63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf" address="unix:///run/containerd/s/d966073e5c9357c76997e9d5617ca77da6efdd7bedcd8879faf7f8a3314ad9e9" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:46:16.933874 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 13:46:17.032698 systemd-networkd[1442]: cali4e8226248ae: Link UP Mar 3 13:46:17.033058 systemd-networkd[1442]: cali4e8226248ae: Gained carrier Mar 3 13:46:17.039396 containerd[1556]: time="2026-03-03T13:46:17.038755357Z" level=info msg="connecting to shim 2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" address="unix:///run/containerd/s/a5374646e96ce4a56bc82528df0f397c7deeaf90721cf7061bfc57e41a423f09" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:46:17.102779 systemd-networkd[1442]: cali08301e548b2: Gained IPv6LL Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:14.742 [INFO][4515] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--mnk57-eth0 coredns-66bc5c9577- kube-system 7281a48e-7aea-4dfa-a2c0-dfba1b9bafb6 1269 0 2026-03-03 13:43:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-mnk57 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4e8226248ae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" Namespace="kube-system" Pod="coredns-66bc5c9577-mnk57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mnk57-" Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:14.742 [INFO][4515] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" Namespace="kube-system" Pod="coredns-66bc5c9577-mnk57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mnk57-eth0" Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:15.042 [INFO][4606] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" HandleID="k8s-pod-network.ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" Workload="localhost-k8s-coredns--66bc5c9577--mnk57-eth0" Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:15.079 [INFO][4606] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" HandleID="k8s-pod-network.ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" Workload="localhost-k8s-coredns--66bc5c9577--mnk57-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000396d30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-mnk57", "timestamp":"2026-03-03 13:46:15.042650621 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000459600)} Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:15.079 [INFO][4606] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:16.502 [INFO][4606] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:16.503 [INFO][4606] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:16.527 [INFO][4606] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" host="localhost" Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:16.563 [INFO][4606] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:16.695 [INFO][4606] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:16.710 [INFO][4606] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:16.775 [INFO][4606] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:16.776 [INFO][4606] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" host="localhost" Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:16.810 [INFO][4606] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:16.880 [INFO][4606] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" host="localhost" Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:16.925 [INFO][4606] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" host="localhost" Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:16.927 [INFO][4606] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" host="localhost" Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:16.927 [INFO][4606] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:46:17.190234 containerd[1556]: 2026-03-03 13:46:16.927 [INFO][4606] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" HandleID="k8s-pod-network.ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" Workload="localhost-k8s-coredns--66bc5c9577--mnk57-eth0" Mar 3 13:46:17.191729 containerd[1556]: 2026-03-03 13:46:16.976 [INFO][4515] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" Namespace="kube-system" Pod="coredns-66bc5c9577-mnk57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mnk57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--mnk57-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7281a48e-7aea-4dfa-a2c0-dfba1b9bafb6", ResourceVersion:"1269", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 43, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-mnk57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e8226248ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:46:17.191729 containerd[1556]: 2026-03-03 13:46:16.977 [INFO][4515] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" Namespace="kube-system" Pod="coredns-66bc5c9577-mnk57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mnk57-eth0" Mar 3 13:46:17.191729 containerd[1556]: 2026-03-03 13:46:16.977 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e8226248ae ContainerID="ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" Namespace="kube-system" Pod="coredns-66bc5c9577-mnk57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mnk57-eth0" Mar 3 
13:46:17.191729 containerd[1556]: 2026-03-03 13:46:17.030 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" Namespace="kube-system" Pod="coredns-66bc5c9577-mnk57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mnk57-eth0" Mar 3 13:46:17.191729 containerd[1556]: 2026-03-03 13:46:17.032 [INFO][4515] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" Namespace="kube-system" Pod="coredns-66bc5c9577-mnk57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mnk57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--mnk57-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7281a48e-7aea-4dfa-a2c0-dfba1b9bafb6", ResourceVersion:"1269", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 43, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb", Pod:"coredns-66bc5c9577-mnk57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e8226248ae", MAC:"ae:b6:30:8f:e4:02", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:46:17.191729 containerd[1556]: 2026-03-03 13:46:17.087 [INFO][4515] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" Namespace="kube-system" Pod="coredns-66bc5c9577-mnk57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mnk57-eth0" Mar 3 13:46:17.229936 systemd[1]: Started cri-containerd-63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf.scope - libcontainer container 63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf. 
Mar 3 13:46:17.240984 containerd[1556]: time="2026-03-03T13:46:17.240922528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc7c666f6-9x69k,Uid:86b8c8a1-11f6-4a45-9df7-233830b8adcc,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24\"" Mar 3 13:46:17.360098 containerd[1556]: time="2026-03-03T13:46:17.359763447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-hn5jc,Uid:336ae713-c8e4-4738-a9ca-31c5a3b0440d,Namespace:calico-system,Attempt:0,} returns sandbox id \"51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d\"" Mar 3 13:46:17.413609 containerd[1556]: time="2026-03-03T13:46:17.413003760Z" level=info msg="StartContainer for \"565f09453ba404f07dae8b7ff4546caa3fe34d8ed68cb698ae7987b270ee3b55\" returns successfully" Mar 3 13:46:17.569242 systemd[1]: Started cri-containerd-2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c.scope - libcontainer container 2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c. 
Mar 3 13:46:17.628763 containerd[1556]: time="2026-03-03T13:46:17.626613561Z" level=info msg="connecting to shim ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb" address="unix:///run/containerd/s/f0042a6ed12245fa0fc7153c7a25c263be7422ebddd0607abbfa906f170ce29a" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:46:17.641485 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 13:46:17.734792 kubelet[2840]: E0303 13:46:17.734647 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:46:17.817837 systemd[1]: Started cri-containerd-ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb.scope - libcontainer container ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb. Mar 3 13:46:17.829859 kubelet[2840]: I0303 13:46:17.827015 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bnn4w" podStartSLOduration=196.826851382 podStartE2EDuration="3m16.826851382s" podCreationTimestamp="2026-03-03 13:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:46:17.820394178 +0000 UTC m=+200.751980240" watchObservedRunningTime="2026-03-03 13:46:17.826851382 +0000 UTC m=+200.758437405" Mar 3 13:46:17.919585 systemd-networkd[1442]: cali2717b4b8de5: Gained IPv6LL Mar 3 13:46:17.929592 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 13:46:17.986430 systemd-networkd[1442]: califc1eab0c719: Gained IPv6LL Mar 3 13:46:18.192172 containerd[1556]: time="2026-03-03T13:46:18.192050798Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-mnk57,Uid:7281a48e-7aea-4dfa-a2c0-dfba1b9bafb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb\"" Mar 3 13:46:18.195934 containerd[1556]: time="2026-03-03T13:46:18.195893017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79dbfb954-6l64x,Uid:7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43,Namespace:calico-system,Attempt:0,} returns sandbox id \"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\"" Mar 3 13:46:18.196888 kubelet[2840]: E0303 13:46:18.196801 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:46:18.210829 containerd[1556]: time="2026-03-03T13:46:18.210708985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f5ffcc77c-mxflh,Uid:aa113212-d6fe-4f61-a3b2-6710b541c030,Namespace:calico-system,Attempt:0,} returns sandbox id \"63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf\"" Mar 3 13:46:18.232841 containerd[1556]: time="2026-03-03T13:46:18.232784962Z" level=info msg="CreateContainer within sandbox \"ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 3 13:46:18.322668 containerd[1556]: time="2026-03-03T13:46:18.322442074Z" level=info msg="Container 707f6c43e30e37e14ff5a4c6501f2d7df07369be66507d3bda2dbe5c3576b04f: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:46:18.362397 containerd[1556]: time="2026-03-03T13:46:18.360871080Z" level=info msg="CreateContainer within sandbox \"ea13f2dea074000a748e43780f575600bccfbf6cde51ba0694ac323ceb1c03cb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"707f6c43e30e37e14ff5a4c6501f2d7df07369be66507d3bda2dbe5c3576b04f\"" Mar 3 13:46:18.365899 containerd[1556]: time="2026-03-03T13:46:18.365183467Z" 
level=info msg="StartContainer for \"707f6c43e30e37e14ff5a4c6501f2d7df07369be66507d3bda2dbe5c3576b04f\"" Mar 3 13:46:18.373959 containerd[1556]: time="2026-03-03T13:46:18.373584669Z" level=info msg="connecting to shim 707f6c43e30e37e14ff5a4c6501f2d7df07369be66507d3bda2dbe5c3576b04f" address="unix:///run/containerd/s/f0042a6ed12245fa0fc7153c7a25c263be7422ebddd0607abbfa906f170ce29a" protocol=ttrpc version=3 Mar 3 13:46:18.537752 systemd[1]: Started cri-containerd-707f6c43e30e37e14ff5a4c6501f2d7df07369be66507d3bda2dbe5c3576b04f.scope - libcontainer container 707f6c43e30e37e14ff5a4c6501f2d7df07369be66507d3bda2dbe5c3576b04f. Mar 3 13:46:18.813195 containerd[1556]: time="2026-03-03T13:46:18.813031661Z" level=info msg="StartContainer for \"707f6c43e30e37e14ff5a4c6501f2d7df07369be66507d3bda2dbe5c3576b04f\" returns successfully" Mar 3 13:46:18.874403 kubelet[2840]: E0303 13:46:18.871671 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:46:19.091482 systemd-networkd[1442]: cali4e8226248ae: Gained IPv6LL Mar 3 13:46:19.102166 systemd[1]: Started sshd@10-10.0.0.57:22-10.0.0.1:49018.service - OpenSSH per-connection server daemon (10.0.0.1:49018). Mar 3 13:46:19.570063 sshd[5101]: Accepted publickey for core from 10.0.0.1 port 49018 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI Mar 3 13:46:19.581397 sshd-session[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:46:19.618974 systemd-logind[1547]: New session 11 of user core. Mar 3 13:46:19.660076 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 3 13:46:19.895893 kubelet[2840]: E0303 13:46:19.895156 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:46:19.911915 kubelet[2840]: E0303 13:46:19.904722 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:46:20.089719 kubelet[2840]: I0303 13:46:20.081249 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mnk57" podStartSLOduration=200.081225734 podStartE2EDuration="3m20.081225734s" podCreationTimestamp="2026-03-03 13:43:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:46:20.006818101 +0000 UTC m=+202.938404224" watchObservedRunningTime="2026-03-03 13:46:20.081225734 +0000 UTC m=+203.012811757" Mar 3 13:46:20.296909 sshd[5113]: Connection closed by 10.0.0.1 port 49018 Mar 3 13:46:20.295918 sshd-session[5101]: pam_unix(sshd:session): session closed for user core Mar 3 13:46:20.308605 systemd[1]: sshd@10-10.0.0.57:22-10.0.0.1:49018.service: Deactivated successfully. Mar 3 13:46:20.319237 systemd[1]: session-11.scope: Deactivated successfully. Mar 3 13:46:20.325764 systemd-logind[1547]: Session 11 logged out. Waiting for processes to exit. Mar 3 13:46:20.337222 systemd-logind[1547]: Removed session 11. 
Mar 3 13:46:20.917200 kubelet[2840]: E0303 13:46:20.917081 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:46:20.919914 kubelet[2840]: E0303 13:46:20.918056 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:46:21.925897 kubelet[2840]: E0303 13:46:21.925855 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:46:25.347877 systemd[1]: Started sshd@11-10.0.0.57:22-10.0.0.1:49028.service - OpenSSH per-connection server daemon (10.0.0.1:49028). Mar 3 13:46:25.449329 containerd[1556]: time="2026-03-03T13:46:25.449199840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:46:25.451747 containerd[1556]: time="2026-03-03T13:46:25.450194654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 3 13:46:25.468457 containerd[1556]: time="2026-03-03T13:46:25.466904560Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:46:25.478691 containerd[1556]: time="2026-03-03T13:46:25.478632600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:46:25.479474 containerd[1556]: time="2026-03-03T13:46:25.479398408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 9.006232443s" Mar 3 13:46:25.479474 containerd[1556]: time="2026-03-03T13:46:25.479437070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 3 13:46:25.501133 containerd[1556]: time="2026-03-03T13:46:25.499225221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 3 13:46:25.520406 containerd[1556]: time="2026-03-03T13:46:25.516021936Z" level=info msg="CreateContainer within sandbox \"9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 3 13:46:25.580581 containerd[1556]: time="2026-03-03T13:46:25.580451586Z" level=info msg="Container 6117a14169f9f35d2a935435d075d60acd6cfd89e804a8b579d2529cb787ddb0: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:46:25.624615 containerd[1556]: time="2026-03-03T13:46:25.621158348Z" level=info msg="CreateContainer within sandbox \"9a1bee19bc9b7520c70a286cea00d18c1ba3d0801a84c08eef157eab32e4c22d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6117a14169f9f35d2a935435d075d60acd6cfd89e804a8b579d2529cb787ddb0\"" Mar 3 13:46:25.624615 containerd[1556]: time="2026-03-03T13:46:25.624103062Z" level=info msg="StartContainer for \"6117a14169f9f35d2a935435d075d60acd6cfd89e804a8b579d2529cb787ddb0\"" Mar 3 13:46:25.638866 containerd[1556]: time="2026-03-03T13:46:25.638619923Z" level=info msg="connecting to shim 6117a14169f9f35d2a935435d075d60acd6cfd89e804a8b579d2529cb787ddb0" address="unix:///run/containerd/s/5f672ac2d07d3fbc26f90aa221367ca697e8e560eb507fe7e83a7587145cc7b9" protocol=ttrpc version=3 Mar 3 
13:46:25.656138 sshd[5139]: Accepted publickey for core from 10.0.0.1 port 49028 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI Mar 3 13:46:25.685596 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:46:25.736493 systemd-logind[1547]: New session 12 of user core. Mar 3 13:46:25.747620 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 3 13:46:25.837494 systemd[1]: Started cri-containerd-6117a14169f9f35d2a935435d075d60acd6cfd89e804a8b579d2529cb787ddb0.scope - libcontainer container 6117a14169f9f35d2a935435d075d60acd6cfd89e804a8b579d2529cb787ddb0. Mar 3 13:46:25.879987 containerd[1556]: time="2026-03-03T13:46:25.876929519Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:46:25.886028 containerd[1556]: time="2026-03-03T13:46:25.884956333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 3 13:46:25.893202 containerd[1556]: time="2026-03-03T13:46:25.890109534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 390.655555ms" Mar 3 13:46:25.893202 containerd[1556]: time="2026-03-03T13:46:25.890221442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 3 13:46:25.901971 containerd[1556]: time="2026-03-03T13:46:25.899225432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 3 13:46:25.921177 containerd[1556]: time="2026-03-03T13:46:25.918477289Z" level=info 
msg="CreateContainer within sandbox \"c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 3 13:46:25.972458 containerd[1556]: time="2026-03-03T13:46:25.972219154Z" level=info msg="Container ccef4136d4de449dae6a539158ff9bfb498762a058e253229cd015709bf4ff8c: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:46:26.053250 containerd[1556]: time="2026-03-03T13:46:26.053126182Z" level=info msg="CreateContainer within sandbox \"c1b1dd83d735103802563aab9ca3af5612f987c9cb3e740db3816c97a5c53d24\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ccef4136d4de449dae6a539158ff9bfb498762a058e253229cd015709bf4ff8c\"" Mar 3 13:46:26.065946 containerd[1556]: time="2026-03-03T13:46:26.065823653Z" level=info msg="StartContainer for \"ccef4136d4de449dae6a539158ff9bfb498762a058e253229cd015709bf4ff8c\"" Mar 3 13:46:26.077012 containerd[1556]: time="2026-03-03T13:46:26.076617002Z" level=info msg="connecting to shim ccef4136d4de449dae6a539158ff9bfb498762a058e253229cd015709bf4ff8c" address="unix:///run/containerd/s/8bd6e6cebcb7f9761ceca9e90ec7576f869112789b0ba390675112e114072e5c" protocol=ttrpc version=3 Mar 3 13:46:26.173985 systemd[1]: Started cri-containerd-ccef4136d4de449dae6a539158ff9bfb498762a058e253229cd015709bf4ff8c.scope - libcontainer container ccef4136d4de449dae6a539158ff9bfb498762a058e253229cd015709bf4ff8c. Mar 3 13:46:26.235827 containerd[1556]: time="2026-03-03T13:46:26.235718618Z" level=info msg="StartContainer for \"6117a14169f9f35d2a935435d075d60acd6cfd89e804a8b579d2529cb787ddb0\" returns successfully" Mar 3 13:46:26.293432 sshd[5155]: Connection closed by 10.0.0.1 port 49028 Mar 3 13:46:26.294452 sshd-session[5139]: pam_unix(sshd:session): session closed for user core Mar 3 13:46:26.309653 systemd[1]: sshd@11-10.0.0.57:22-10.0.0.1:49028.service: Deactivated successfully. Mar 3 13:46:26.317872 systemd[1]: session-12.scope: Deactivated successfully. 
Mar 3 13:46:26.326957 systemd-logind[1547]: Session 12 logged out. Waiting for processes to exit. Mar 3 13:46:26.333214 systemd-logind[1547]: Removed session 12. Mar 3 13:46:26.449445 containerd[1556]: time="2026-03-03T13:46:26.447819606Z" level=info msg="StartContainer for \"ccef4136d4de449dae6a539158ff9bfb498762a058e253229cd015709bf4ff8c\" returns successfully" Mar 3 13:46:27.144491 kubelet[2840]: I0303 13:46:27.144020 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-fc7c666f6-mtrtx" podStartSLOduration=128.084002627 podStartE2EDuration="2m17.143992701s" podCreationTimestamp="2026-03-03 13:44:10 +0000 UTC" firstStartedPulling="2026-03-03 13:46:16.424214326 +0000 UTC m=+199.355800349" lastFinishedPulling="2026-03-03 13:46:25.48420438 +0000 UTC m=+208.415790423" observedRunningTime="2026-03-03 13:46:27.094869335 +0000 UTC m=+210.026455359" watchObservedRunningTime="2026-03-03 13:46:27.143992701 +0000 UTC m=+210.075578764" Mar 3 13:46:28.473732 kubelet[2840]: I0303 13:46:28.472083 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-fc7c666f6-9x69k" podStartSLOduration=129.83889543 podStartE2EDuration="2m18.472059847s" podCreationTimestamp="2026-03-03 13:44:10 +0000 UTC" firstStartedPulling="2026-03-03 13:46:17.259052279 +0000 UTC m=+200.190638312" lastFinishedPulling="2026-03-03 13:46:25.892216706 +0000 UTC m=+208.823802729" observedRunningTime="2026-03-03 13:46:27.14698485 +0000 UTC m=+210.078570883" watchObservedRunningTime="2026-03-03 13:46:28.472059847 +0000 UTC m=+211.403645880" Mar 3 13:46:29.049992 kubelet[2840]: I0303 13:46:29.049892 2840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 3 13:46:31.199777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4026328517.mount: Deactivated successfully. 
Mar 3 13:46:31.313502 systemd[1]: Started sshd@12-10.0.0.57:22-10.0.0.1:54210.service - OpenSSH per-connection server daemon (10.0.0.1:54210). Mar 3 13:46:31.530055 sshd[5292]: Accepted publickey for core from 10.0.0.1 port 54210 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI Mar 3 13:46:31.532792 sshd-session[5292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:46:31.548044 systemd-logind[1547]: New session 13 of user core. Mar 3 13:46:31.556937 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 3 13:46:32.158357 sshd[5295]: Connection closed by 10.0.0.1 port 54210 Mar 3 13:46:32.159738 sshd-session[5292]: pam_unix(sshd:session): session closed for user core Mar 3 13:46:32.170092 systemd[1]: sshd@12-10.0.0.57:22-10.0.0.1:54210.service: Deactivated successfully. Mar 3 13:46:32.176596 systemd[1]: session-13.scope: Deactivated successfully. Mar 3 13:46:32.180798 systemd-logind[1547]: Session 13 logged out. Waiting for processes to exit. Mar 3 13:46:32.184725 systemd-logind[1547]: Removed session 13. 
Mar 3 13:46:32.415475 containerd[1556]: time="2026-03-03T13:46:32.415081157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:46:32.417825 containerd[1556]: time="2026-03-03T13:46:32.417260608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 3 13:46:32.421452 containerd[1556]: time="2026-03-03T13:46:32.420182934Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:46:32.427593 containerd[1556]: time="2026-03-03T13:46:32.427419055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:46:32.428783 containerd[1556]: time="2026-03-03T13:46:32.428691716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 6.5212326s"
Mar 3 13:46:32.428783 containerd[1556]: time="2026-03-03T13:46:32.428734286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 3 13:46:32.431998 containerd[1556]: time="2026-03-03T13:46:32.431864457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Mar 3 13:46:32.447054 containerd[1556]: time="2026-03-03T13:46:32.446901993Z" level=info msg="CreateContainer within sandbox \"51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 3 13:46:32.466751 containerd[1556]: time="2026-03-03T13:46:32.466624032Z" level=info msg="Container 0832f1eccdce0112920449d8235dda1fd8e8003dec74b0138a3d5e72c2a5354d: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:46:32.487237 containerd[1556]: time="2026-03-03T13:46:32.487084639Z" level=info msg="CreateContainer within sandbox \"51b97c5a7ff9c20d3e611d3f4ebf4a5c6e977a5c700e5961240471ff4d970d8d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0832f1eccdce0112920449d8235dda1fd8e8003dec74b0138a3d5e72c2a5354d\""
Mar 3 13:46:32.488725 containerd[1556]: time="2026-03-03T13:46:32.488616410Z" level=info msg="StartContainer for \"0832f1eccdce0112920449d8235dda1fd8e8003dec74b0138a3d5e72c2a5354d\""
Mar 3 13:46:32.489909 containerd[1556]: time="2026-03-03T13:46:32.489742697Z" level=info msg="connecting to shim 0832f1eccdce0112920449d8235dda1fd8e8003dec74b0138a3d5e72c2a5354d" address="unix:///run/containerd/s/b5d5deec25060e1065d1c21832eaeaafc4ab6d31489197ddc1e934b74204d932" protocol=ttrpc version=3
Mar 3 13:46:32.551261 systemd[1]: Started cri-containerd-0832f1eccdce0112920449d8235dda1fd8e8003dec74b0138a3d5e72c2a5354d.scope - libcontainer container 0832f1eccdce0112920449d8235dda1fd8e8003dec74b0138a3d5e72c2a5354d.
Mar 3 13:46:32.758469 containerd[1556]: time="2026-03-03T13:46:32.758428113Z" level=info msg="StartContainer for \"0832f1eccdce0112920449d8235dda1fd8e8003dec74b0138a3d5e72c2a5354d\" returns successfully"
Mar 3 13:46:33.122171 kubelet[2840]: I0303 13:46:33.121934 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-hn5jc" podStartSLOduration=128.070820318 podStartE2EDuration="2m23.121918742s" podCreationTimestamp="2026-03-03 13:44:10 +0000 UTC" firstStartedPulling="2026-03-03 13:46:17.379652343 +0000 UTC m=+200.311238365" lastFinishedPulling="2026-03-03 13:46:32.430750766 +0000 UTC m=+215.362336789" observedRunningTime="2026-03-03 13:46:33.119744193 +0000 UTC m=+216.051330217" watchObservedRunningTime="2026-03-03 13:46:33.121918742 +0000 UTC m=+216.053504765"
Mar 3 13:46:33.137883 kubelet[2840]: E0303 13:46:33.137731 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:46:34.014164 containerd[1556]: time="2026-03-03T13:46:34.014025137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:46:34.015834 containerd[1556]: time="2026-03-03T13:46:34.015703912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Mar 3 13:46:34.017186 containerd[1556]: time="2026-03-03T13:46:34.017072921Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:46:34.021629 containerd[1556]: time="2026-03-03T13:46:34.021442622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:46:34.023008 containerd[1556]: time="2026-03-03T13:46:34.022880378Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.590961741s"
Mar 3 13:46:34.023008 containerd[1556]: time="2026-03-03T13:46:34.022972520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Mar 3 13:46:34.025920 containerd[1556]: time="2026-03-03T13:46:34.025823686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Mar 3 13:46:34.032920 containerd[1556]: time="2026-03-03T13:46:34.032823866Z" level=info msg="CreateContainer within sandbox \"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Mar 3 13:46:34.051050 containerd[1556]: time="2026-03-03T13:46:34.050604108Z" level=info msg="Container a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:46:34.077514 containerd[1556]: time="2026-03-03T13:46:34.077453928Z" level=info msg="CreateContainer within sandbox \"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9\""
Mar 3 13:46:34.078970 containerd[1556]: time="2026-03-03T13:46:34.078507388Z" level=info msg="StartContainer for \"a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9\""
Mar 3 13:46:34.084781 containerd[1556]: time="2026-03-03T13:46:34.084719152Z" level=info msg="connecting to shim a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9" address="unix:///run/containerd/s/a5374646e96ce4a56bc82528df0f397c7deeaf90721cf7061bfc57e41a423f09" protocol=ttrpc version=3
Mar 3 13:46:34.137888 systemd[1]: Started cri-containerd-a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9.scope - libcontainer container a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9.
Mar 3 13:46:34.376137 containerd[1556]: time="2026-03-03T13:46:34.375770295Z" level=info msg="StartContainer for \"a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9\" returns successfully"
Mar 3 13:46:37.177885 systemd[1]: Started sshd@13-10.0.0.57:22-10.0.0.1:54212.service - OpenSSH per-connection server daemon (10.0.0.1:54212).
Mar 3 13:46:37.365796 sshd[5474]: Accepted publickey for core from 10.0.0.1 port 54212 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:46:37.370041 sshd-session[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:46:37.379192 systemd-logind[1547]: New session 14 of user core.
Mar 3 13:46:37.385763 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 3 13:46:37.848984 sshd[5478]: Connection closed by 10.0.0.1 port 54212
Mar 3 13:46:37.849714 sshd-session[5474]: pam_unix(sshd:session): session closed for user core
Mar 3 13:46:37.855149 systemd[1]: sshd@13-10.0.0.57:22-10.0.0.1:54212.service: Deactivated successfully.
Mar 3 13:46:37.858894 systemd[1]: session-14.scope: Deactivated successfully.
Mar 3 13:46:37.863506 systemd-logind[1547]: Session 14 logged out. Waiting for processes to exit.
Mar 3 13:46:37.867582 systemd-logind[1547]: Removed session 14.
Mar 3 13:46:38.297027 containerd[1556]: time="2026-03-03T13:46:38.296881549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:46:38.298676 containerd[1556]: time="2026-03-03T13:46:38.298591166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Mar 3 13:46:38.300934 containerd[1556]: time="2026-03-03T13:46:38.300872012Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:46:38.309685 containerd[1556]: time="2026-03-03T13:46:38.309494059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:46:38.311337 containerd[1556]: time="2026-03-03T13:46:38.311240567Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.285377238s"
Mar 3 13:46:38.311521 containerd[1556]: time="2026-03-03T13:46:38.311418649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Mar 3 13:46:38.313165 containerd[1556]: time="2026-03-03T13:46:38.313033829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Mar 3 13:46:38.353989 containerd[1556]: time="2026-03-03T13:46:38.353778505Z" level=info msg="CreateContainer within sandbox \"63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 3 13:46:38.371175 containerd[1556]: time="2026-03-03T13:46:38.370620536Z" level=info msg="Container 78c78cc13ca9056f2b51562afd5088dedb3aef636fcd4556f6198301cdc70ca1: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:46:38.406177 containerd[1556]: time="2026-03-03T13:46:38.405780606Z" level=info msg="CreateContainer within sandbox \"63f5bb245d65b202fcf337dea48c347511295359027e603327ef45ab15f811bf\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"78c78cc13ca9056f2b51562afd5088dedb3aef636fcd4556f6198301cdc70ca1\""
Mar 3 13:46:38.407666 containerd[1556]: time="2026-03-03T13:46:38.407108782Z" level=info msg="StartContainer for \"78c78cc13ca9056f2b51562afd5088dedb3aef636fcd4556f6198301cdc70ca1\""
Mar 3 13:46:38.409652 containerd[1556]: time="2026-03-03T13:46:38.409512324Z" level=info msg="connecting to shim 78c78cc13ca9056f2b51562afd5088dedb3aef636fcd4556f6198301cdc70ca1" address="unix:///run/containerd/s/d966073e5c9357c76997e9d5617ca77da6efdd7bedcd8879faf7f8a3314ad9e9" protocol=ttrpc version=3
Mar 3 13:46:38.482733 systemd[1]: Started cri-containerd-78c78cc13ca9056f2b51562afd5088dedb3aef636fcd4556f6198301cdc70ca1.scope - libcontainer container 78c78cc13ca9056f2b51562afd5088dedb3aef636fcd4556f6198301cdc70ca1.
Mar 3 13:46:38.595669 containerd[1556]: time="2026-03-03T13:46:38.594763065Z" level=info msg="StartContainer for \"78c78cc13ca9056f2b51562afd5088dedb3aef636fcd4556f6198301cdc70ca1\" returns successfully"
Mar 3 13:46:39.350995 kubelet[2840]: I0303 13:46:39.350468 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f5ffcc77c-mxflh" podStartSLOduration=127.236985437 podStartE2EDuration="2m27.326248063s" podCreationTimestamp="2026-03-03 13:44:12 +0000 UTC" firstStartedPulling="2026-03-03 13:46:18.223581718 +0000 UTC m=+201.155167740" lastFinishedPulling="2026-03-03 13:46:38.312844343 +0000 UTC m=+221.244430366" observedRunningTime="2026-03-03 13:46:39.185617586 +0000 UTC m=+222.117203719" watchObservedRunningTime="2026-03-03 13:46:39.326248063 +0000 UTC m=+222.257834086"
Mar 3 13:46:39.425823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2669065234.mount: Deactivated successfully.
Mar 3 13:46:39.501438 containerd[1556]: time="2026-03-03T13:46:39.501037344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:46:39.503735 containerd[1556]: time="2026-03-03T13:46:39.503121905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Mar 3 13:46:39.505154 containerd[1556]: time="2026-03-03T13:46:39.505072585Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:46:39.526434 containerd[1556]: time="2026-03-03T13:46:39.526224562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:46:39.528063 containerd[1556]: time="2026-03-03T13:46:39.527983296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.214923418s"
Mar 3 13:46:39.528128 containerd[1556]: time="2026-03-03T13:46:39.528064316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Mar 3 13:46:39.536243 containerd[1556]: time="2026-03-03T13:46:39.536167022Z" level=info msg="CreateContainer within sandbox \"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Mar 3 13:46:39.598375 containerd[1556]: time="2026-03-03T13:46:39.596850475Z" level=info msg="Container 824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:46:39.611388 containerd[1556]: time="2026-03-03T13:46:39.611059449Z" level=info msg="CreateContainer within sandbox \"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4\""
Mar 3 13:46:39.613195 containerd[1556]: time="2026-03-03T13:46:39.613168422Z" level=info msg="StartContainer for \"824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4\""
Mar 3 13:46:39.615449 containerd[1556]: time="2026-03-03T13:46:39.615178103Z" level=info msg="connecting to shim 824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4" address="unix:///run/containerd/s/a5374646e96ce4a56bc82528df0f397c7deeaf90721cf7061bfc57e41a423f09" protocol=ttrpc version=3
Mar 3 13:46:39.711525 systemd[1]: Started cri-containerd-824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4.scope - libcontainer container 824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4.
Mar 3 13:46:39.815370 containerd[1556]: time="2026-03-03T13:46:39.814904392Z" level=info msg="StartContainer for \"824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4\" returns successfully"
Mar 3 13:46:40.187436 kubelet[2840]: I0303 13:46:40.187213 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-79dbfb954-6l64x" podStartSLOduration=115.866893685 podStartE2EDuration="2m17.187195298s" podCreationTimestamp="2026-03-03 13:44:23 +0000 UTC" firstStartedPulling="2026-03-03 13:46:18.208953577 +0000 UTC m=+201.140539600" lastFinishedPulling="2026-03-03 13:46:39.52925519 +0000 UTC m=+222.460841213" observedRunningTime="2026-03-03 13:46:40.183937152 +0000 UTC m=+223.115523174" watchObservedRunningTime="2026-03-03 13:46:40.187195298 +0000 UTC m=+223.118781321"
Mar 3 13:46:40.237987 containerd[1556]: time="2026-03-03T13:46:40.237604395Z" level=info msg="StopContainer for \"824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4\" with timeout 30 (s)"
Mar 3 13:46:40.237987 containerd[1556]: time="2026-03-03T13:46:40.237780706Z" level=info msg="StopContainer for \"a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9\" with timeout 30 (s)"
Mar 3 13:46:40.254885 containerd[1556]: time="2026-03-03T13:46:40.254760594Z" level=info msg="Stop container \"a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9\" with signal terminated"
Mar 3 13:46:40.255748 containerd[1556]: time="2026-03-03T13:46:40.255657224Z" level=info msg="Stop container \"824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4\" with signal terminated"
Mar 3 13:46:40.279231 systemd[1]: cri-containerd-824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4.scope: Deactivated successfully.
Mar 3 13:46:40.291703 systemd[1]: cri-containerd-a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9.scope: Deactivated successfully.
Mar 3 13:46:40.292503 systemd[1]: cri-containerd-a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9.scope: Consumed 130ms CPU time, 6.5M memory peak, 1.4M read from disk, 12K written to disk.
Mar 3 13:46:40.322760 containerd[1556]: time="2026-03-03T13:46:40.322599628Z" level=info msg="received container exit event container_id:\"824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4\" id:\"824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4\" pid:5590 exit_status:2 exited_at:{seconds:1772545600 nanos:316208602}"
Mar 3 13:46:40.323909 containerd[1556]: time="2026-03-03T13:46:40.323806257Z" level=info msg="received container exit event container_id:\"a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9\" id:\"a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9\" pid:5404 exited_at:{seconds:1772545600 nanos:317861960}"
Mar 3 13:46:40.382208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4-rootfs.mount: Deactivated successfully.
Mar 3 13:46:40.382606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9-rootfs.mount: Deactivated successfully.
Mar 3 13:46:40.488730 containerd[1556]: time="2026-03-03T13:46:40.488625020Z" level=info msg="StopContainer for \"824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4\" returns successfully"
Mar 3 13:46:40.489792 containerd[1556]: time="2026-03-03T13:46:40.489746769Z" level=info msg="StopContainer for \"a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9\" returns successfully"
Mar 3 13:46:40.499734 containerd[1556]: time="2026-03-03T13:46:40.499505458Z" level=info msg="StopPodSandbox for \"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\""
Mar 3 13:46:40.522034 containerd[1556]: time="2026-03-03T13:46:40.521637835Z" level=info msg="Container to stop \"824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 13:46:40.522034 containerd[1556]: time="2026-03-03T13:46:40.521724345Z" level=info msg="Container to stop \"a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 13:46:40.550467 systemd[1]: cri-containerd-2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c.scope: Deactivated successfully.
Mar 3 13:46:40.586960 containerd[1556]: time="2026-03-03T13:46:40.586735965Z" level=info msg="received sandbox exit event container_id:\"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\" id:\"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\" exit_status:137 exited_at:{seconds:1772545600 nanos:575036589}" monitor_name=podsandbox
Mar 3 13:46:40.641071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c-rootfs.mount: Deactivated successfully.
Mar 3 13:46:40.648019 containerd[1556]: time="2026-03-03T13:46:40.647926166Z" level=info msg="shim disconnected" id=2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c namespace=k8s.io
Mar 3 13:46:40.648019 containerd[1556]: time="2026-03-03T13:46:40.648012857Z" level=warning msg="cleaning up after shim disconnected" id=2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c namespace=k8s.io
Mar 3 13:46:40.656882 containerd[1556]: time="2026-03-03T13:46:40.648023176Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 3 13:46:40.796433 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c-shm.mount: Deactivated successfully.
Mar 3 13:46:40.800767 containerd[1556]: time="2026-03-03T13:46:40.800683264Z" level=info msg="received sandbox container exit event sandbox_id:\"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\" exit_status:137 exited_at:{seconds:1772545600 nanos:575036589}" monitor_name=criService
Mar 3 13:46:41.042629 systemd-networkd[1442]: califc1eab0c719: Link DOWN
Mar 3 13:46:41.043442 systemd-networkd[1442]: califc1eab0c719: Lost carrier
Mar 3 13:46:41.183689 kubelet[2840]: I0303 13:46:41.183435 2840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c"
Mar 3 13:46:41.385676 containerd[1556]: 2026-03-03 13:46:41.031 [INFO][5699] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c"
Mar 3 13:46:41.385676 containerd[1556]: 2026-03-03 13:46:41.033 [INFO][5699] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" iface="eth0" netns="/var/run/netns/cni-208f9842-cb77-f3b2-6baa-6282230a2a7f"
Mar 3 13:46:41.385676 containerd[1556]: 2026-03-03 13:46:41.035 [INFO][5699] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" iface="eth0" netns="/var/run/netns/cni-208f9842-cb77-f3b2-6baa-6282230a2a7f"
Mar 3 13:46:41.385676 containerd[1556]: 2026-03-03 13:46:41.059 [INFO][5699] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" after=25.610775ms iface="eth0" netns="/var/run/netns/cni-208f9842-cb77-f3b2-6baa-6282230a2a7f"
Mar 3 13:46:41.385676 containerd[1556]: 2026-03-03 13:46:41.060 [INFO][5699] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c"
Mar 3 13:46:41.385676 containerd[1556]: 2026-03-03 13:46:41.060 [INFO][5699] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c"
Mar 3 13:46:41.385676 containerd[1556]: 2026-03-03 13:46:41.285 [INFO][5714] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" HandleID="k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Workload="localhost-k8s-whisker--79dbfb954--6l64x-eth0"
Mar 3 13:46:41.385676 containerd[1556]: 2026-03-03 13:46:41.289 [INFO][5714] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 3 13:46:41.385676 containerd[1556]: 2026-03-03 13:46:41.290 [INFO][5714] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 3 13:46:41.385676 containerd[1556]: 2026-03-03 13:46:41.374 [INFO][5714] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" HandleID="k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Workload="localhost-k8s-whisker--79dbfb954--6l64x-eth0"
Mar 3 13:46:41.385676 containerd[1556]: 2026-03-03 13:46:41.374 [INFO][5714] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" HandleID="k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Workload="localhost-k8s-whisker--79dbfb954--6l64x-eth0"
Mar 3 13:46:41.385676 containerd[1556]: 2026-03-03 13:46:41.377 [INFO][5714] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 3 13:46:41.385676 containerd[1556]: 2026-03-03 13:46:41.381 [INFO][5699] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c"
Mar 3 13:46:41.390018 systemd[1]: run-netns-cni\x2d208f9842\x2dcb77\x2df3b2\x2d6baa\x2d6282230a2a7f.mount: Deactivated successfully.
Mar 3 13:46:41.395160 containerd[1556]: time="2026-03-03T13:46:41.395080759Z" level=info msg="TearDown network for sandbox \"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\" successfully"
Mar 3 13:46:41.395228 containerd[1556]: time="2026-03-03T13:46:41.395171357Z" level=info msg="StopPodSandbox for \"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\" returns successfully"
Mar 3 13:46:41.528678 kubelet[2840]: I0303 13:46:41.528481 2840 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43" (UID: "7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 3 13:46:41.531262 kubelet[2840]: I0303 13:46:41.530916 2840 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43-whisker-ca-bundle\") pod \"7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43\" (UID: \"7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43\") "
Mar 3 13:46:41.531262 kubelet[2840]: I0303 13:46:41.531108 2840 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bmqn\" (UniqueName: \"kubernetes.io/projected/7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43-kube-api-access-7bmqn\") pod \"7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43\" (UID: \"7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43\") "
Mar 3 13:46:41.531262 kubelet[2840]: I0303 13:46:41.531161 2840 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43-whisker-backend-key-pair\") pod \"7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43\" (UID: \"7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43\") "
Mar 3 13:46:41.531262 kubelet[2840]: I0303 13:46:41.531202 2840 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43-nginx-config\") pod \"7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43\" (UID: \"7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43\") "
Mar 3 13:46:41.531689 kubelet[2840]: I0303 13:46:41.531420 2840 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Mar 3 13:46:41.531949 kubelet[2840]: I0303 13:46:41.531895 2840 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43" (UID: "7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 3 13:46:41.551875 kubelet[2840]: I0303 13:46:41.551780 2840 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43-kube-api-access-7bmqn" (OuterVolumeSpecName: "kube-api-access-7bmqn") pod "7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43" (UID: "7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43"). InnerVolumeSpecName "kube-api-access-7bmqn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 3 13:46:41.553057 kubelet[2840]: I0303 13:46:41.552970 2840 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43" (UID: "7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 3 13:46:41.555722 systemd[1]: var-lib-kubelet-pods-7a7b4e4f\x2dcfb0\x2d4dd7\x2d8b2f\x2d1f646235fb43-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7bmqn.mount: Deactivated successfully.
Mar 3 13:46:41.556030 systemd[1]: var-lib-kubelet-pods-7a7b4e4f\x2dcfb0\x2d4dd7\x2d8b2f\x2d1f646235fb43-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Mar 3 13:46:41.632773 kubelet[2840]: I0303 13:46:41.632633 2840 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Mar 3 13:46:41.632773 kubelet[2840]: I0303 13:46:41.632735 2840 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43-nginx-config\") on node \"localhost\" DevicePath \"\""
Mar 3 13:46:41.632773 kubelet[2840]: I0303 13:46:41.632753 2840 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7bmqn\" (UniqueName: \"kubernetes.io/projected/7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43-kube-api-access-7bmqn\") on node \"localhost\" DevicePath \"\""
Mar 3 13:46:42.174458 systemd[1]: Removed slice kubepods-besteffort-pod7a7b4e4f_cfb0_4dd7_8b2f_1f646235fb43.slice - libcontainer container kubepods-besteffort-pod7a7b4e4f_cfb0_4dd7_8b2f_1f646235fb43.slice.
Mar 3 13:46:42.174628 systemd[1]: kubepods-besteffort-pod7a7b4e4f_cfb0_4dd7_8b2f_1f646235fb43.slice: Consumed 304ms CPU time, 12.6M memory peak, 1.4M read from disk, 12K written to disk.
Mar 3 13:46:42.410018 systemd[1]: Created slice kubepods-besteffort-podef6ca6d8_1530_4beb_becd_15d3d3d1c88e.slice - libcontainer container kubepods-besteffort-podef6ca6d8_1530_4beb_becd_15d3d3d1c88e.slice.
Mar 3 13:46:42.444726 kubelet[2840]: I0303 13:46:42.444522 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef6ca6d8-1530-4beb-becd-15d3d3d1c88e-whisker-ca-bundle\") pod \"whisker-6458cbc899-zq987\" (UID: \"ef6ca6d8-1530-4beb-becd-15d3d3d1c88e\") " pod="calico-system/whisker-6458cbc899-zq987" Mar 3 13:46:42.445245 kubelet[2840]: I0303 13:46:42.444745 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ef6ca6d8-1530-4beb-becd-15d3d3d1c88e-whisker-backend-key-pair\") pod \"whisker-6458cbc899-zq987\" (UID: \"ef6ca6d8-1530-4beb-becd-15d3d3d1c88e\") " pod="calico-system/whisker-6458cbc899-zq987" Mar 3 13:46:42.445245 kubelet[2840]: I0303 13:46:42.444768 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t9mb\" (UniqueName: \"kubernetes.io/projected/ef6ca6d8-1530-4beb-becd-15d3d3d1c88e-kube-api-access-6t9mb\") pod \"whisker-6458cbc899-zq987\" (UID: \"ef6ca6d8-1530-4beb-becd-15d3d3d1c88e\") " pod="calico-system/whisker-6458cbc899-zq987" Mar 3 13:46:42.445245 kubelet[2840]: I0303 13:46:42.444792 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ef6ca6d8-1530-4beb-becd-15d3d3d1c88e-nginx-config\") pod \"whisker-6458cbc899-zq987\" (UID: \"ef6ca6d8-1530-4beb-becd-15d3d3d1c88e\") " pod="calico-system/whisker-6458cbc899-zq987" Mar 3 13:46:42.735956 containerd[1556]: time="2026-03-03T13:46:42.734943871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6458cbc899-zq987,Uid:ef6ca6d8-1530-4beb-becd-15d3d3d1c88e,Namespace:calico-system,Attempt:0,}" Mar 3 13:46:42.881060 systemd[1]: Started sshd@14-10.0.0.57:22-10.0.0.1:33852.service - OpenSSH per-connection server daemon 
(10.0.0.1:33852). Mar 3 13:46:43.243887 sshd[5753]: Accepted publickey for core from 10.0.0.1 port 33852 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI Mar 3 13:46:43.248077 sshd-session[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:46:43.284994 systemd-logind[1547]: New session 15 of user core. Mar 3 13:46:43.315011 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 3 13:46:43.590127 systemd-networkd[1442]: calib83f9552ca7: Link UP Mar 3 13:46:43.598005 systemd-networkd[1442]: calib83f9552ca7: Gained carrier Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.074 [INFO][5741] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6458cbc899--zq987-eth0 whisker-6458cbc899- calico-system ef6ca6d8-1530-4beb-becd-15d3d3d1c88e 1531 0 2026-03-03 13:46:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6458cbc899 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6458cbc899-zq987 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib83f9552ca7 [] [] }} ContainerID="e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" Namespace="calico-system" Pod="whisker-6458cbc899-zq987" WorkloadEndpoint="localhost-k8s-whisker--6458cbc899--zq987-" Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.077 [INFO][5741] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" Namespace="calico-system" Pod="whisker-6458cbc899-zq987" WorkloadEndpoint="localhost-k8s-whisker--6458cbc899--zq987-eth0" Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.309 [INFO][5760] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" HandleID="k8s-pod-network.e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" Workload="localhost-k8s-whisker--6458cbc899--zq987-eth0" Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.344 [INFO][5760] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" HandleID="k8s-pod-network.e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" Workload="localhost-k8s-whisker--6458cbc899--zq987-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000399640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6458cbc899-zq987", "timestamp":"2026-03-03 13:46:43.309063062 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000550000)} Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.345 [INFO][5760] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.345 [INFO][5760] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.346 [INFO][5760] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.371 [INFO][5760] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" host="localhost" Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.403 [INFO][5760] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.454 [INFO][5760] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.465 [INFO][5760] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.478 [INFO][5760] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.479 [INFO][5760] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" host="localhost" Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.485 [INFO][5760] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608 Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.507 [INFO][5760] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" host="localhost" Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.541 [INFO][5760] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" host="localhost" Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.545 [INFO][5760] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" host="localhost" Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.546 [INFO][5760] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:46:43.682515 containerd[1556]: 2026-03-03 13:46:43.546 [INFO][5760] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" HandleID="k8s-pod-network.e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" Workload="localhost-k8s-whisker--6458cbc899--zq987-eth0" Mar 3 13:46:43.684478 containerd[1556]: 2026-03-03 13:46:43.564 [INFO][5741] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" Namespace="calico-system" Pod="whisker-6458cbc899-zq987" WorkloadEndpoint="localhost-k8s-whisker--6458cbc899--zq987-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6458cbc899--zq987-eth0", GenerateName:"whisker-6458cbc899-", Namespace:"calico-system", SelfLink:"", UID:"ef6ca6d8-1530-4beb-becd-15d3d3d1c88e", ResourceVersion:"1531", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 46, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6458cbc899", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6458cbc899-zq987", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib83f9552ca7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:46:43.684478 containerd[1556]: 2026-03-03 13:46:43.565 [INFO][5741] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" Namespace="calico-system" Pod="whisker-6458cbc899-zq987" WorkloadEndpoint="localhost-k8s-whisker--6458cbc899--zq987-eth0" Mar 3 13:46:43.684478 containerd[1556]: 2026-03-03 13:46:43.565 [INFO][5741] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib83f9552ca7 ContainerID="e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" Namespace="calico-system" Pod="whisker-6458cbc899-zq987" WorkloadEndpoint="localhost-k8s-whisker--6458cbc899--zq987-eth0" Mar 3 13:46:43.684478 containerd[1556]: 2026-03-03 13:46:43.606 [INFO][5741] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" Namespace="calico-system" Pod="whisker-6458cbc899-zq987" WorkloadEndpoint="localhost-k8s-whisker--6458cbc899--zq987-eth0" Mar 3 13:46:43.684478 containerd[1556]: 2026-03-03 13:46:43.612 [INFO][5741] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" Namespace="calico-system" Pod="whisker-6458cbc899-zq987" 
WorkloadEndpoint="localhost-k8s-whisker--6458cbc899--zq987-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6458cbc899--zq987-eth0", GenerateName:"whisker-6458cbc899-", Namespace:"calico-system", SelfLink:"", UID:"ef6ca6d8-1530-4beb-becd-15d3d3d1c88e", ResourceVersion:"1531", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 46, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6458cbc899", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608", Pod:"whisker-6458cbc899-zq987", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib83f9552ca7", MAC:"fe:ad:98:cc:32:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:46:43.684478 containerd[1556]: 2026-03-03 13:46:43.669 [INFO][5741] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" Namespace="calico-system" Pod="whisker-6458cbc899-zq987" WorkloadEndpoint="localhost-k8s-whisker--6458cbc899--zq987-eth0" Mar 3 13:46:43.905985 containerd[1556]: time="2026-03-03T13:46:43.899884328Z" level=info msg="connecting to shim 
e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608" address="unix:///run/containerd/s/1debb1dc34cb417fc9d4f561a8cce95c132b6fb046aa577e49f026dd5e4d646c" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:46:44.060012 systemd[1]: Started cri-containerd-e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608.scope - libcontainer container e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608. Mar 3 13:46:44.149085 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 13:46:44.155209 kubelet[2840]: I0303 13:46:44.155124 2840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43" path="/var/lib/kubelet/pods/7a7b4e4f-cfb0-4dd7-8b2f-1f646235fb43/volumes" Mar 3 13:46:44.213434 sshd[5768]: Connection closed by 10.0.0.1 port 33852 Mar 3 13:46:44.214825 sshd-session[5753]: pam_unix(sshd:session): session closed for user core Mar 3 13:46:44.226921 systemd[1]: sshd@14-10.0.0.57:22-10.0.0.1:33852.service: Deactivated successfully. Mar 3 13:46:44.231885 systemd[1]: session-15.scope: Deactivated successfully. Mar 3 13:46:44.235876 systemd-logind[1547]: Session 15 logged out. Waiting for processes to exit. Mar 3 13:46:44.240942 systemd-logind[1547]: Removed session 15. 
Mar 3 13:46:44.316945 containerd[1556]: time="2026-03-03T13:46:44.315735915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6458cbc899-zq987,Uid:ef6ca6d8-1530-4beb-becd-15d3d3d1c88e,Namespace:calico-system,Attempt:0,} returns sandbox id \"e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608\"" Mar 3 13:46:44.334333 containerd[1556]: time="2026-03-03T13:46:44.334112862Z" level=info msg="CreateContainer within sandbox \"e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 3 13:46:44.371138 containerd[1556]: time="2026-03-03T13:46:44.370033056Z" level=info msg="Container cab8d92b853da5f9bab51cd1ccded20f3c2451d16390f74ed5f5a9d1b93f2029: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:46:44.402766 containerd[1556]: time="2026-03-03T13:46:44.400521730Z" level=info msg="CreateContainer within sandbox \"e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"cab8d92b853da5f9bab51cd1ccded20f3c2451d16390f74ed5f5a9d1b93f2029\"" Mar 3 13:46:44.404986 containerd[1556]: time="2026-03-03T13:46:44.404865658Z" level=info msg="StartContainer for \"cab8d92b853da5f9bab51cd1ccded20f3c2451d16390f74ed5f5a9d1b93f2029\"" Mar 3 13:46:44.408635 containerd[1556]: time="2026-03-03T13:46:44.408453910Z" level=info msg="connecting to shim cab8d92b853da5f9bab51cd1ccded20f3c2451d16390f74ed5f5a9d1b93f2029" address="unix:///run/containerd/s/1debb1dc34cb417fc9d4f561a8cce95c132b6fb046aa577e49f026dd5e4d646c" protocol=ttrpc version=3 Mar 3 13:46:44.490959 systemd[1]: Started cri-containerd-cab8d92b853da5f9bab51cd1ccded20f3c2451d16390f74ed5f5a9d1b93f2029.scope - libcontainer container cab8d92b853da5f9bab51cd1ccded20f3c2451d16390f74ed5f5a9d1b93f2029. 
Mar 3 13:46:44.757927 containerd[1556]: time="2026-03-03T13:46:44.757255768Z" level=info msg="StartContainer for \"cab8d92b853da5f9bab51cd1ccded20f3c2451d16390f74ed5f5a9d1b93f2029\" returns successfully" Mar 3 13:46:44.780739 containerd[1556]: time="2026-03-03T13:46:44.780115269Z" level=info msg="CreateContainer within sandbox \"e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 3 13:46:44.833663 containerd[1556]: time="2026-03-03T13:46:44.829914249Z" level=info msg="Container 02138886fb11c5799a193cc5a4426827647d6bb3b4a297ab11d50682ae82f99c: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:46:44.887059 containerd[1556]: time="2026-03-03T13:46:44.886928819Z" level=info msg="CreateContainer within sandbox \"e5ec39c45ca74886e0e4d974e0ca263803183de425919bf49fed7dbd0ae9d608\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"02138886fb11c5799a193cc5a4426827647d6bb3b4a297ab11d50682ae82f99c\"" Mar 3 13:46:44.895246 containerd[1556]: time="2026-03-03T13:46:44.894868002Z" level=info msg="StartContainer for \"02138886fb11c5799a193cc5a4426827647d6bb3b4a297ab11d50682ae82f99c\"" Mar 3 13:46:44.897067 containerd[1556]: time="2026-03-03T13:46:44.897013045Z" level=info msg="connecting to shim 02138886fb11c5799a193cc5a4426827647d6bb3b4a297ab11d50682ae82f99c" address="unix:///run/containerd/s/1debb1dc34cb417fc9d4f561a8cce95c132b6fb046aa577e49f026dd5e4d646c" protocol=ttrpc version=3 Mar 3 13:46:44.971094 systemd[1]: Started cri-containerd-02138886fb11c5799a193cc5a4426827647d6bb3b4a297ab11d50682ae82f99c.scope - libcontainer container 02138886fb11c5799a193cc5a4426827647d6bb3b4a297ab11d50682ae82f99c. 
Mar 3 13:46:45.180897 containerd[1556]: time="2026-03-03T13:46:45.178645353Z" level=info msg="StartContainer for \"02138886fb11c5799a193cc5a4426827647d6bb3b4a297ab11d50682ae82f99c\" returns successfully" Mar 3 13:46:45.184910 systemd-networkd[1442]: calib83f9552ca7: Gained IPv6LL Mar 3 13:46:49.245030 systemd[1]: Started sshd@15-10.0.0.57:22-10.0.0.1:41208.service - OpenSSH per-connection server daemon (10.0.0.1:41208). Mar 3 13:46:49.357474 sshd[5945]: Accepted publickey for core from 10.0.0.1 port 41208 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI Mar 3 13:46:49.361042 sshd-session[5945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:46:49.374701 systemd-logind[1547]: New session 16 of user core. Mar 3 13:46:49.392859 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 3 13:46:49.642467 sshd[5948]: Connection closed by 10.0.0.1 port 41208 Mar 3 13:46:49.641754 sshd-session[5945]: pam_unix(sshd:session): session closed for user core Mar 3 13:46:49.649473 systemd[1]: sshd@15-10.0.0.57:22-10.0.0.1:41208.service: Deactivated successfully. Mar 3 13:46:49.653662 systemd[1]: session-16.scope: Deactivated successfully. Mar 3 13:46:49.678714 systemd-logind[1547]: Session 16 logged out. Waiting for processes to exit. Mar 3 13:46:49.683469 systemd-logind[1547]: Removed session 16. Mar 3 13:46:54.662679 systemd[1]: Started sshd@16-10.0.0.57:22-10.0.0.1:41212.service - OpenSSH per-connection server daemon (10.0.0.1:41212). Mar 3 13:46:54.742783 sshd[5969]: Accepted publickey for core from 10.0.0.1 port 41212 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI Mar 3 13:46:54.745187 sshd-session[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:46:54.756702 systemd-logind[1547]: New session 17 of user core. Mar 3 13:46:54.765899 systemd[1]: Started session-17.scope - Session 17 of User core. 
Mar 3 13:46:54.977737 sshd[5972]: Connection closed by 10.0.0.1 port 41212 Mar 3 13:46:54.978856 sshd-session[5969]: pam_unix(sshd:session): session closed for user core Mar 3 13:46:54.985951 systemd[1]: sshd@16-10.0.0.57:22-10.0.0.1:41212.service: Deactivated successfully. Mar 3 13:46:54.989084 systemd[1]: session-17.scope: Deactivated successfully. Mar 3 13:46:54.992089 systemd-logind[1547]: Session 17 logged out. Waiting for processes to exit. Mar 3 13:46:54.996420 systemd-logind[1547]: Removed session 17. Mar 3 13:46:57.995663 kubelet[2840]: I0303 13:46:57.995189 2840 scope.go:117] "RemoveContainer" containerID="824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4" Mar 3 13:46:57.999779 containerd[1556]: time="2026-03-03T13:46:57.999233015Z" level=info msg="RemoveContainer for \"824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4\"" Mar 3 13:46:58.030830 containerd[1556]: time="2026-03-03T13:46:58.030488929Z" level=info msg="RemoveContainer for \"824e7a0204eaaebc6d2219a81cbe0b2464900d0c12fa72d5ec75c003b24febc4\" returns successfully" Mar 3 13:46:58.033040 kubelet[2840]: I0303 13:46:58.032965 2840 scope.go:117] "RemoveContainer" containerID="a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9" Mar 3 13:46:58.046695 containerd[1556]: time="2026-03-03T13:46:58.046485427Z" level=info msg="RemoveContainer for \"a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9\"" Mar 3 13:46:58.059454 containerd[1556]: time="2026-03-03T13:46:58.059238119Z" level=info msg="RemoveContainer for \"a2badef6e30c1c13c7befa3e8b8b854c04f127ee2d44ed8c0e6b39ee28fee3c9\" returns successfully" Mar 3 13:46:58.069896 containerd[1556]: time="2026-03-03T13:46:58.069163885Z" level=info msg="StopPodSandbox for \"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\"" Mar 3 13:46:58.346030 containerd[1556]: 2026-03-03 13:46:58.233 [WARNING][6022] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with 
the clean up ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" WorkloadEndpoint="localhost-k8s-whisker--79dbfb954--6l64x-eth0" Mar 3 13:46:58.346030 containerd[1556]: 2026-03-03 13:46:58.234 [INFO][6022] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Mar 3 13:46:58.346030 containerd[1556]: 2026-03-03 13:46:58.234 [INFO][6022] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" iface="eth0" netns="" Mar 3 13:46:58.346030 containerd[1556]: 2026-03-03 13:46:58.234 [INFO][6022] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Mar 3 13:46:58.346030 containerd[1556]: 2026-03-03 13:46:58.234 [INFO][6022] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Mar 3 13:46:58.346030 containerd[1556]: 2026-03-03 13:46:58.319 [INFO][6032] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" HandleID="k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Workload="localhost-k8s-whisker--79dbfb954--6l64x-eth0" Mar 3 13:46:58.346030 containerd[1556]: 2026-03-03 13:46:58.320 [INFO][6032] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:46:58.346030 containerd[1556]: 2026-03-03 13:46:58.321 [INFO][6032] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 13:46:58.346030 containerd[1556]: 2026-03-03 13:46:58.332 [WARNING][6032] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" HandleID="k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Workload="localhost-k8s-whisker--79dbfb954--6l64x-eth0" Mar 3 13:46:58.346030 containerd[1556]: 2026-03-03 13:46:58.332 [INFO][6032] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" HandleID="k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Workload="localhost-k8s-whisker--79dbfb954--6l64x-eth0" Mar 3 13:46:58.346030 containerd[1556]: 2026-03-03 13:46:58.336 [INFO][6032] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:46:58.346030 containerd[1556]: 2026-03-03 13:46:58.341 [INFO][6022] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Mar 3 13:46:58.346030 containerd[1556]: time="2026-03-03T13:46:58.346003291Z" level=info msg="TearDown network for sandbox \"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\" successfully" Mar 3 13:46:58.347140 containerd[1556]: time="2026-03-03T13:46:58.346036433Z" level=info msg="StopPodSandbox for \"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\" returns successfully" Mar 3 13:46:58.370219 containerd[1556]: time="2026-03-03T13:46:58.369964512Z" level=info msg="RemovePodSandbox for \"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\"" Mar 3 13:46:58.370219 containerd[1556]: time="2026-03-03T13:46:58.370039252Z" level=info msg="Forcibly stopping sandbox \"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\"" Mar 3 13:46:58.590032 containerd[1556]: 2026-03-03 13:46:58.481 [WARNING][6052] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" 
WorkloadEndpoint="localhost-k8s-whisker--79dbfb954--6l64x-eth0" Mar 3 13:46:58.590032 containerd[1556]: 2026-03-03 13:46:58.481 [INFO][6052] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Mar 3 13:46:58.590032 containerd[1556]: 2026-03-03 13:46:58.481 [INFO][6052] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" iface="eth0" netns="" Mar 3 13:46:58.590032 containerd[1556]: 2026-03-03 13:46:58.481 [INFO][6052] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Mar 3 13:46:58.590032 containerd[1556]: 2026-03-03 13:46:58.481 [INFO][6052] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Mar 3 13:46:58.590032 containerd[1556]: 2026-03-03 13:46:58.558 [INFO][6060] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" HandleID="k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Workload="localhost-k8s-whisker--79dbfb954--6l64x-eth0" Mar 3 13:46:58.590032 containerd[1556]: 2026-03-03 13:46:58.559 [INFO][6060] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:46:58.590032 containerd[1556]: 2026-03-03 13:46:58.559 [INFO][6060] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 13:46:58.590032 containerd[1556]: 2026-03-03 13:46:58.576 [WARNING][6060] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" HandleID="k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Workload="localhost-k8s-whisker--79dbfb954--6l64x-eth0" Mar 3 13:46:58.590032 containerd[1556]: 2026-03-03 13:46:58.576 [INFO][6060] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" HandleID="k8s-pod-network.2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Workload="localhost-k8s-whisker--79dbfb954--6l64x-eth0" Mar 3 13:46:58.590032 containerd[1556]: 2026-03-03 13:46:58.579 [INFO][6060] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:46:58.590032 containerd[1556]: 2026-03-03 13:46:58.585 [INFO][6052] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c" Mar 3 13:46:58.591982 containerd[1556]: time="2026-03-03T13:46:58.590079623Z" level=info msg="TearDown network for sandbox \"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\" successfully" Mar 3 13:46:58.596468 containerd[1556]: time="2026-03-03T13:46:58.596249344Z" level=info msg="Ensure that sandbox 2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c in task-service has been cleanup successfully" Mar 3 13:46:58.615463 containerd[1556]: time="2026-03-03T13:46:58.615135150Z" level=info msg="RemovePodSandbox \"2b2a0c4995d6afd36ddbbdb0ff057f656536c3d9e0667470d591825431c6104c\" returns successfully" Mar 3 13:47:00.002822 systemd[1]: Started sshd@17-10.0.0.57:22-10.0.0.1:52264.service - OpenSSH per-connection server daemon (10.0.0.1:52264). 
Mar 3 13:47:00.140851 sshd[6069]: Accepted publickey for core from 10.0.0.1 port 52264 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI Mar 3 13:47:00.144581 sshd-session[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:47:00.158578 systemd-logind[1547]: New session 18 of user core. Mar 3 13:47:00.169200 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 3 13:47:00.451248 sshd[6072]: Connection closed by 10.0.0.1 port 52264 Mar 3 13:47:00.451819 sshd-session[6069]: pam_unix(sshd:session): session closed for user core Mar 3 13:47:00.458430 systemd[1]: sshd@17-10.0.0.57:22-10.0.0.1:52264.service: Deactivated successfully. Mar 3 13:47:00.462031 systemd[1]: session-18.scope: Deactivated successfully. Mar 3 13:47:00.464471 systemd-logind[1547]: Session 18 logged out. Waiting for processes to exit. Mar 3 13:47:00.467529 systemd-logind[1547]: Removed session 18. Mar 3 13:47:05.137446 kubelet[2840]: E0303 13:47:05.137249 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:47:05.384507 kubelet[2840]: I0303 13:47:05.384200 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6458cbc899-zq987" podStartSLOduration=23.384186132 podStartE2EDuration="23.384186132s" podCreationTimestamp="2026-03-03 13:46:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:46:45.389031921 +0000 UTC m=+228.320617974" watchObservedRunningTime="2026-03-03 13:47:05.384186132 +0000 UTC m=+248.315772154" Mar 3 13:47:05.472858 systemd[1]: Started sshd@18-10.0.0.57:22-10.0.0.1:52272.service - OpenSSH per-connection server daemon (10.0.0.1:52272). 
Mar 3 13:47:05.548903 sshd[6119]: Accepted publickey for core from 10.0.0.1 port 52272 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI Mar 3 13:47:05.551148 sshd-session[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:47:05.561731 systemd-logind[1547]: New session 19 of user core. Mar 3 13:47:05.573732 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 3 13:47:05.759957 sshd[6122]: Connection closed by 10.0.0.1 port 52272 Mar 3 13:47:05.761047 sshd-session[6119]: pam_unix(sshd:session): session closed for user core Mar 3 13:47:05.770507 systemd[1]: sshd@18-10.0.0.57:22-10.0.0.1:52272.service: Deactivated successfully. Mar 3 13:47:05.773943 systemd[1]: session-19.scope: Deactivated successfully. Mar 3 13:47:05.777012 systemd-logind[1547]: Session 19 logged out. Waiting for processes to exit. Mar 3 13:47:05.780450 systemd-logind[1547]: Removed session 19. Mar 3 13:47:09.134247 kubelet[2840]: E0303 13:47:09.134101 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:47:10.781756 systemd[1]: Started sshd@19-10.0.0.57:22-10.0.0.1:53274.service - OpenSSH per-connection server daemon (10.0.0.1:53274). Mar 3 13:47:10.876800 sshd[6160]: Accepted publickey for core from 10.0.0.1 port 53274 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI Mar 3 13:47:10.879489 sshd-session[6160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:47:10.891862 systemd-logind[1547]: New session 20 of user core. Mar 3 13:47:10.898822 systemd[1]: Started session-20.scope - Session 20 of User core. 
Mar 3 13:47:11.122047 sshd[6163]: Connection closed by 10.0.0.1 port 53274
Mar 3 13:47:11.122248 sshd-session[6160]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:11.131120 systemd[1]: sshd@19-10.0.0.57:22-10.0.0.1:53274.service: Deactivated successfully.
Mar 3 13:47:11.135903 systemd[1]: session-20.scope: Deactivated successfully.
Mar 3 13:47:11.139116 systemd-logind[1547]: Session 20 logged out. Waiting for processes to exit.
Mar 3 13:47:11.144572 systemd-logind[1547]: Removed session 20.
Mar 3 13:47:13.133219 kubelet[2840]: E0303 13:47:13.132930 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:16.146775 systemd[1]: Started sshd@20-10.0.0.57:22-10.0.0.1:53278.service - OpenSSH per-connection server daemon (10.0.0.1:53278).
Mar 3 13:47:16.262473 sshd[6225]: Accepted publickey for core from 10.0.0.1 port 53278 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:47:16.266488 sshd-session[6225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:16.294432 systemd-logind[1547]: New session 21 of user core.
Mar 3 13:47:16.302493 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 3 13:47:16.669242 sshd[6228]: Connection closed by 10.0.0.1 port 53278
Mar 3 13:47:16.673934 sshd-session[6225]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:16.683583 systemd[1]: sshd@20-10.0.0.57:22-10.0.0.1:53278.service: Deactivated successfully.
Mar 3 13:47:16.689077 systemd[1]: session-21.scope: Deactivated successfully.
Mar 3 13:47:16.691748 systemd-logind[1547]: Session 21 logged out. Waiting for processes to exit.
Mar 3 13:47:16.696623 systemd-logind[1547]: Removed session 21.
Mar 3 13:47:21.695939 systemd[1]: Started sshd@21-10.0.0.57:22-10.0.0.1:37990.service - OpenSSH per-connection server daemon (10.0.0.1:37990).
Mar 3 13:47:21.890142 sshd[6247]: Accepted publickey for core from 10.0.0.1 port 37990 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:47:21.892259 sshd-session[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:21.905809 systemd-logind[1547]: New session 22 of user core.
Mar 3 13:47:21.918068 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 3 13:47:22.198578 sshd[6250]: Connection closed by 10.0.0.1 port 37990
Mar 3 13:47:22.199575 sshd-session[6247]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:22.209555 systemd[1]: sshd@21-10.0.0.57:22-10.0.0.1:37990.service: Deactivated successfully.
Mar 3 13:47:22.214156 systemd[1]: session-22.scope: Deactivated successfully.
Mar 3 13:47:22.218119 systemd-logind[1547]: Session 22 logged out. Waiting for processes to exit.
Mar 3 13:47:22.222175 systemd-logind[1547]: Removed session 22.
Mar 3 13:47:27.134454 kubelet[2840]: E0303 13:47:27.134090 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:27.221078 systemd[1]: Started sshd@22-10.0.0.57:22-10.0.0.1:37994.service - OpenSSH per-connection server daemon (10.0.0.1:37994).
Mar 3 13:47:27.339161 sshd[6284]: Accepted publickey for core from 10.0.0.1 port 37994 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:47:27.343459 sshd-session[6284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:27.356445 systemd-logind[1547]: New session 23 of user core.
Mar 3 13:47:27.371849 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 3 13:47:27.583633 sshd[6287]: Connection closed by 10.0.0.1 port 37994
Mar 3 13:47:27.584956 sshd-session[6284]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:27.602558 systemd[1]: sshd@22-10.0.0.57:22-10.0.0.1:37994.service: Deactivated successfully.
Mar 3 13:47:27.605949 systemd[1]: session-23.scope: Deactivated successfully.
Mar 3 13:47:27.609017 systemd-logind[1547]: Session 23 logged out. Waiting for processes to exit.
Mar 3 13:47:27.613081 systemd-logind[1547]: Removed session 23.
Mar 3 13:47:32.603619 systemd[1]: Started sshd@23-10.0.0.57:22-10.0.0.1:36110.service - OpenSSH per-connection server daemon (10.0.0.1:36110).
Mar 3 13:47:32.709099 sshd[6329]: Accepted publickey for core from 10.0.0.1 port 36110 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:47:32.710875 sshd-session[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:32.723117 systemd-logind[1547]: New session 24 of user core.
Mar 3 13:47:32.734117 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 3 13:47:33.108628 sshd[6332]: Connection closed by 10.0.0.1 port 36110
Mar 3 13:47:33.109495 sshd-session[6329]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:33.129689 systemd[1]: sshd@23-10.0.0.57:22-10.0.0.1:36110.service: Deactivated successfully.
Mar 3 13:47:33.134977 systemd[1]: session-24.scope: Deactivated successfully.
Mar 3 13:47:33.146105 systemd-logind[1547]: Session 24 logged out. Waiting for processes to exit.
Mar 3 13:47:33.172227 systemd-logind[1547]: Removed session 24.
Mar 3 13:47:34.210821 containerd[1556]: time="2026-03-03T13:47:34.204550510Z" level=warning msg="container event discarded" container=215d020b35ad27683d504e414cce471227202edb73b42f1b2e0dfdc9776b6664 type=CONTAINER_CREATED_EVENT
Mar 3 13:47:34.210821 containerd[1556]: time="2026-03-03T13:47:34.210579178Z" level=warning msg="container event discarded" container=215d020b35ad27683d504e414cce471227202edb73b42f1b2e0dfdc9776b6664 type=CONTAINER_STARTED_EVENT
Mar 3 13:47:34.222520 containerd[1556]: time="2026-03-03T13:47:34.222105157Z" level=warning msg="container event discarded" container=7cba8fec01987baa9de9904b9baa73d738fcb507f344d41d78dc67d6d85d6c9e type=CONTAINER_CREATED_EVENT
Mar 3 13:47:34.222520 containerd[1556]: time="2026-03-03T13:47:34.222154529Z" level=warning msg="container event discarded" container=7cba8fec01987baa9de9904b9baa73d738fcb507f344d41d78dc67d6d85d6c9e type=CONTAINER_STARTED_EVENT
Mar 3 13:47:34.259222 containerd[1556]: time="2026-03-03T13:47:34.258894908Z" level=warning msg="container event discarded" container=8f0c8dded7e168349d163645487241449028ae51483a42e465a184d4eb86c3ec type=CONTAINER_CREATED_EVENT
Mar 3 13:47:34.259222 containerd[1556]: time="2026-03-03T13:47:34.259211067Z" level=warning msg="container event discarded" container=8f0c8dded7e168349d163645487241449028ae51483a42e465a184d4eb86c3ec type=CONTAINER_STARTED_EVENT
Mar 3 13:47:34.346195 containerd[1556]: time="2026-03-03T13:47:34.345976893Z" level=warning msg="container event discarded" container=615bf8e4847f6ad1ab8906adb6bf0b10046534849d1df941acc08c8b66becf14 type=CONTAINER_CREATED_EVENT
Mar 3 13:47:34.346195 containerd[1556]: time="2026-03-03T13:47:34.346146529Z" level=warning msg="container event discarded" container=5918187958a91ca423d38b4f7ebe5c6f343142100aff71d450177f6f489f8814 type=CONTAINER_CREATED_EVENT
Mar 3 13:47:34.363963 containerd[1556]: time="2026-03-03T13:47:34.363827791Z" level=warning msg="container event discarded" container=d325a1ca2bdccd655390caa116252ba2fc584c85b4281447b5615e395d24eb99 type=CONTAINER_CREATED_EVENT
Mar 3 13:47:34.662129 containerd[1556]: time="2026-03-03T13:47:34.661472195Z" level=warning msg="container event discarded" container=615bf8e4847f6ad1ab8906adb6bf0b10046534849d1df941acc08c8b66becf14 type=CONTAINER_STARTED_EVENT
Mar 3 13:47:34.693911 containerd[1556]: time="2026-03-03T13:47:34.693578017Z" level=warning msg="container event discarded" container=5918187958a91ca423d38b4f7ebe5c6f343142100aff71d450177f6f489f8814 type=CONTAINER_STARTED_EVENT
Mar 3 13:47:34.720485 containerd[1556]: time="2026-03-03T13:47:34.720172568Z" level=warning msg="container event discarded" container=d325a1ca2bdccd655390caa116252ba2fc584c85b4281447b5615e395d24eb99 type=CONTAINER_STARTED_EVENT
Mar 3 13:47:35.136724 kubelet[2840]: E0303 13:47:35.134128 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:38.150721 systemd[1]: Started sshd@24-10.0.0.57:22-10.0.0.1:36120.service - OpenSSH per-connection server daemon (10.0.0.1:36120).
Mar 3 13:47:38.370188 sshd[6392]: Accepted publickey for core from 10.0.0.1 port 36120 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:47:38.384151 sshd-session[6392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:38.414602 systemd-logind[1547]: New session 25 of user core.
Mar 3 13:47:38.426579 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 3 13:47:38.782380 sshd[6395]: Connection closed by 10.0.0.1 port 36120
Mar 3 13:47:38.782619 sshd-session[6392]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:38.797962 systemd[1]: sshd@24-10.0.0.57:22-10.0.0.1:36120.service: Deactivated successfully.
Mar 3 13:47:38.805980 systemd[1]: session-25.scope: Deactivated successfully.
Mar 3 13:47:38.811477 systemd-logind[1547]: Session 25 logged out. Waiting for processes to exit.
Mar 3 13:47:38.819951 systemd[1]: Started sshd@25-10.0.0.57:22-10.0.0.1:56420.service - OpenSSH per-connection server daemon (10.0.0.1:56420).
Mar 3 13:47:38.825635 systemd-logind[1547]: Removed session 25.
Mar 3 13:47:38.917576 sshd[6409]: Accepted publickey for core from 10.0.0.1 port 56420 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:47:38.921479 sshd-session[6409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:38.942127 systemd-logind[1547]: New session 26 of user core.
Mar 3 13:47:38.944568 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 3 13:47:39.311549 sshd[6419]: Connection closed by 10.0.0.1 port 56420
Mar 3 13:47:39.311109 sshd-session[6409]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:39.322620 systemd[1]: sshd@25-10.0.0.57:22-10.0.0.1:56420.service: Deactivated successfully.
Mar 3 13:47:39.330471 systemd[1]: session-26.scope: Deactivated successfully.
Mar 3 13:47:39.339845 systemd-logind[1547]: Session 26 logged out. Waiting for processes to exit.
Mar 3 13:47:39.343913 systemd[1]: Started sshd@26-10.0.0.57:22-10.0.0.1:56436.service - OpenSSH per-connection server daemon (10.0.0.1:56436).
Mar 3 13:47:39.352043 systemd-logind[1547]: Removed session 26.
Mar 3 13:47:39.591713 sshd[6452]: Accepted publickey for core from 10.0.0.1 port 56436 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:47:39.594520 sshd-session[6452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:39.607041 systemd-logind[1547]: New session 27 of user core.
Mar 3 13:47:39.615564 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 3 13:47:40.322022 sshd[6459]: Connection closed by 10.0.0.1 port 56436
Mar 3 13:47:40.323743 sshd-session[6452]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:40.338688 systemd[1]: sshd@26-10.0.0.57:22-10.0.0.1:56436.service: Deactivated successfully.
Mar 3 13:47:40.355464 systemd[1]: session-27.scope: Deactivated successfully.
Mar 3 13:47:40.363129 systemd-logind[1547]: Session 27 logged out. Waiting for processes to exit.
Mar 3 13:47:40.379142 systemd-logind[1547]: Removed session 27.
Mar 3 13:47:43.150238 kubelet[2840]: E0303 13:47:43.149495 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:45.347663 systemd[1]: Started sshd@27-10.0.0.57:22-10.0.0.1:56442.service - OpenSSH per-connection server daemon (10.0.0.1:56442).
Mar 3 13:47:45.494592 sshd[6496]: Accepted publickey for core from 10.0.0.1 port 56442 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:47:45.499188 sshd-session[6496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:45.515600 systemd-logind[1547]: New session 28 of user core.
Mar 3 13:47:45.521962 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 3 13:47:45.790053 sshd[6499]: Connection closed by 10.0.0.1 port 56442
Mar 3 13:47:45.791044 sshd-session[6496]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:45.800605 systemd[1]: sshd@27-10.0.0.57:22-10.0.0.1:56442.service: Deactivated successfully.
Mar 3 13:47:45.804939 systemd[1]: session-28.scope: Deactivated successfully.
Mar 3 13:47:45.809448 systemd-logind[1547]: Session 28 logged out. Waiting for processes to exit.
Mar 3 13:47:45.820051 systemd-logind[1547]: Removed session 28.
Mar 3 13:47:50.807936 systemd[1]: Started sshd@28-10.0.0.57:22-10.0.0.1:48176.service - OpenSSH per-connection server daemon (10.0.0.1:48176).
Mar 3 13:47:50.896556 sshd[6512]: Accepted publickey for core from 10.0.0.1 port 48176 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:47:50.898665 sshd-session[6512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:50.910916 systemd-logind[1547]: New session 29 of user core.
Mar 3 13:47:50.927616 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 3 13:47:51.137074 sshd[6515]: Connection closed by 10.0.0.1 port 48176
Mar 3 13:47:51.137517 sshd-session[6512]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:51.144947 systemd[1]: sshd@28-10.0.0.57:22-10.0.0.1:48176.service: Deactivated successfully.
Mar 3 13:47:51.148498 systemd[1]: session-29.scope: Deactivated successfully.
Mar 3 13:47:51.152013 systemd-logind[1547]: Session 29 logged out. Waiting for processes to exit.
Mar 3 13:47:51.154754 systemd-logind[1547]: Removed session 29.
Mar 3 13:47:56.156490 systemd[1]: Started sshd@29-10.0.0.57:22-10.0.0.1:48184.service - OpenSSH per-connection server daemon (10.0.0.1:48184).
Mar 3 13:47:56.235148 sshd[6528]: Accepted publickey for core from 10.0.0.1 port 48184 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:47:56.237612 sshd-session[6528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:56.249724 systemd-logind[1547]: New session 30 of user core.
Mar 3 13:47:56.255990 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 3 13:47:56.421667 sshd[6531]: Connection closed by 10.0.0.1 port 48184
Mar 3 13:47:56.423665 sshd-session[6528]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:56.434200 systemd[1]: sshd@29-10.0.0.57:22-10.0.0.1:48184.service: Deactivated successfully.
Mar 3 13:47:56.437497 systemd[1]: session-30.scope: Deactivated successfully.
Mar 3 13:47:56.439796 systemd-logind[1547]: Session 30 logged out. Waiting for processes to exit.
Mar 3 13:47:56.445193 systemd[1]: Started sshd@30-10.0.0.57:22-10.0.0.1:48198.service - OpenSSH per-connection server daemon (10.0.0.1:48198).
Mar 3 13:47:56.448206 systemd-logind[1547]: Removed session 30.
Mar 3 13:47:56.529260 sshd[6545]: Accepted publickey for core from 10.0.0.1 port 48198 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:47:56.532396 sshd-session[6545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:56.542135 systemd-logind[1547]: New session 31 of user core.
Mar 3 13:47:56.553769 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 3 13:47:57.090923 sshd[6548]: Connection closed by 10.0.0.1 port 48198
Mar 3 13:47:57.091587 sshd-session[6545]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:57.106501 systemd[1]: sshd@30-10.0.0.57:22-10.0.0.1:48198.service: Deactivated successfully.
Mar 3 13:47:57.110634 systemd[1]: session-31.scope: Deactivated successfully.
Mar 3 13:47:57.112523 systemd-logind[1547]: Session 31 logged out. Waiting for processes to exit.
Mar 3 13:47:57.116049 systemd[1]: Started sshd@31-10.0.0.57:22-10.0.0.1:48210.service - OpenSSH per-connection server daemon (10.0.0.1:48210).
Mar 3 13:47:57.118657 systemd-logind[1547]: Removed session 31.
Mar 3 13:47:57.133608 kubelet[2840]: E0303 13:47:57.133420 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:47:57.258231 sshd[6559]: Accepted publickey for core from 10.0.0.1 port 48210 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:47:57.260718 sshd-session[6559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:57.271773 systemd-logind[1547]: New session 32 of user core.
Mar 3 13:47:57.279657 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 3 13:47:58.128734 sshd[6562]: Connection closed by 10.0.0.1 port 48210
Mar 3 13:47:58.133473 sshd-session[6559]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:58.151455 systemd[1]: sshd@31-10.0.0.57:22-10.0.0.1:48210.service: Deactivated successfully.
Mar 3 13:47:58.158518 systemd[1]: session-32.scope: Deactivated successfully.
Mar 3 13:47:58.162794 systemd-logind[1547]: Session 32 logged out. Waiting for processes to exit.
Mar 3 13:47:58.174497 systemd[1]: Started sshd@32-10.0.0.57:22-10.0.0.1:48218.service - OpenSSH per-connection server daemon (10.0.0.1:48218).
Mar 3 13:47:58.177057 systemd-logind[1547]: Removed session 32.
Mar 3 13:47:58.269007 sshd[6611]: Accepted publickey for core from 10.0.0.1 port 48218 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:47:58.271646 sshd-session[6611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:58.280102 systemd-logind[1547]: New session 33 of user core.
Mar 3 13:47:58.288611 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 3 13:47:59.038487 sshd[6619]: Connection closed by 10.0.0.1 port 48218
Mar 3 13:47:59.040816 sshd-session[6611]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:59.060244 systemd[1]: sshd@32-10.0.0.57:22-10.0.0.1:48218.service: Deactivated successfully.
Mar 3 13:47:59.066150 systemd[1]: session-33.scope: Deactivated successfully.
Mar 3 13:47:59.072542 systemd-logind[1547]: Session 33 logged out. Waiting for processes to exit.
Mar 3 13:47:59.082648 systemd[1]: Started sshd@33-10.0.0.57:22-10.0.0.1:40528.service - OpenSSH per-connection server daemon (10.0.0.1:40528).
Mar 3 13:47:59.088619 systemd-logind[1547]: Removed session 33.
Mar 3 13:47:59.166930 sshd[6630]: Accepted publickey for core from 10.0.0.1 port 40528 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:47:59.171616 sshd-session[6630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:47:59.182109 systemd-logind[1547]: New session 34 of user core.
Mar 3 13:47:59.191794 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 3 13:47:59.330243 sshd[6635]: Connection closed by 10.0.0.1 port 40528
Mar 3 13:47:59.331696 sshd-session[6630]: pam_unix(sshd:session): session closed for user core
Mar 3 13:47:59.338179 systemd[1]: sshd@33-10.0.0.57:22-10.0.0.1:40528.service: Deactivated successfully.
Mar 3 13:47:59.340947 systemd[1]: session-34.scope: Deactivated successfully.
Mar 3 13:47:59.343980 systemd-logind[1547]: Session 34 logged out. Waiting for processes to exit.
Mar 3 13:47:59.346593 systemd-logind[1547]: Removed session 34.
Mar 3 13:48:04.346439 systemd[1]: Started sshd@34-10.0.0.57:22-10.0.0.1:40542.service - OpenSSH per-connection server daemon (10.0.0.1:40542).
Mar 3 13:48:04.420615 sshd[6655]: Accepted publickey for core from 10.0.0.1 port 40542 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:48:04.423136 sshd-session[6655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:48:04.431998 systemd-logind[1547]: New session 35 of user core.
Mar 3 13:48:04.443512 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 3 13:48:04.606262 sshd[6658]: Connection closed by 10.0.0.1 port 40542
Mar 3 13:48:04.606751 sshd-session[6655]: pam_unix(sshd:session): session closed for user core
Mar 3 13:48:04.613524 systemd[1]: sshd@34-10.0.0.57:22-10.0.0.1:40542.service: Deactivated successfully.
Mar 3 13:48:04.616717 systemd[1]: session-35.scope: Deactivated successfully.
Mar 3 13:48:04.618162 systemd-logind[1547]: Session 35 logged out. Waiting for processes to exit.
Mar 3 13:48:04.620554 systemd-logind[1547]: Removed session 35.
Mar 3 13:48:05.659907 containerd[1556]: time="2026-03-03T13:48:05.659643683Z" level=warning msg="container event discarded" container=1c2df333a13ec6ccacd68d47f6bde0bf906e665c35b1a53d40d2f148ab6afd72 type=CONTAINER_CREATED_EVENT
Mar 3 13:48:05.659907 containerd[1556]: time="2026-03-03T13:48:05.659796377Z" level=warning msg="container event discarded" container=1c2df333a13ec6ccacd68d47f6bde0bf906e665c35b1a53d40d2f148ab6afd72 type=CONTAINER_STARTED_EVENT
Mar 3 13:48:06.351508 containerd[1556]: time="2026-03-03T13:48:06.351429107Z" level=warning msg="container event discarded" container=0c85a9ed78f32d2a6783a2254d53cd28e61ceeea71a2f3ff0ce415d6e1c0d1a9 type=CONTAINER_CREATED_EVENT
Mar 3 13:48:08.253936 containerd[1556]: time="2026-03-03T13:48:08.253592104Z" level=warning msg="container event discarded" container=0c85a9ed78f32d2a6783a2254d53cd28e61ceeea71a2f3ff0ce415d6e1c0d1a9 type=CONTAINER_STARTED_EVENT
Mar 3 13:48:09.631828 systemd[1]: Started sshd@35-10.0.0.57:22-10.0.0.1:36400.service - OpenSSH per-connection server daemon (10.0.0.1:36400).
Mar 3 13:48:09.723069 sshd[6722]: Accepted publickey for core from 10.0.0.1 port 36400 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:48:09.725731 sshd-session[6722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:48:09.736169 systemd-logind[1547]: New session 36 of user core.
Mar 3 13:48:09.748674 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 3 13:48:09.899402 sshd[6725]: Connection closed by 10.0.0.1 port 36400
Mar 3 13:48:09.899870 sshd-session[6722]: pam_unix(sshd:session): session closed for user core
Mar 3 13:48:09.907082 systemd[1]: sshd@35-10.0.0.57:22-10.0.0.1:36400.service: Deactivated successfully.
Mar 3 13:48:09.910973 systemd[1]: session-36.scope: Deactivated successfully.
Mar 3 13:48:09.913242 systemd-logind[1547]: Session 36 logged out. Waiting for processes to exit.
Mar 3 13:48:09.916106 systemd-logind[1547]: Removed session 36.
Mar 3 13:48:10.656789 containerd[1556]: time="2026-03-03T13:48:10.656667707Z" level=warning msg="container event discarded" container=6a845f6859fe8077f19edcaed78fd2025407f9107151290b535c1b201babbd40 type=CONTAINER_CREATED_EVENT
Mar 3 13:48:10.656789 containerd[1556]: time="2026-03-03T13:48:10.656751814Z" level=warning msg="container event discarded" container=6a845f6859fe8077f19edcaed78fd2025407f9107151290b535c1b201babbd40 type=CONTAINER_STARTED_EVENT
Mar 3 13:48:14.132712 kubelet[2840]: E0303 13:48:14.132673 2840 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:14.949986 systemd[1]: Started sshd@36-10.0.0.57:22-10.0.0.1:36416.service - OpenSSH per-connection server daemon (10.0.0.1:36416).
Mar 3 13:48:15.050172 sshd[6789]: Accepted publickey for core from 10.0.0.1 port 36416 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:48:15.052448 sshd-session[6789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:48:15.061494 systemd-logind[1547]: New session 37 of user core.
Mar 3 13:48:15.079751 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 3 13:48:15.290000 sshd[6792]: Connection closed by 10.0.0.1 port 36416
Mar 3 13:48:15.290200 sshd-session[6789]: pam_unix(sshd:session): session closed for user core
Mar 3 13:48:15.297840 systemd[1]: sshd@36-10.0.0.57:22-10.0.0.1:36416.service: Deactivated successfully.
Mar 3 13:48:15.302836 systemd[1]: session-37.scope: Deactivated successfully.
Mar 3 13:48:15.304846 systemd-logind[1547]: Session 37 logged out. Waiting for processes to exit.
Mar 3 13:48:15.308015 systemd-logind[1547]: Removed session 37.