Mar 7 02:33:36.840165 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:36:58 -00 2026
Mar 7 02:33:36.840194 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a7a6366d1281b0033776db782dbfd465316acbffbcd17ad79a282dcdbe79601a
Mar 7 02:33:36.840205 kernel: BIOS-provided physical RAM map:
Mar 7 02:33:36.840217 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 7 02:33:36.840225 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 7 02:33:36.840233 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 7 02:33:36.840243 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 7 02:33:36.840252 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 7 02:33:36.840260 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 7 02:33:36.840268 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 7 02:33:36.840277 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Mar 7 02:33:36.840285 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 7 02:33:36.840296 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 7 02:33:36.840305 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 7 02:33:36.840315 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 7 02:33:36.840324 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 7 02:33:36.840333 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 7 02:33:36.840345 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 7 02:33:36.840354 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 7 02:33:36.840363 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 7 02:33:36.840372 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 7 02:33:36.840381 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 7 02:33:36.840390 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 7 02:33:36.840399 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 7 02:33:36.840407 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 7 02:33:36.840416 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 7 02:33:36.840425 kernel: NX (Execute Disable) protection: active
Mar 7 02:33:36.840434 kernel: APIC: Static calls initialized
Mar 7 02:33:36.840467 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Mar 7 02:33:36.840476 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Mar 7 02:33:36.840485 kernel: extended physical RAM map:
Mar 7 02:33:36.840494 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 7 02:33:36.840503 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 7 02:33:36.840512 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 7 02:33:36.840521 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 7 02:33:36.840530 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 7 02:33:36.840539 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 7 02:33:36.840548 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 7 02:33:36.840557 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Mar 7 02:33:36.840573 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Mar 7 02:33:36.840586 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Mar 7 02:33:36.840596 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Mar 7 02:33:36.840605 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Mar 7 02:33:36.840615 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 7 02:33:36.840627 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 7 02:33:36.840672 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 7 02:33:36.840685 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 7 02:33:36.840695 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 7 02:33:36.840704 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 7 02:33:36.840714 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 7 02:33:36.840723 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 7 02:33:36.840733 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 7 02:33:36.840743 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 7 02:33:36.840752 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 7 02:33:36.840762 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 7 02:33:36.840774 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 7 02:33:36.840784 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 7 02:33:36.840793 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 7 02:33:36.840803 kernel: efi: EFI v2.7 by EDK II
Mar 7 02:33:36.840812 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Mar 7 02:33:36.840822 kernel: random: crng init done
Mar 7 02:33:36.840831 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 7 02:33:36.840841 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 7 02:33:36.840850 kernel: secureboot: Secure boot disabled
Mar 7 02:33:36.840859 kernel: SMBIOS 2.8 present.
Mar 7 02:33:36.840869 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 7 02:33:36.840881 kernel: DMI: Memory slots populated: 1/1
Mar 7 02:33:36.840890 kernel: Hypervisor detected: KVM
Mar 7 02:33:36.840900 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 7 02:33:36.840909 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 02:33:36.840919 kernel: kvm-clock: using sched offset of 12186414057 cycles
Mar 7 02:33:36.840929 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 02:33:36.840939 kernel: tsc: Detected 2445.424 MHz processor
Mar 7 02:33:36.840949 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 02:33:36.840958 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 02:33:36.840968 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 7 02:33:36.840977 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 7 02:33:36.840990 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 02:33:36.841000 kernel: Using GB pages for direct mapping
Mar 7 02:33:36.841009 kernel: ACPI: Early table checksum verification disabled
Mar 7 02:33:36.841019 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 7 02:33:36.841029 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 7 02:33:36.841039 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 02:33:36.841049 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 02:33:36.841058 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 7 02:33:36.841068 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 02:33:36.841081 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 02:33:36.841090 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 02:33:36.841100 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 02:33:36.841171 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 7 02:33:36.841182 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 7 02:33:36.841192 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 7 02:33:36.841202 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 7 02:33:36.841212 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 7 02:33:36.841224 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 7 02:33:36.841234 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 7 02:33:36.841244 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 7 02:33:36.841253 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 7 02:33:36.841263 kernel: No NUMA configuration found
Mar 7 02:33:36.841273 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Mar 7 02:33:36.841283 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Mar 7 02:33:36.841293 kernel: Zone ranges:
Mar 7 02:33:36.841302 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 02:33:36.841315 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Mar 7 02:33:36.841324 kernel: Normal empty
Mar 7 02:33:36.841334 kernel: Device empty
Mar 7 02:33:36.841343 kernel: Movable zone start for each node
Mar 7 02:33:36.841353 kernel: Early memory node ranges
Mar 7 02:33:36.841363 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 7 02:33:36.841372 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 7 02:33:36.841382 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 7 02:33:36.841392 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 7 02:33:36.841401 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Mar 7 02:33:36.841413 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Mar 7 02:33:36.841423 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Mar 7 02:33:36.841432 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Mar 7 02:33:36.841442 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Mar 7 02:33:36.841452 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 02:33:36.841470 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 7 02:33:36.841483 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 7 02:33:36.841493 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 02:33:36.841503 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 7 02:33:36.841513 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 7 02:33:36.841523 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 7 02:33:36.841533 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 7 02:33:36.841546 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Mar 7 02:33:36.841556 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 7 02:33:36.841567 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 02:33:36.841577 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 02:33:36.841587 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 7 02:33:36.841600 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 02:33:36.841610 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 02:33:36.841620 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 02:33:36.841630 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 02:33:36.843635 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 02:33:36.843685 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 02:33:36.843695 kernel: TSC deadline timer available
Mar 7 02:33:36.843705 kernel: CPU topo: Max. logical packages: 1
Mar 7 02:33:36.843715 kernel: CPU topo: Max. logical dies: 1
Mar 7 02:33:36.843730 kernel: CPU topo: Max. dies per package: 1
Mar 7 02:33:36.843740 kernel: CPU topo: Max. threads per core: 1
Mar 7 02:33:36.843750 kernel: CPU topo: Num. cores per package: 4
Mar 7 02:33:36.843761 kernel: CPU topo: Num. threads per package: 4
Mar 7 02:33:36.843771 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 7 02:33:36.843801 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 02:33:36.843812 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 7 02:33:36.843822 kernel: kvm-guest: setup PV sched yield
Mar 7 02:33:36.843832 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 7 02:33:36.843845 kernel: Booting paravirtualized kernel on KVM
Mar 7 02:33:36.843856 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 02:33:36.843866 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 7 02:33:36.843876 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 7 02:33:36.843887 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 7 02:33:36.843897 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 7 02:33:36.843907 kernel: kvm-guest: PV spinlocks enabled
Mar 7 02:33:36.843917 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 02:33:36.843929 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a7a6366d1281b0033776db782dbfd465316acbffbcd17ad79a282dcdbe79601a
Mar 7 02:33:36.843942 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 02:33:36.843953 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 02:33:36.843963 kernel: Fallback order for Node 0: 0
Mar 7 02:33:36.843973 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Mar 7 02:33:36.843983 kernel: Policy zone: DMA32
Mar 7 02:33:36.843994 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 02:33:36.844004 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 7 02:33:36.844014 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 7 02:33:36.844027 kernel: ftrace: allocated 157 pages with 5 groups
Mar 7 02:33:36.844037 kernel: Dynamic Preempt: voluntary
Mar 7 02:33:36.844047 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 02:33:36.844058 kernel: rcu: RCU event tracing is enabled.
Mar 7 02:33:36.844069 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 7 02:33:36.844080 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 02:33:36.844090 kernel: Rude variant of Tasks RCU enabled.
Mar 7 02:33:36.844100 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 02:33:36.844154 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 02:33:36.844165 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 7 02:33:36.844179 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 02:33:36.844190 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 02:33:36.844200 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 02:33:36.844210 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 7 02:33:36.844221 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 02:33:36.844231 kernel: Console: colour dummy device 80x25
Mar 7 02:33:36.844242 kernel: printk: legacy console [ttyS0] enabled
Mar 7 02:33:36.844252 kernel: ACPI: Core revision 20240827
Mar 7 02:33:36.844265 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 7 02:33:36.844275 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 02:33:36.844285 kernel: x2apic enabled
Mar 7 02:33:36.844296 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 02:33:36.844306 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 7 02:33:36.844317 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 7 02:33:36.844327 kernel: kvm-guest: setup PV IPIs
Mar 7 02:33:36.844337 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 7 02:33:36.844348 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Mar 7 02:33:36.844361 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Mar 7 02:33:36.844371 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 7 02:33:36.844381 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 7 02:33:36.844392 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 7 02:33:36.844403 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 02:33:36.844413 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 02:33:36.844423 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 02:33:36.844434 kernel: Speculative Store Bypass: Vulnerable
Mar 7 02:33:36.844444 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 7 02:33:36.844458 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 7 02:33:36.844468 kernel: active return thunk: srso_alias_return_thunk
Mar 7 02:33:36.844479 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 7 02:33:36.844489 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 7 02:33:36.844499 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 02:33:36.844510 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 02:33:36.844520 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 02:33:36.844530 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 02:33:36.844541 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 02:33:36.844554 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 7 02:33:36.844564 kernel: Freeing SMP alternatives memory: 32K
Mar 7 02:33:36.844575 kernel: pid_max: default: 32768 minimum: 301
Mar 7 02:33:36.844585 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 7 02:33:36.844595 kernel: landlock: Up and running.
Mar 7 02:33:36.844606 kernel: SELinux: Initializing.
Mar 7 02:33:36.844616 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 02:33:36.844627 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 02:33:36.844673 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 7 02:33:36.844684 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 7 02:33:36.844695 kernel: signal: max sigframe size: 1776
Mar 7 02:33:36.844705 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 02:33:36.844715 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 02:33:36.844726 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 7 02:33:36.844736 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 02:33:36.844747 kernel: smp: Bringing up secondary CPUs ...
Mar 7 02:33:36.844757 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 02:33:36.844770 kernel: .... node #0, CPUs: #1 #2 #3
Mar 7 02:33:36.844781 kernel: smp: Brought up 1 node, 4 CPUs
Mar 7 02:33:36.844791 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Mar 7 02:33:36.844802 kernel: Memory: 2414476K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46192K init, 2568K bss, 145388K reserved, 0K cma-reserved)
Mar 7 02:33:36.844812 kernel: devtmpfs: initialized
Mar 7 02:33:36.844822 kernel: x86/mm: Memory block size: 128MB
Mar 7 02:33:36.844833 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 7 02:33:36.844843 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 7 02:33:36.844854 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 7 02:33:36.844867 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 7 02:33:36.844877 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Mar 7 02:33:36.844888 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 7 02:33:36.844898 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 02:33:36.844908 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 7 02:33:36.844919 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 02:33:36.844929 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 02:33:36.844939 kernel: audit: initializing netlink subsys (disabled)
Mar 7 02:33:36.844949 kernel: audit: type=2000 audit(1772850808.894:1): state=initialized audit_enabled=0 res=1
Mar 7 02:33:36.844962 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 02:33:36.844972 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 02:33:36.844982 kernel: cpuidle: using governor menu
Mar 7 02:33:36.844993 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 02:33:36.845003 kernel: dca service started, version 1.12.1
Mar 7 02:33:36.845014 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Mar 7 02:33:36.845024 kernel: PCI: Using configuration type 1 for base access
Mar 7 02:33:36.845034 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 02:33:36.845045 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 02:33:36.845058 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 02:33:36.845068 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 02:33:36.845078 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 02:33:36.845089 kernel: ACPI: Added _OSI(Module Device)
Mar 7 02:33:36.845099 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 02:33:36.845152 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 02:33:36.845164 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 02:33:36.845175 kernel: ACPI: Interpreter enabled
Mar 7 02:33:36.845185 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 02:33:36.845198 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 02:33:36.845209 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 02:33:36.845219 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 02:33:36.845229 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 02:33:36.845240 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 02:33:36.845491 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 02:33:36.845699 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 02:33:36.845858 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 02:33:36.845875 kernel: PCI host bridge to bus 0000:00
Mar 7 02:33:36.846030 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 02:33:36.846234 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 02:33:36.848477 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 02:33:36.848917 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 7 02:33:36.851324 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 7 02:33:36.851495 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 7 02:33:36.851851 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 02:33:36.852949 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 7 02:33:36.853246 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 7 02:33:36.853465 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Mar 7 02:33:36.853702 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Mar 7 02:33:36.853879 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 7 02:33:36.854061 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 02:33:36.854322 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 7 02:33:36.854501 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Mar 7 02:33:36.854740 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Mar 7 02:33:36.854917 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 7 02:33:36.855257 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 7 02:33:36.855501 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Mar 7 02:33:36.860956 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Mar 7 02:33:36.861206 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 7 02:33:36.861390 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 7 02:33:36.861539 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Mar 7 02:33:36.861736 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Mar 7 02:33:36.861902 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 7 02:33:36.862079 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Mar 7 02:33:36.862320 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 7 02:33:36.862491 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 02:33:36.867933 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 16601 usecs
Mar 7 02:33:36.868197 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 7 02:33:36.868370 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Mar 7 02:33:36.868540 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Mar 7 02:33:36.870859 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 7 02:33:36.870989 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Mar 7 02:33:36.871000 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 02:33:36.871007 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 02:33:36.871014 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 02:33:36.871021 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 02:33:36.871028 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 02:33:36.871035 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 02:33:36.871046 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 02:33:36.871053 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 02:33:36.871060 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 02:33:36.871067 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 02:33:36.871074 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 02:33:36.871080 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 02:33:36.871087 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 02:33:36.871094 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 02:33:36.871101 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 02:33:36.871163 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 02:33:36.871172 kernel: iommu: Default domain type: Translated
Mar 7 02:33:36.871179 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 02:33:36.871186 kernel: efivars: Registered efivars operations
Mar 7 02:33:36.871193 kernel: PCI: Using ACPI for IRQ routing
Mar 7 02:33:36.871200 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 02:33:36.871207 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 7 02:33:36.871214 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Mar 7 02:33:36.871221 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Mar 7 02:33:36.871231 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Mar 7 02:33:36.871237 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Mar 7 02:33:36.871244 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Mar 7 02:33:36.871251 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Mar 7 02:33:36.871258 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Mar 7 02:33:36.871384 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 02:33:36.871502 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 02:33:36.871617 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 02:33:36.871630 kernel: vgaarb: loaded
Mar 7 02:33:36.871684 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 02:33:36.871698 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 02:33:36.871710 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 02:33:36.871720 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 02:33:36.871764 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 02:33:36.871774 kernel: pnp: PnP ACPI init
Mar 7 02:33:36.871999 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 7 02:33:36.872019 kernel: pnp: PnP ACPI: found 6 devices
Mar 7 02:33:36.872037 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 02:33:36.872047 kernel: NET: Registered PF_INET protocol family
Mar 7 02:33:36.872060 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 02:33:36.872071 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 02:33:36.872082 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 02:33:36.872190 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 02:33:36.872205 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 02:33:36.872218 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 02:33:36.872232 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 02:33:36.872245 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 02:33:36.872255 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 02:33:36.872268 kernel: NET: Registered PF_XDP protocol family
Mar 7 02:33:36.872449 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Mar 7 02:33:36.872628 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Mar 7 02:33:36.872859 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 02:33:36.873061 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 02:33:36.873298 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 02:33:36.873462 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 7 02:33:36.873622 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 7 02:33:36.873831 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 7 02:33:36.873848 kernel: PCI: CLS 0 bytes, default 64
Mar 7 02:33:36.873862 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Mar 7 02:33:36.873872 kernel: Initialise system trusted keyrings
Mar 7 02:33:36.873884 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 02:33:36.873896 kernel: Key type asymmetric registered
Mar 7 02:33:36.873913 kernel: Asymmetric key parser 'x509' registered
Mar 7 02:33:36.873924 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 7 02:33:36.873935 kernel: io scheduler mq-deadline registered
Mar 7 02:33:36.873948 kernel: io scheduler kyber registered
Mar 7 02:33:36.873958 kernel: io scheduler bfq registered
Mar 7 02:33:36.873971 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 02:33:36.873983 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 02:33:36.874000 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 02:33:36.874014 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 7 02:33:36.874026 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 02:33:36.874037 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 02:33:36.874051 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 02:33:36.874061 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 02:33:36.874074 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 02:33:36.874320 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 7 02:33:36.874492 kernel: rtc_cmos 00:04: registered as rtc0
Mar 7 02:33:36.874709 kernel: rtc_cmos 00:04: setting system clock to 2026-03-07T02:33:35 UTC (1772850815)
Mar 7 02:33:36.874909 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 7 02:33:36.874925 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 02:33:36.874937 kernel: efifb: probing for efifb
Mar 7 02:33:36.874950 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 02:33:36.874963 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 7 02:33:36.874978 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 7 02:33:36.874990 kernel: efifb: scrolling: redraw
Mar 7 02:33:36.875003 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 7 02:33:36.875013 kernel: Console: switching to colour frame buffer device 160x50
Mar 7 02:33:36.875024 kernel: fb0: EFI VGA frame buffer device
Mar 7 02:33:36.875036 kernel: pstore: Using crash dump compression: deflate
Mar 7 02:33:36.875046 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 7 02:33:36.875060 kernel: NET: Registered PF_INET6 protocol family
Mar 7 02:33:36.875070 kernel: Segment Routing with IPv6
Mar 7 02:33:36.875086 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 02:33:36.875096 kernel: NET: Registered PF_PACKET protocol family
Mar 7 02:33:36.875172 kernel: Key type dns_resolver registered
Mar 7 02:33:36.875188 kernel: IPI shorthand broadcast: enabled
Mar 7 02:33:36.875198 kernel: sched_clock: Marking stable (5647022519, 1803530157)->(8409614769, -959062093)
Mar 7 02:33:36.875210 kernel: registered taskstats version 1
Mar 7 02:33:36.875221 kernel: Loading compiled-in X.509 certificates
Mar 7 02:33:36.875234 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 4993b830947107214da89b35109513d59d4558ae'
Mar 7 02:33:36.875244 kernel: Demotion targets for Node 0: null
Mar 7 02:33:36.875257 kernel: Key type .fscrypt registered
Mar 7 02:33:36.875272 kernel: Key type fscrypt-provisioning registered
Mar 7 02:33:36.875285 kernel: ima: No TPM chip found, activating
TPM-bypass! Mar 7 02:33:36.875296 kernel: ima: Allocated hash algorithm: sha1 Mar 7 02:33:36.875308 kernel: ima: No architecture policies found Mar 7 02:33:36.875319 kernel: clk: Disabling unused clocks Mar 7 02:33:36.875330 kernel: Warning: unable to open an initial console. Mar 7 02:33:36.875343 kernel: Freeing unused kernel image (initmem) memory: 46192K Mar 7 02:33:36.875353 kernel: Write protecting the kernel read-only data: 40960k Mar 7 02:33:36.875369 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Mar 7 02:33:36.875381 kernel: Run /init as init process Mar 7 02:33:36.875393 kernel: with arguments: Mar 7 02:33:36.875403 kernel: /init Mar 7 02:33:36.875416 kernel: with environment: Mar 7 02:33:36.875426 kernel: HOME=/ Mar 7 02:33:36.875439 kernel: TERM=linux Mar 7 02:33:36.875451 systemd[1]: Successfully made /usr/ read-only. Mar 7 02:33:36.875468 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 7 02:33:36.875487 systemd[1]: Detected virtualization kvm. Mar 7 02:33:36.875497 systemd[1]: Detected architecture x86-64. Mar 7 02:33:36.875510 systemd[1]: Running in initrd. Mar 7 02:33:36.875521 systemd[1]: No hostname configured, using default hostname. Mar 7 02:33:36.875534 systemd[1]: Hostname set to . Mar 7 02:33:36.875546 systemd[1]: Initializing machine ID from VM UUID. Mar 7 02:33:36.875558 systemd[1]: Queued start job for default target initrd.target. Mar 7 02:33:36.875572 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 02:33:36.875587 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 7 02:33:36.875599 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 7 02:33:36.875612 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 02:33:36.875624 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 7 02:33:36.875676 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 7 02:33:36.875692 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 7 02:33:36.875711 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 7 02:33:36.875722 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 02:33:36.875733 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 02:33:36.875745 systemd[1]: Reached target paths.target - Path Units. Mar 7 02:33:36.875760 systemd[1]: Reached target slices.target - Slice Units. Mar 7 02:33:36.875771 systemd[1]: Reached target swap.target - Swaps. Mar 7 02:33:36.875781 systemd[1]: Reached target timers.target - Timer Units. Mar 7 02:33:36.875793 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 02:33:36.875810 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 02:33:36.875823 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 7 02:33:36.875834 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 7 02:33:36.875846 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 02:33:36.875859 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 02:33:36.875870 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 7 02:33:36.875884 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 02:33:36.875896 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 7 02:33:36.875908 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 02:33:36.875923 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 7 02:33:36.875938 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 7 02:33:36.875949 systemd[1]: Starting systemd-fsck-usr.service... Mar 7 02:33:36.875962 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 02:33:36.875976 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 02:33:36.875991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 02:33:36.876002 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 7 02:33:36.876087 systemd-journald[203]: Collecting audit messages is disabled. Mar 7 02:33:36.876198 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 02:33:36.876213 systemd[1]: Finished systemd-fsck-usr.service. Mar 7 02:33:36.876226 systemd-journald[203]: Journal started Mar 7 02:33:36.876252 systemd-journald[203]: Runtime Journal (/run/log/journal/8d3b7494797149a9b1e1fe9fe7a62faa) is 6M, max 48.1M, 42.1M free. Mar 7 02:33:36.861370 systemd-modules-load[204]: Inserted module 'overlay' Mar 7 02:33:36.896827 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 02:33:36.896877 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 02:33:36.894292 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 02:33:36.899226 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 7 02:33:36.909294 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 02:33:36.990387 systemd-tmpfiles[216]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 7 02:33:36.992035 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 02:33:37.020295 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 02:33:37.020813 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 02:33:37.116301 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 7 02:33:37.119260 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 02:33:37.149883 kernel: Bridge firewalling registered Mar 7 02:33:37.149980 systemd-modules-load[204]: Inserted module 'br_netfilter' Mar 7 02:33:37.164960 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 7 02:33:37.176040 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 02:33:37.212608 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 02:33:37.240421 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 02:33:37.275906 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a7a6366d1281b0033776db782dbfd465316acbffbcd17ad79a282dcdbe79601a Mar 7 02:33:37.337491 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 7 02:33:37.354713 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 02:33:37.466451 systemd-resolved[273]: Positive Trust Anchors: Mar 7 02:33:37.466492 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 02:33:37.466532 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 02:33:37.469614 systemd-resolved[273]: Defaulting to hostname 'linux'. Mar 7 02:33:37.471241 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 02:33:37.491399 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 02:33:37.587747 kernel: SCSI subsystem initialized Mar 7 02:33:37.600711 kernel: Loading iSCSI transport class v2.0-870. Mar 7 02:33:37.665525 kernel: iscsi: registered transport (tcp) Mar 7 02:33:37.708498 kernel: iscsi: registered transport (qla4xxx) Mar 7 02:33:37.708569 kernel: QLogic iSCSI HBA Driver Mar 7 02:33:37.788607 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 7 02:33:37.846953 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 7 02:33:37.870739 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 7 02:33:38.035877 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 7 02:33:38.049459 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Mar 7 02:33:38.138742 kernel: raid6: avx2x4 gen() 20923 MB/s Mar 7 02:33:38.155780 kernel: raid6: avx2x2 gen() 18205 MB/s Mar 7 02:33:38.174871 kernel: raid6: avx2x1 gen() 14043 MB/s Mar 7 02:33:38.174928 kernel: raid6: using algorithm avx2x4 gen() 20923 MB/s Mar 7 02:33:38.197346 kernel: raid6: .... xor() 2776 MB/s, rmw enabled Mar 7 02:33:38.197407 kernel: raid6: using avx2x2 recovery algorithm Mar 7 02:33:38.228172 kernel: xor: automatically using best checksumming function avx Mar 7 02:33:38.612747 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 02:33:38.640753 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 02:33:38.670382 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 02:33:38.755429 systemd-udevd[453]: Using default interface naming scheme 'v255'. Mar 7 02:33:38.770269 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 02:33:38.802219 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 02:33:38.874740 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Mar 7 02:33:38.955689 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 02:33:38.974878 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 02:33:44.434209 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 4348029709 wd_nsec: 4348028106 Mar 7 02:33:44.591812 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 02:33:44.626784 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 02:33:44.878237 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 02:33:44.885540 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 7 02:33:44.888591 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 7 02:33:44.888869 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 02:33:44.959048 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 7 02:33:44.939469 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 02:33:45.014327 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 02:33:45.014363 kernel: GPT:9289727 != 19775487 Mar 7 02:33:45.014539 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 02:33:45.014558 kernel: GPT:9289727 != 19775487 Mar 7 02:33:45.014572 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 02:33:45.014585 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 02:33:44.992560 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 02:33:45.032968 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 7 02:33:45.089503 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 02:33:45.089757 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 02:33:45.128308 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 02:33:45.377445 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 02:33:45.581409 kernel: libata version 3.00 loaded. Mar 7 02:33:45.657104 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Mar 7 02:33:45.890978 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 7 02:33:45.997609 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Mar 7 02:33:46.002330 kernel: AES CTR mode by8 optimization enabled Mar 7 02:33:46.045046 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 7 02:33:46.118324 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 7 02:33:46.235997 kernel: ahci 0000:00:1f.2: version 3.0 Mar 7 02:33:46.236418 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 7 02:33:46.236441 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 7 02:33:46.275769 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 7 02:33:46.276087 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 7 02:33:46.276381 kernel: scsi host0: ahci Mar 7 02:33:46.196390 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 7 02:33:46.329225 kernel: scsi host1: ahci Mar 7 02:33:46.328958 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 02:33:46.391194 kernel: scsi host2: ahci Mar 7 02:33:46.398213 kernel: scsi host3: ahci Mar 7 02:33:46.405203 kernel: scsi host4: ahci Mar 7 02:33:46.419218 kernel: scsi host5: ahci Mar 7 02:33:46.419562 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Mar 7 02:33:46.419585 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Mar 7 02:33:46.432753 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Mar 7 02:33:46.432811 disk-uuid[602]: Primary Header is updated. Mar 7 02:33:46.432811 disk-uuid[602]: Secondary Entries is updated. Mar 7 02:33:46.432811 disk-uuid[602]: Secondary Header is updated. 
Mar 7 02:33:46.530034 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 02:33:46.530073 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Mar 7 02:33:46.530092 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Mar 7 02:33:46.530108 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Mar 7 02:33:46.805394 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 7 02:33:46.805462 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 7 02:33:46.805479 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 7 02:33:46.805494 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 7 02:33:46.818325 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 7 02:33:46.836780 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 7 02:33:46.854847 kernel: ata3.00: LPM support broken, forcing max_power Mar 7 02:33:46.854908 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 7 02:33:46.854925 kernel: ata3.00: applying bridge limits Mar 7 02:33:46.905618 kernel: ata3.00: LPM support broken, forcing max_power Mar 7 02:33:46.905712 kernel: ata3.00: configured for UDMA/100 Mar 7 02:33:46.935785 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 7 02:33:47.143786 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 7 02:33:47.146377 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 02:33:47.195913 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 7 02:33:47.580925 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 02:33:47.580991 disk-uuid[607]: The operation has completed successfully. Mar 7 02:33:47.621381 kernel: block device autoloading is deprecated and will be removed. Mar 7 02:33:48.172590 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 02:33:48.175935 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Mar 7 02:33:48.201584 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 02:33:48.271875 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 02:33:48.294041 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 02:33:48.350912 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 02:33:48.428633 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 02:33:48.531918 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 02:33:48.609969 sh[643]: Success Mar 7 02:33:48.634925 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 02:33:48.825511 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 7 02:33:48.825595 kernel: device-mapper: uevent: version 1.0.3 Mar 7 02:33:48.839808 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 7 02:33:49.017388 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 7 02:33:49.161825 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 02:33:49.181838 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 02:33:49.239472 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 7 02:33:49.307847 kernel: BTRFS: device fsid 13a9d0ca-821a-4a58-bd70-d4baef218662 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (662) Mar 7 02:33:49.307878 kernel: BTRFS info (device dm-0): first mount of filesystem 13a9d0ca-821a-4a58-bd70-d4baef218662 Mar 7 02:33:49.309256 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 02:33:49.377402 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 7 02:33:49.377487 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 7 02:33:49.382307 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 02:33:49.386576 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 7 02:33:49.421345 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 02:33:49.427099 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 02:33:49.492026 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 02:33:49.582424 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (687) Mar 7 02:33:49.597201 kernel: BTRFS info (device vda6): first mount of filesystem 8d83d2c9-1413-453e-b695-56a2340fa565 Mar 7 02:33:49.597255 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 02:33:49.656400 kernel: BTRFS info (device vda6): turning on async discard Mar 7 02:33:49.656474 kernel: BTRFS info (device vda6): enabling free space tree Mar 7 02:33:49.691202 kernel: BTRFS info (device vda6): last unmount of filesystem 8d83d2c9-1413-453e-b695-56a2340fa565 Mar 7 02:33:49.718409 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 02:33:49.735561 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 7 02:33:50.977495 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 02:33:50.987368 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 02:33:51.082762 ignition[749]: Ignition 2.22.0 Mar 7 02:33:51.082802 ignition[749]: Stage: fetch-offline Mar 7 02:33:51.082992 ignition[749]: no configs at "/usr/lib/ignition/base.d" Mar 7 02:33:51.083006 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 02:33:51.083730 ignition[749]: parsed url from cmdline: "" Mar 7 02:33:51.083738 ignition[749]: no config URL provided Mar 7 02:33:51.083779 ignition[749]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 02:33:51.083798 ignition[749]: no config at "/usr/lib/ignition/user.ign" Mar 7 02:33:51.138543 systemd-networkd[836]: lo: Link UP Mar 7 02:33:51.083868 ignition[749]: op(1): [started] loading QEMU firmware config module Mar 7 02:33:51.138548 systemd-networkd[836]: lo: Gained carrier Mar 7 02:33:51.083876 ignition[749]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 7 02:33:51.141534 systemd-networkd[836]: Enumeration completed Mar 7 02:33:51.142779 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 02:33:51.210698 ignition[749]: op(1): [finished] loading QEMU firmware config module Mar 7 02:33:51.147504 systemd-networkd[836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 02:33:51.147511 systemd-networkd[836]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 02:33:51.159045 systemd[1]: Reached target network.target - Network. Mar 7 02:33:51.162789 systemd-networkd[836]: eth0: Link UP Mar 7 02:33:51.165879 systemd-networkd[836]: eth0: Gained carrier Mar 7 02:33:51.165897 systemd-networkd[836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 7 02:33:51.226700 systemd-networkd[836]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 02:33:51.509239 systemd-resolved[273]: Detected conflict on linux IN A 10.0.0.118 Mar 7 02:33:51.509284 systemd-resolved[273]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. Mar 7 02:33:51.767273 ignition[749]: parsing config with SHA512: d9eb1c645161c75355b6fa94f39e7cc4298fdb0644ccec109c9ce154c3d90b5d642a0e5f88003318f0bdad5c2f93095e94ba0cc1a4010a8f0019eac779c5ce12 Mar 7 02:33:51.798566 unknown[749]: fetched base config from "system" Mar 7 02:33:51.799062 unknown[749]: fetched user config from "qemu" Mar 7 02:33:51.802301 ignition[749]: fetch-offline: fetch-offline passed Mar 7 02:33:51.802731 ignition[749]: Ignition finished successfully Mar 7 02:33:51.846233 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 02:33:51.851577 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 7 02:33:51.858932 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 7 02:33:52.064065 ignition[844]: Ignition 2.22.0 Mar 7 02:33:52.068894 ignition[844]: Stage: kargs Mar 7 02:33:52.074251 ignition[844]: no configs at "/usr/lib/ignition/base.d" Mar 7 02:33:52.074274 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 02:33:52.200472 ignition[844]: kargs: kargs passed Mar 7 02:33:52.227474 ignition[844]: Ignition finished successfully Mar 7 02:33:52.260202 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 02:33:52.286588 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 7 02:33:52.414376 systemd-networkd[836]: eth0: Gained IPv6LL Mar 7 02:33:52.493801 ignition[851]: Ignition 2.22.0 Mar 7 02:33:52.499688 ignition[851]: Stage: disks Mar 7 02:33:52.499885 ignition[851]: no configs at "/usr/lib/ignition/base.d" Mar 7 02:33:52.499899 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 02:33:52.504900 ignition[851]: disks: disks passed Mar 7 02:33:52.504981 ignition[851]: Ignition finished successfully Mar 7 02:33:52.531699 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 02:33:52.537836 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 02:33:52.555917 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 02:33:52.570440 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 02:33:52.581582 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 02:33:52.581635 systemd[1]: Reached target basic.target - Basic System. Mar 7 02:33:52.648064 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 7 02:33:52.741393 systemd-fsck[862]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 7 02:33:52.779053 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 02:33:52.793291 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 7 02:33:53.165753 kernel: hrtimer: interrupt took 4846420 ns Mar 7 02:33:53.578266 kernel: EXT4-fs (vda9): mounted filesystem 7661fa34-1ec8-43b3-a7b4-2fe8e4393215 r/w with ordered data mode. Quota mode: none. Mar 7 02:33:53.581515 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 02:33:53.593697 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 02:33:53.609907 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 02:33:53.709272 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Mar 7 02:33:53.784028 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (871) Mar 7 02:33:53.722059 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 7 02:33:53.819085 kernel: BTRFS info (device vda6): first mount of filesystem 8d83d2c9-1413-453e-b695-56a2340fa565 Mar 7 02:33:53.819175 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 02:33:53.722193 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 02:33:53.722234 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 02:33:53.842747 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 02:33:53.880962 kernel: BTRFS info (device vda6): turning on async discard Mar 7 02:33:53.880992 kernel: BTRFS info (device vda6): enabling free space tree Mar 7 02:33:53.865902 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 02:33:53.874035 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 7 02:33:54.169760 initrd-setup-root[895]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 02:33:54.202706 initrd-setup-root[902]: cut: /sysroot/etc/group: No such file or directory Mar 7 02:33:54.234239 initrd-setup-root[909]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 02:33:54.267735 initrd-setup-root[916]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 02:33:55.071696 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 02:33:55.102365 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 02:33:55.142904 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Mar 7 02:33:55.194840 kernel: BTRFS info (device vda6): last unmount of filesystem 8d83d2c9-1413-453e-b695-56a2340fa565
Mar 7 02:33:55.188498 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 02:33:55.331238 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 02:33:55.494108 ignition[984]: INFO : Ignition 2.22.0
Mar 7 02:33:55.494108 ignition[984]: INFO : Stage: mount
Mar 7 02:33:55.518871 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 02:33:55.518871 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:33:55.518871 ignition[984]: INFO : mount: mount passed
Mar 7 02:33:55.518871 ignition[984]: INFO : Ignition finished successfully
Mar 7 02:33:55.542817 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 02:33:55.567932 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 02:33:55.637009 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 02:33:55.722196 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (997)
Mar 7 02:33:55.732816 kernel: BTRFS info (device vda6): first mount of filesystem 8d83d2c9-1413-453e-b695-56a2340fa565
Mar 7 02:33:55.732909 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 02:33:55.761774 kernel: BTRFS info (device vda6): turning on async discard
Mar 7 02:33:55.761861 kernel: BTRFS info (device vda6): enabling free space tree
Mar 7 02:33:55.773724 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 02:33:55.922421 ignition[1014]: INFO : Ignition 2.22.0
Mar 7 02:33:55.922421 ignition[1014]: INFO : Stage: files
Mar 7 02:33:55.940688 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 02:33:55.940688 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:33:55.940688 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 02:33:55.940688 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 02:33:55.940688 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 02:33:56.007269 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 02:33:56.007269 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 02:33:56.007269 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 02:33:55.997610 unknown[1014]: wrote ssh authorized keys file for user: core
Mar 7 02:33:56.055220 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 02:33:56.055220 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 02:33:56.153224 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 02:33:56.919096 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 02:33:56.919096 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 02:33:56.962288 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 02:33:56.962288 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 02:33:56.985348 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 02:33:56.985348 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 02:33:56.985348 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 02:33:56.985348 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 02:33:56.985348 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 02:33:57.031189 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 02:33:57.031189 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 02:33:57.031189 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 02:33:57.086270 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 02:33:57.086270 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 02:33:57.086270 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 7 02:33:57.552760 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 02:33:58.471719 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 02:33:58.471719 ignition[1014]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 02:33:58.520977 ignition[1014]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 02:33:58.592867 ignition[1014]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 02:33:58.592867 ignition[1014]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 02:33:58.592867 ignition[1014]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 7 02:33:58.592867 ignition[1014]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 02:33:58.592867 ignition[1014]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 02:33:58.592867 ignition[1014]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 7 02:33:58.592867 ignition[1014]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 7 02:33:58.763209 ignition[1014]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 02:33:58.781219 ignition[1014]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 02:33:58.781219 ignition[1014]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 7 02:33:58.798606 ignition[1014]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 02:33:58.798606 ignition[1014]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 02:33:58.798606 ignition[1014]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 02:33:58.798606 ignition[1014]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 02:33:58.798606 ignition[1014]: INFO : files: files passed
Mar 7 02:33:58.798606 ignition[1014]: INFO : Ignition finished successfully
Mar 7 02:33:58.828200 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 02:33:58.855634 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 02:33:58.880773 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 02:33:58.903616 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 02:33:58.903871 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 02:33:58.928462 initrd-setup-root-after-ignition[1042]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 7 02:33:58.940951 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 02:33:58.940951 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 02:33:58.977522 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 02:33:58.941917 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 02:33:58.952445 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 02:33:58.955866 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 02:33:59.056473 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 02:33:59.056912 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 02:33:59.083610 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 02:33:59.092430 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 02:33:59.092562 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 02:33:59.113708 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 02:33:59.213551 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 02:33:59.225638 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 02:33:59.268913 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 02:33:59.274319 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 02:33:59.283321 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 02:33:59.287094 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 02:33:59.288716 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 02:33:59.311391 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 02:33:59.330277 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 02:33:59.341990 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 02:33:59.348095 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 02:33:59.352765 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 02:33:59.356860 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 7 02:33:59.393866 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 02:33:59.398714 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 02:33:59.410925 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 02:33:59.425050 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 02:33:59.435473 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 02:33:59.438807 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 02:33:59.438988 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 02:33:59.462944 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 02:33:59.467257 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 02:33:59.481638 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 02:33:59.483369 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 02:33:59.488824 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 02:33:59.488985 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 02:33:59.515059 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 02:33:59.515326 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 02:33:59.523930 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 02:33:59.533940 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 02:33:59.534565 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 02:33:59.542779 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 02:33:59.545324 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 02:33:59.564907 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 02:33:59.565023 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 02:33:59.572603 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 02:33:59.572924 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 02:33:59.582578 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 02:33:59.584881 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 02:33:59.598782 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 02:33:59.598934 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 02:33:59.624507 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 02:33:59.756807 ignition[1070]: INFO : Ignition 2.22.0
Mar 7 02:33:59.756807 ignition[1070]: INFO : Stage: umount
Mar 7 02:33:59.756807 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 02:33:59.756807 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:33:59.756807 ignition[1070]: INFO : umount: umount passed
Mar 7 02:33:59.756807 ignition[1070]: INFO : Ignition finished successfully
Mar 7 02:33:59.650369 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 02:33:59.669810 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 02:33:59.670007 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 02:33:59.686726 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 02:33:59.686886 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 02:33:59.768872 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 02:33:59.778633 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 02:33:59.778856 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 02:33:59.790084 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 02:33:59.790354 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 02:33:59.816432 systemd[1]: Stopped target network.target - Network.
Mar 7 02:33:59.827101 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 02:33:59.827296 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 02:33:59.961891 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 02:33:59.962031 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 02:33:59.971938 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 02:33:59.972015 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 02:33:59.986003 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 02:33:59.986065 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 02:34:00.000022 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 02:34:00.000089 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 02:34:00.000722 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 02:34:00.000888 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 02:34:00.001504 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 02:34:00.001631 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 02:34:00.074098 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 02:34:00.074389 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 02:34:00.110998 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 7 02:34:00.113002 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 02:34:00.113249 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 02:34:00.155024 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 7 02:34:00.156493 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 7 02:34:00.164777 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 02:34:00.164848 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 02:34:00.204734 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 02:34:00.237815 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 02:34:00.237919 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 02:34:00.266730 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 02:34:00.266879 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 02:34:00.289909 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 02:34:00.289994 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 02:34:00.308807 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 02:34:00.308888 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 02:34:00.336862 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 02:34:00.349638 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 7 02:34:00.351950 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 7 02:34:00.397902 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 02:34:00.422095 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 02:34:00.463555 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 02:34:00.463734 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 02:34:00.555466 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 02:34:00.555597 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 02:34:00.574252 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 02:34:00.574314 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 02:34:00.577732 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 02:34:00.577812 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 02:34:00.597842 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 02:34:00.597927 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 02:34:00.613438 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 02:34:00.613504 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 02:34:00.630858 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 02:34:00.702349 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 7 02:34:00.702509 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 02:34:00.749247 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 02:34:00.749369 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 02:34:00.782090 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 02:34:00.782539 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 02:34:00.799760 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 02:34:00.799849 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 02:34:00.818041 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 02:34:00.818244 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 02:34:00.832947 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 7 02:34:00.833035 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Mar 7 02:34:00.833099 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 7 02:34:00.833254 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 7 02:34:00.837002 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 02:34:00.837236 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 02:34:00.843773 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 02:34:00.846945 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 02:34:00.949957 systemd[1]: Switching root.
Mar 7 02:34:01.020630 systemd-journald[203]: Journal stopped
Mar 7 02:34:05.775200 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Mar 7 02:34:05.775299 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 02:34:05.775323 kernel: SELinux: policy capability open_perms=1
Mar 7 02:34:05.775340 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 02:34:05.775374 kernel: SELinux: policy capability always_check_network=0
Mar 7 02:34:05.775391 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 02:34:05.775406 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 02:34:05.775487 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 02:34:05.775505 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 02:34:05.775520 kernel: SELinux: policy capability userspace_initial_context=0
Mar 7 02:34:05.775536 kernel: audit: type=1403 audit(1772850841.477:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 02:34:05.775556 systemd[1]: Successfully loaded SELinux policy in 190.179ms.
Mar 7 02:34:05.775575 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.873ms.
Mar 7 02:34:05.775597 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 7 02:34:05.775614 systemd[1]: Detected virtualization kvm.
Mar 7 02:34:05.775631 systemd[1]: Detected architecture x86-64.
Mar 7 02:34:05.775647 systemd[1]: Detected first boot.
Mar 7 02:34:05.775909 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 02:34:05.775934 zram_generator::config[1115]: No configuration found.
Mar 7 02:34:05.775953 kernel: Guest personality initialized and is inactive
Mar 7 02:34:05.775971 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 7 02:34:05.775994 kernel: Initialized host personality
Mar 7 02:34:05.776020 kernel: NET: Registered PF_VSOCK protocol family
Mar 7 02:34:05.776038 systemd[1]: Populated /etc with preset unit settings.
Mar 7 02:34:05.776057 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 7 02:34:05.776075 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 02:34:05.776092 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 02:34:05.776108 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 02:34:05.776302 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 02:34:05.776323 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 02:34:05.776348 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 02:34:05.776367 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 02:34:05.776387 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 02:34:05.776407 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 02:34:05.776426 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 02:34:05.776445 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 02:34:05.776471 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 02:34:05.776491 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 02:34:05.776517 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 02:34:05.776539 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 02:34:05.776612 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 02:34:05.776633 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 02:34:05.776651 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 02:34:05.776720 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 02:34:05.776740 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 02:34:05.776758 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 02:34:05.776782 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 02:34:05.776800 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 02:34:05.776817 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 02:34:05.776835 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 02:34:05.776853 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 02:34:05.776870 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 02:34:05.776887 systemd[1]: Reached target swap.target - Swaps.
Mar 7 02:34:05.776905 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 02:34:05.776925 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 02:34:05.776949 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 7 02:34:05.776968 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 02:34:05.776987 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 02:34:05.777004 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 02:34:05.777021 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 02:34:05.777039 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 02:34:05.777106 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 02:34:05.778387 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 02:34:05.778405 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:34:05.778426 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 02:34:05.778442 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 02:34:05.778461 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 02:34:05.778479 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 02:34:05.778497 systemd[1]: Reached target machines.target - Containers.
Mar 7 02:34:05.778517 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 02:34:05.778537 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 02:34:05.778556 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 02:34:05.778574 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 02:34:05.778595 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 02:34:05.778614 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 02:34:05.778632 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 02:34:05.778652 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 02:34:05.778716 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 02:34:05.778736 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 02:34:05.778756 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 02:34:05.778774 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 02:34:05.778797 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 02:34:05.778868 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 02:34:05.778886 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 7 02:34:05.778901 kernel: fuse: init (API version 7.41)
Mar 7 02:34:05.778919 kernel: ACPI: bus type drm_connector registered
Mar 7 02:34:05.778937 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 02:34:05.778956 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 02:34:05.778975 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 02:34:05.778993 kernel: loop: module loaded
Mar 7 02:34:05.779013 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 02:34:05.779029 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 7 02:34:05.779045 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 02:34:05.779066 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 02:34:05.779084 systemd[1]: Stopped verity-setup.service.
Mar 7 02:34:05.779104 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:34:05.779345 systemd-journald[1200]: Collecting audit messages is disabled.
Mar 7 02:34:05.779383 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 02:34:05.779403 systemd-journald[1200]: Journal started
Mar 7 02:34:05.779442 systemd-journald[1200]: Runtime Journal (/run/log/journal/8d3b7494797149a9b1e1fe9fe7a62faa) is 6M, max 48.1M, 42.1M free.
Mar 7 02:34:05.792865 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 02:34:03.777277 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 02:34:03.819551 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 7 02:34:03.820629 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 02:34:03.823767 systemd[1]: systemd-journald.service: Consumed 1.371s CPU time.
Mar 7 02:34:05.822830 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 02:34:05.824396 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 02:34:05.830800 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 02:34:05.839099 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 02:34:05.851031 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 02:34:05.869626 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 02:34:05.876610 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 02:34:05.883809 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 02:34:05.884331 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 02:34:05.894926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 02:34:05.895344 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 02:34:05.901997 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 02:34:05.902309 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 02:34:05.909513 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 02:34:05.909807 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 02:34:05.922979 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 02:34:05.924874 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 02:34:05.937509 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 02:34:05.937985 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 02:34:05.946371 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 02:34:05.964073 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 02:34:05.976837 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 02:34:05.991297 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 7 02:34:06.021617 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 02:34:06.033392 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 02:34:06.044190 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 02:34:06.051487 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 02:34:06.051592 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 02:34:06.067805 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 7 02:34:06.095268 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 02:34:06.103092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 02:34:06.111919 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 02:34:06.139383 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 02:34:06.158499 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 02:34:06.167792 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 02:34:06.167961 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 02:34:06.239882 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 02:34:06.263972 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 02:34:06.311315 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 02:34:06.354229 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 02:34:06.399875 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 02:34:06.459432 systemd-journald[1200]: Time spent on flushing to /var/log/journal/8d3b7494797149a9b1e1fe9fe7a62faa is 70.643ms for 1073 entries.
Mar 7 02:34:06.459432 systemd-journald[1200]: System Journal (/var/log/journal/8d3b7494797149a9b1e1fe9fe7a62faa) is 8M, max 195.6M, 187.6M free.
Mar 7 02:34:06.594817 systemd-journald[1200]: Received client request to flush runtime journal.
Mar 7 02:34:06.594875 kernel: loop0: detected capacity change from 0 to 217752
Mar 7 02:34:06.428591 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 02:34:06.459210 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 02:34:06.503604 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 02:34:06.520072 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 7 02:34:06.607062 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 02:34:06.627863 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Mar 7 02:34:06.627886 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Mar 7 02:34:06.660438 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 02:34:06.685070 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 02:34:07.794620 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 02:34:07.799603 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 02:34:07.817766 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 7 02:34:07.879974 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 02:34:07.980246 kernel: loop1: detected capacity change from 0 to 128560
Mar 7 02:34:07.986057 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 02:34:08.007257 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 02:34:08.507240 kernel: loop2: detected capacity change from 0 to 110984
Mar 7 02:34:08.540528 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Mar 7 02:34:08.540556 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Mar 7 02:34:08.588288 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 02:34:08.731184 kernel: loop3: detected capacity change from 0 to 217752
Mar 7 02:34:08.911862 kernel: loop4: detected capacity change from 0 to 128560
Mar 7 02:34:08.981404 kernel: loop5: detected capacity change from 0 to 110984
Mar 7 02:34:09.001602 (sd-merge)[1262]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 7 02:34:09.002954 (sd-merge)[1262]: Merged extensions into '/usr'.
Mar 7 02:34:09.044808 systemd[1]: Reload requested from client PID 1234 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 02:34:09.044828 systemd[1]: Reloading...
Mar 7 02:34:09.276239 zram_generator::config[1284]: No configuration found.
Mar 7 02:34:09.909011 ldconfig[1229]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 02:34:10.509036 systemd[1]: Reloading finished in 1463 ms.
Mar 7 02:34:10.572725 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 02:34:10.585575 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 02:34:10.612441 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 02:34:10.668294 systemd[1]: Starting ensure-sysext.service...
Mar 7 02:34:10.684949 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 02:34:10.711530 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 02:34:10.789053 systemd[1]: Reload requested from client PID 1326 ('systemctl') (unit ensure-sysext.service)...
Mar 7 02:34:10.789176 systemd[1]: Reloading...
Mar 7 02:34:11.116177 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 7 02:34:11.116239 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 7 02:34:11.116884 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 02:34:11.117428 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 02:34:11.126371 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 02:34:11.132924 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Mar 7 02:34:11.142523 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Mar 7 02:34:11.187213 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 02:34:11.195205 systemd-tmpfiles[1327]: Skipping /boot
Mar 7 02:34:11.198255 zram_generator::config[1353]: No configuration found.
Mar 7 02:34:11.218890 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 02:34:11.218939 systemd-tmpfiles[1327]: Skipping /boot
Mar 7 02:34:11.227833 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
Mar 7 02:34:12.494190 systemd[1]: Reloading finished in 1704 ms.
Mar 7 02:34:12.541739 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 02:34:12.620348 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 7 02:34:12.620477 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 02:34:12.622446 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 02:34:12.637326 kernel: ACPI: button: Power Button [PWRF]
Mar 7 02:34:12.788433 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 02:34:12.804952 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 7 02:34:12.805768 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 7 02:34:12.806002 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 7 02:34:12.841966 systemd[1]: Finished ensure-sysext.service.
Mar 7 02:34:12.918885 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 7 02:34:13.493470 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:34:13.497526 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 7 02:34:13.505303 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 02:34:13.511092 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 02:34:13.513477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 02:34:13.523208 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 02:34:13.545768 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 02:34:13.566226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 02:34:13.576547 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 02:34:13.579926 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 02:34:13.584429 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 7 02:34:13.599061 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 02:34:13.645431 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 02:34:13.670721 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 02:34:13.676999 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 7 02:34:13.684543 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 02:34:13.686595 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:34:13.690350 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 02:34:13.690649 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 02:34:13.696565 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 02:34:13.697014 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 02:34:13.710457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 02:34:13.726516 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 02:34:13.729913 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 02:34:13.730439 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 02:34:13.778331 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 02:34:13.778729 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 02:34:13.783921 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 02:34:14.138555 augenrules[1482]: No rules
Mar 7 02:34:14.620883 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 02:34:14.625443 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 7 02:34:14.631994 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 7 02:34:14.640046 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 02:34:14.677966 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 02:34:14.743210 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 02:34:14.743990 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 02:34:15.146496 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 02:34:15.188342 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 02:34:15.188771 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 02:34:15.204538 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 02:34:15.231284 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 02:34:15.241576 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 02:34:15.596722 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 02:34:15.831099 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 02:34:16.127786 systemd-resolved[1462]: Positive Trust Anchors:
Mar 7 02:34:16.130391 systemd-resolved[1462]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 02:34:16.130434 systemd-resolved[1462]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 02:34:16.205579 systemd-resolved[1462]: Defaulting to hostname 'linux'.
Mar 7 02:34:16.213971 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 02:34:16.217892 systemd-networkd[1458]: lo: Link UP
Mar 7 02:34:16.217898 systemd-networkd[1458]: lo: Gained carrier
Mar 7 02:34:16.225080 systemd-networkd[1458]: Enumeration completed
Mar 7 02:34:16.229761 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 02:34:16.230488 systemd-networkd[1458]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 02:34:16.234015 systemd-networkd[1458]: eth0: Link UP
Mar 7 02:34:16.236592 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 02:34:16.237796 systemd-networkd[1458]: eth0: Gained carrier
Mar 7 02:34:16.238067 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 02:34:16.270456 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 7 02:34:16.294291 systemd[1]: Reached target network.target - Network.
Mar 7 02:34:16.306862 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 02:34:16.322914 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 02:34:16.362240 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 02:34:16.374759 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 02:34:16.406503 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 7 02:34:16.423184 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 02:34:16.438084 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 02:34:16.438414 systemd[1]: Reached target paths.target - Path Units.
Mar 7 02:34:16.456445 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 02:34:16.472988 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 02:34:16.494892 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 02:34:16.512517 systemd-networkd[1458]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 7 02:34:16.517058 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection.
Mar 7 02:34:16.520782 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 02:34:17.833458 systemd-resolved[1462]: Clock change detected. Flushing caches.
Mar 7 02:34:17.833610 systemd-timesyncd[1463]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 7 02:34:17.833662 systemd-timesyncd[1463]: Initial clock synchronization to Sat 2026-03-07 02:34:17.833297 UTC.
Mar 7 02:34:17.841226 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 02:34:17.916910 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 02:34:18.041603 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 7 02:34:18.101676 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 7 02:34:18.135693 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 7 02:34:18.278424 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 02:34:18.287373 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 7 02:34:18.306116 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 7 02:34:18.318996 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 02:34:18.354746 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 02:34:18.375216 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 02:34:18.394066 systemd[1]: Reached target basic.target - Basic System.
Mar 7 02:34:18.406169 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 02:34:18.406248 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 02:34:18.540709 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 02:34:18.560960 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 02:34:18.622579 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 02:34:18.643847 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 02:34:18.654124 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 02:34:18.666271 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 02:34:18.673201 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 7 02:34:18.691489 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 02:34:18.694514 jq[1519]: false
Mar 7 02:34:18.714584 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 02:34:18.734287 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 02:34:18.737689 extend-filesystems[1520]: Found /dev/vda6
Mar 7 02:34:18.764400 extend-filesystems[1520]: Found /dev/vda9
Mar 7 02:34:18.776380 extend-filesystems[1520]: Checking size of /dev/vda9
Mar 7 02:34:18.797586 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 02:34:18.887289 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 02:34:18.890061 extend-filesystems[1520]: Resized partition /dev/vda9
Mar 7 02:34:18.910643 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 02:34:18.919110 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 02:34:18.934656 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 02:34:18.956134 extend-filesystems[1543]: resize2fs 1.47.3 (8-Jul-2025)
Mar 7 02:34:18.966489 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 02:34:18.964131 oslogin_cache_refresh[1521]: Refreshing passwd entry cache
Mar 7 02:34:18.986823 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing passwd entry cache
Mar 7 02:34:18.989719 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 02:34:18.997484 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 02:34:19.001014 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 02:34:19.004509 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 02:34:19.004999 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 02:34:19.029369 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 7 02:34:19.066126 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting users, quitting
Mar 7 02:34:19.066126 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 7 02:34:19.066126 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing group entry cache
Mar 7 02:34:19.064628 oslogin_cache_refresh[1521]: Failure getting users, quitting
Mar 7 02:34:19.064655 oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 7 02:34:19.064733 oslogin_cache_refresh[1521]: Refreshing group entry cache
Mar 7 02:34:19.077040 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 02:34:19.077935 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 02:34:19.098650 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting groups, quitting
Mar 7 02:34:19.098650 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 7 02:34:19.091178 oslogin_cache_refresh[1521]: Failure getting groups, quitting
Mar 7 02:34:19.091193 oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 7 02:34:19.134093 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 7 02:34:19.134680 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 7 02:34:19.141118 jq[1546]: true
Mar 7 02:34:19.234472 jq[1550]: true
Mar 7 02:34:19.237641 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 7 02:34:19.268714 tar[1548]: linux-amd64/LICENSE
Mar 7 02:34:19.271390 tar[1548]: linux-amd64/helm
Mar 7 02:34:19.284622 (ntainerd)[1560]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 02:34:19.298007 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 7 02:34:19.302495 update_engine[1544]: I20260307 02:34:19.302294 1544 main.cc:92] Flatcar Update Engine starting
Mar 7 02:34:19.377387 extend-filesystems[1543]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 7 02:34:19.377387 extend-filesystems[1543]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 7 02:34:19.377387 extend-filesystems[1543]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 7 02:34:19.379169 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 02:34:19.431076 extend-filesystems[1520]: Resized filesystem in /dev/vda9
Mar 7 02:34:19.381391 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 02:34:19.574129 systemd-networkd[1458]: eth0: Gained IPv6LL
Mar 7 02:34:19.610842 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 7 02:34:20.326551 systemd-logind[1539]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 7 02:34:20.326588 systemd-logind[1539]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 02:34:20.326959 systemd-logind[1539]: New seat seat0.
Mar 7 02:34:20.334284 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 02:34:20.394650 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 02:34:20.423505 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 02:34:20.484777 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 7 02:34:20.512697 dbus-daemon[1517]: [system] SELinux support is enabled
Mar 7 02:34:20.532721 update_engine[1544]: I20260307 02:34:20.532665 1544 update_check_scheduler.cc:74] Next update check in 3m5s
Mar 7 02:34:20.581626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 02:34:20.936721 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 7 02:34:20.953463 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 02:34:20.962962 bash[1583]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 02:34:20.996491 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 02:34:21.019182 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 7 02:34:21.020557 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 02:34:21.020601 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 02:34:21.023655 dbus-daemon[1517]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 7 02:34:21.025197 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 02:34:21.025222 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 02:34:21.031266 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 02:34:21.053495 kernel: kvm_amd: TSC scaling supported
Mar 7 02:34:21.053545 kernel: kvm_amd: Nested Virtualization enabled
Mar 7 02:34:21.053567 kernel: kvm_amd: Nested Paging enabled
Mar 7 02:34:21.060099 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 02:34:21.077766 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 7 02:34:21.079439 kernel: kvm_amd: PMU virtualization is disabled
Mar 7 02:34:21.125563 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 7 02:34:21.163571 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 7 02:34:21.164032 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 7 02:34:21.413048 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 7 02:34:21.489235 sshd_keygen[1545]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 7 02:34:21.687113 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 7 02:34:21.708675 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 7 02:34:21.726921 systemd[1]: Started sshd@0-10.0.0.118:22-10.0.0.1:58158.service - OpenSSH per-connection server daemon (10.0.0.1:58158).
Mar 7 02:34:22.313196 systemd[1]: issuegen.service: Deactivated successfully.
Mar 7 02:34:22.320615 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 7 02:34:22.429859 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 7 02:34:22.753436 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 7 02:34:22.774226 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 7 02:34:22.791598 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 7 02:34:23.337942 systemd[1]: Reached target getty.target - Login Prompts.
Mar 7 02:34:24.018477 locksmithd[1594]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 02:34:24.330773 kernel: EDAC MC: Ver: 3.0.0
Mar 7 02:34:24.774652 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 58158 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:34:25.085576 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:34:25.119261 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 7 02:34:25.152994 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 7 02:34:25.194780 systemd-logind[1539]: New session 1 of user core.
Mar 7 02:34:25.218069 containerd[1560]: time="2026-03-07T02:34:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 7 02:34:25.229472 containerd[1560]: time="2026-03-07T02:34:25.229401026Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 7 02:34:25.299055 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 7 02:34:25.318010 containerd[1560]: time="2026-03-07T02:34:25.317959333Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="102.661µs" Mar 7 02:34:25.318134 containerd[1560]: time="2026-03-07T02:34:25.318115795Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 7 02:34:25.318259 containerd[1560]: time="2026-03-07T02:34:25.318241339Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 7 02:34:25.318705 containerd[1560]: time="2026-03-07T02:34:25.318682403Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 7 02:34:25.318834 containerd[1560]: time="2026-03-07T02:34:25.318814950Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 7 02:34:25.321085 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 7 02:34:25.324181 containerd[1560]: time="2026-03-07T02:34:25.324155693Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 7 02:34:25.324473 containerd[1560]: time="2026-03-07T02:34:25.324450243Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 7 02:34:25.324538 containerd[1560]: time="2026-03-07T02:34:25.324523560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 7 02:34:25.325100 containerd[1560]: time="2026-03-07T02:34:25.325074660Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 7 02:34:25.325166 containerd[1560]: time="2026-03-07T02:34:25.325152505Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 7 02:34:25.325219 containerd[1560]: time="2026-03-07T02:34:25.325204852Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 7 02:34:25.325265 containerd[1560]: time="2026-03-07T02:34:25.325253253Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 7 02:34:25.325659 containerd[1560]: time="2026-03-07T02:34:25.325637931Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 7 02:34:25.326437 containerd[1560]: time="2026-03-07T02:34:25.326414842Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 7 02:34:25.326532 containerd[1560]: time="2026-03-07T02:34:25.326515980Z" 
level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 7 02:34:25.326641 containerd[1560]: time="2026-03-07T02:34:25.326624343Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 7 02:34:25.326793 containerd[1560]: time="2026-03-07T02:34:25.326775115Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 7 02:34:25.327534 containerd[1560]: time="2026-03-07T02:34:25.327484800Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 7 02:34:25.327731 containerd[1560]: time="2026-03-07T02:34:25.327712164Z" level=info msg="metadata content store policy set" policy=shared Mar 7 02:34:25.464205 containerd[1560]: time="2026-03-07T02:34:25.461201513Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 7 02:34:25.464205 containerd[1560]: time="2026-03-07T02:34:25.464047207Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 7 02:34:25.464205 containerd[1560]: time="2026-03-07T02:34:25.464103953Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 7 02:34:25.464501 containerd[1560]: time="2026-03-07T02:34:25.464249505Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 7 02:34:25.464501 containerd[1560]: time="2026-03-07T02:34:25.464427367Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 7 02:34:25.464550 containerd[1560]: time="2026-03-07T02:34:25.464504811Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 7 02:34:25.464550 containerd[1560]: 
time="2026-03-07T02:34:25.464529888Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 7 02:34:25.464550 containerd[1560]: time="2026-03-07T02:34:25.464546198Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 7 02:34:25.464622 containerd[1560]: time="2026-03-07T02:34:25.464560255Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 7 02:34:25.464622 containerd[1560]: time="2026-03-07T02:34:25.464575874Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 7 02:34:25.464622 containerd[1560]: time="2026-03-07T02:34:25.464591002Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 7 02:34:25.464622 containerd[1560]: time="2026-03-07T02:34:25.464607173Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 7 02:34:25.465085 containerd[1560]: time="2026-03-07T02:34:25.465030653Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 7 02:34:25.465179 containerd[1560]: time="2026-03-07T02:34:25.465128366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 7 02:34:25.465212 containerd[1560]: time="2026-03-07T02:34:25.465182236Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 7 02:34:25.465212 containerd[1560]: time="2026-03-07T02:34:25.465199588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 7 02:34:25.465268 containerd[1560]: time="2026-03-07T02:34:25.465213444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 7 02:34:25.465468 containerd[1560]: time="2026-03-07T02:34:25.465414169Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 7 02:34:25.477368 containerd[1560]: time="2026-03-07T02:34:25.475432710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 7 02:34:25.477368 containerd[1560]: time="2026-03-07T02:34:25.475563223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 7 02:34:25.477368 containerd[1560]: time="2026-03-07T02:34:25.475589052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 7 02:34:25.477368 containerd[1560]: time="2026-03-07T02:34:25.475604731Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 7 02:34:25.477368 containerd[1560]: time="2026-03-07T02:34:25.475624438Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 7 02:34:25.477368 containerd[1560]: time="2026-03-07T02:34:25.475955576Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 7 02:34:25.477368 containerd[1560]: time="2026-03-07T02:34:25.475979521Z" level=info msg="Start snapshots syncer" Mar 7 02:34:25.477368 containerd[1560]: time="2026-03-07T02:34:25.476105567Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 7 02:34:25.483061 (systemd)[1637]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 02:34:25.487191 containerd[1560]: time="2026-03-07T02:34:25.483724394Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 7 02:34:25.487833 containerd[1560]: time="2026-03-07T02:34:25.487807538Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 7 02:34:25.488126 containerd[1560]: time="2026-03-07T02:34:25.488099112Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 7 02:34:25.488467 containerd[1560]: time="2026-03-07T02:34:25.488445169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 7 02:34:25.488556 containerd[1560]: time="2026-03-07T02:34:25.488540246Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 7 02:34:25.488619 containerd[1560]: time="2026-03-07T02:34:25.488605077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 7 02:34:25.488678 containerd[1560]: time="2026-03-07T02:34:25.488664148Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 7 02:34:25.488738 containerd[1560]: time="2026-03-07T02:34:25.488721394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 7 02:34:25.488853 containerd[1560]: time="2026-03-07T02:34:25.488834646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 7 02:34:25.489007 containerd[1560]: time="2026-03-07T02:34:25.488988523Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 7 02:34:25.489153 containerd[1560]: time="2026-03-07T02:34:25.489135458Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 7 02:34:25.489222 containerd[1560]: time="2026-03-07T02:34:25.489208704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 7 02:34:25.493186 containerd[1560]: time="2026-03-07T02:34:25.493159762Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 7 02:34:25.493430 containerd[1560]: time="2026-03-07T02:34:25.493410401Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 7 02:34:25.493509 containerd[1560]: time="2026-03-07T02:34:25.493491632Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 7 02:34:25.493565 containerd[1560]: time="2026-03-07T02:34:25.493547497Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 7 02:34:25.493622 containerd[1560]: time="2026-03-07T02:34:25.493603791Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 7 02:34:25.493670 containerd[1560]: time="2026-03-07T02:34:25.493657612Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 7 02:34:25.493741 containerd[1560]: time="2026-03-07T02:34:25.493726641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 7 02:34:25.493942 containerd[1560]: time="2026-03-07T02:34:25.493829593Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 7 02:34:25.494055 containerd[1560]: time="2026-03-07T02:34:25.494039185Z" level=info msg="runtime interface created" Mar 7 02:34:25.494106 containerd[1560]: time="2026-03-07T02:34:25.494094528Z" level=info msg="created NRI interface" Mar 7 02:34:25.494162 containerd[1560]: time="2026-03-07T02:34:25.494148599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 7 02:34:25.494216 containerd[1560]: time="2026-03-07T02:34:25.494205315Z" level=info msg="Connect containerd service" Mar 7 02:34:25.494404 containerd[1560]: time="2026-03-07T02:34:25.494381454Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 02:34:25.505644 systemd-logind[1539]: 
New session c1 of user core. Mar 7 02:34:25.554700 containerd[1560]: time="2026-03-07T02:34:25.553073052Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 02:34:27.382816 systemd[1637]: Queued start job for default target default.target. Mar 7 02:34:27.404037 systemd[1637]: Created slice app.slice - User Application Slice. Mar 7 02:34:27.404091 systemd[1637]: Reached target paths.target - Paths. Mar 7 02:34:27.408431 systemd[1637]: Reached target timers.target - Timers. Mar 7 02:34:27.414800 systemd[1637]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 02:34:27.953185 systemd[1637]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 02:34:27.953540 systemd[1637]: Reached target sockets.target - Sockets. Mar 7 02:34:27.953604 systemd[1637]: Reached target basic.target - Basic System. Mar 7 02:34:27.953687 systemd[1637]: Reached target default.target - Main User Target. Mar 7 02:34:27.953861 systemd[1637]: Startup finished in 2.394s. Mar 7 02:34:27.964819 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 02:34:28.020453 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 02:34:28.124501 systemd[1]: Started sshd@1-10.0.0.118:22-10.0.0.1:45226.service - OpenSSH per-connection server daemon (10.0.0.1:45226). Mar 7 02:34:28.138079 containerd[1560]: time="2026-03-07T02:34:28.124284688Z" level=info msg="Start subscribing containerd event" Mar 7 02:34:28.138079 containerd[1560]: time="2026-03-07T02:34:28.130621581Z" level=info msg="Start recovering state" Mar 7 02:34:28.140426 containerd[1560]: time="2026-03-07T02:34:28.140373757Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 02:34:28.141726 containerd[1560]: time="2026-03-07T02:34:28.141702558Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 7 02:34:28.157531 containerd[1560]: time="2026-03-07T02:34:28.141230767Z" level=info msg="Start event monitor" Mar 7 02:34:28.157929 containerd[1560]: time="2026-03-07T02:34:28.157864097Z" level=info msg="Start cni network conf syncer for default" Mar 7 02:34:28.158268 containerd[1560]: time="2026-03-07T02:34:28.158244508Z" level=info msg="Start streaming server" Mar 7 02:34:28.159242 containerd[1560]: time="2026-03-07T02:34:28.159219268Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 7 02:34:28.159477 containerd[1560]: time="2026-03-07T02:34:28.159454768Z" level=info msg="runtime interface starting up..." Mar 7 02:34:28.159719 containerd[1560]: time="2026-03-07T02:34:28.159695538Z" level=info msg="starting plugins..." Mar 7 02:34:28.159957 containerd[1560]: time="2026-03-07T02:34:28.159934183Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 7 02:34:28.163366 containerd[1560]: time="2026-03-07T02:34:28.163280782Z" level=info msg="containerd successfully booted in 2.949039s" Mar 7 02:34:28.165496 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 02:34:28.312361 tar[1548]: linux-amd64/README.md Mar 7 02:34:28.640052 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 45226 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ Mar 7 02:34:28.652519 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:34:28.662866 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 02:34:28.675650 systemd-logind[1539]: New session 2 of user core. Mar 7 02:34:28.695845 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 7 02:34:28.776403 sshd[1669]: Connection closed by 10.0.0.1 port 45226 Mar 7 02:34:28.777585 sshd-session[1662]: pam_unix(sshd:session): session closed for user core Mar 7 02:34:28.807217 systemd[1]: sshd@1-10.0.0.118:22-10.0.0.1:45226.service: Deactivated successfully. Mar 7 02:34:28.817177 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 02:34:28.822542 systemd-logind[1539]: Session 2 logged out. Waiting for processes to exit. Mar 7 02:34:28.831789 systemd[1]: Started sshd@2-10.0.0.118:22-10.0.0.1:45232.service - OpenSSH per-connection server daemon (10.0.0.1:45232). Mar 7 02:34:28.839034 systemd-logind[1539]: Removed session 2. Mar 7 02:34:28.968704 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 45232 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ Mar 7 02:34:28.971693 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:34:29.012560 systemd-logind[1539]: New session 3 of user core. Mar 7 02:34:29.027280 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 02:34:29.086506 sshd[1681]: Connection closed by 10.0.0.1 port 45232 Mar 7 02:34:29.088447 sshd-session[1678]: pam_unix(sshd:session): session closed for user core Mar 7 02:34:29.102120 systemd-logind[1539]: Session 3 logged out. Waiting for processes to exit. Mar 7 02:34:29.104529 systemd[1]: sshd@2-10.0.0.118:22-10.0.0.1:45232.service: Deactivated successfully. Mar 7 02:34:29.115049 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 02:34:29.125002 systemd-logind[1539]: Removed session 3. Mar 7 02:34:31.614634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:34:31.617764 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 02:34:31.618501 systemd[1]: Startup finished in 5.859s (kernel) + 25.459s (initrd) + 29.036s (userspace) = 1min 355ms. 
Mar 7 02:34:31.657661 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:34:35.208651 kubelet[1691]: E0307 02:34:35.204499 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:34:35.218868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:34:35.219217 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:34:35.220018 systemd[1]: kubelet.service: Consumed 7.502s CPU time, 257.2M memory peak. Mar 7 02:34:39.137991 systemd[1]: Started sshd@3-10.0.0.118:22-10.0.0.1:59718.service - OpenSSH per-connection server daemon (10.0.0.1:59718). Mar 7 02:34:39.350174 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 59718 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ Mar 7 02:34:39.355636 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:34:39.377094 systemd-logind[1539]: New session 4 of user core. Mar 7 02:34:39.421660 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 02:34:39.490028 sshd[1704]: Connection closed by 10.0.0.1 port 59718 Mar 7 02:34:39.486227 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Mar 7 02:34:39.511792 systemd[1]: sshd@3-10.0.0.118:22-10.0.0.1:59718.service: Deactivated successfully. Mar 7 02:34:39.517804 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 02:34:39.520877 systemd-logind[1539]: Session 4 logged out. Waiting for processes to exit. Mar 7 02:34:39.530595 systemd[1]: Started sshd@4-10.0.0.118:22-10.0.0.1:59724.service - OpenSSH per-connection server daemon (10.0.0.1:59724). 
Mar 7 02:34:39.538850 systemd-logind[1539]: Removed session 4. Mar 7 02:34:39.638759 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 59724 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ Mar 7 02:34:39.647216 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:34:39.680968 systemd-logind[1539]: New session 5 of user core. Mar 7 02:34:39.706114 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 02:34:39.777096 sshd[1713]: Connection closed by 10.0.0.1 port 59724 Mar 7 02:34:39.776430 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Mar 7 02:34:39.807551 systemd[1]: sshd@4-10.0.0.118:22-10.0.0.1:59724.service: Deactivated successfully. Mar 7 02:34:39.819831 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 02:34:39.828218 systemd-logind[1539]: Session 5 logged out. Waiting for processes to exit. Mar 7 02:34:39.838884 systemd[1]: Started sshd@5-10.0.0.118:22-10.0.0.1:59734.service - OpenSSH per-connection server daemon (10.0.0.1:59734). Mar 7 02:34:39.849790 systemd-logind[1539]: Removed session 5. Mar 7 02:34:39.984786 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 59734 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ Mar 7 02:34:39.988276 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:34:40.003235 systemd-logind[1539]: New session 6 of user core. Mar 7 02:34:40.021450 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 02:34:40.070783 sshd[1722]: Connection closed by 10.0.0.1 port 59734 Mar 7 02:34:40.076447 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Mar 7 02:34:40.108057 systemd[1]: sshd@5-10.0.0.118:22-10.0.0.1:59734.service: Deactivated successfully. Mar 7 02:34:40.111582 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 02:34:40.113082 systemd-logind[1539]: Session 6 logged out. 
Waiting for processes to exit. Mar 7 02:34:40.122430 systemd[1]: Started sshd@6-10.0.0.118:22-10.0.0.1:59736.service - OpenSSH per-connection server daemon (10.0.0.1:59736). Mar 7 02:34:40.130056 systemd-logind[1539]: Removed session 6. Mar 7 02:34:40.237976 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 59736 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ Mar 7 02:34:40.241000 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:34:40.255124 systemd-logind[1539]: New session 7 of user core. Mar 7 02:34:40.270438 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 02:34:40.324393 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 02:34:40.324778 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 02:34:40.357182 sudo[1732]: pam_unix(sudo:session): session closed for user root Mar 7 02:34:40.361372 sshd[1731]: Connection closed by 10.0.0.1 port 59736 Mar 7 02:34:40.362646 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Mar 7 02:34:40.383489 systemd[1]: sshd@6-10.0.0.118:22-10.0.0.1:59736.service: Deactivated successfully. Mar 7 02:34:40.387218 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 02:34:40.388583 systemd-logind[1539]: Session 7 logged out. Waiting for processes to exit. Mar 7 02:34:40.396677 systemd[1]: Started sshd@7-10.0.0.118:22-10.0.0.1:59744.service - OpenSSH per-connection server daemon (10.0.0.1:59744). Mar 7 02:34:40.399063 systemd-logind[1539]: Removed session 7. Mar 7 02:34:40.521047 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 59744 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ Mar 7 02:34:40.525159 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:34:40.557840 systemd-logind[1539]: New session 8 of user core. 
Mar 7 02:34:40.582639 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 02:34:40.628865 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 02:34:40.629401 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 02:34:40.650492 sudo[1744]: pam_unix(sudo:session): session closed for user root Mar 7 02:34:40.672079 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 7 02:34:40.672572 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 02:34:40.704971 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 7 02:34:40.821290 augenrules[1766]: No rules Mar 7 02:34:40.823054 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 02:34:40.824521 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 7 02:34:40.832021 sudo[1743]: pam_unix(sudo:session): session closed for user root Mar 7 02:34:40.839087 sshd[1742]: Connection closed by 10.0.0.1 port 59744 Mar 7 02:34:40.842818 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Mar 7 02:34:40.857157 systemd[1]: sshd@7-10.0.0.118:22-10.0.0.1:59744.service: Deactivated successfully. Mar 7 02:34:40.860817 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 02:34:40.867426 systemd-logind[1539]: Session 8 logged out. Waiting for processes to exit. Mar 7 02:34:40.869469 systemd[1]: Started sshd@8-10.0.0.118:22-10.0.0.1:59750.service - OpenSSH per-connection server daemon (10.0.0.1:59750). Mar 7 02:34:40.877954 systemd-logind[1539]: Removed session 8. 
Mar 7 02:34:41.169650 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 59750 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ Mar 7 02:34:41.411406 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:34:41.905199 systemd-logind[1539]: New session 9 of user core. Mar 7 02:34:42.114597 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 02:34:42.418280 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 02:34:42.421120 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 02:34:44.565232 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 02:34:44.822240 (dockerd)[1799]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 02:34:45.290650 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 02:34:45.295269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:34:47.170364 dockerd[1799]: time="2026-03-07T02:34:47.168755023Z" level=info msg="Starting up" Mar 7 02:34:47.173749 dockerd[1799]: time="2026-03-07T02:34:47.173628005Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 7 02:34:47.665492 dockerd[1799]: time="2026-03-07T02:34:47.661969498Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 7 02:34:47.698164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:34:47.727294 (kubelet)[1831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:34:48.000136 dockerd[1799]: time="2026-03-07T02:34:47.999671332Z" level=info msg="Loading containers: start." 
Mar 7 02:34:48.019119 kubelet[1831]: E0307 02:34:48.016967 1831 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:34:48.037689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:34:48.038006 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:34:48.038686 systemd[1]: kubelet.service: Consumed 1.625s CPU time, 110.8M memory peak. Mar 7 02:34:48.064656 kernel: Initializing XFRM netlink socket Mar 7 02:34:49.587606 systemd-networkd[1458]: docker0: Link UP Mar 7 02:34:49.609814 dockerd[1799]: time="2026-03-07T02:34:49.609678516Z" level=info msg="Loading containers: done." Mar 7 02:34:49.726643 dockerd[1799]: time="2026-03-07T02:34:49.725257012Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 02:34:49.726643 dockerd[1799]: time="2026-03-07T02:34:49.725498692Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 7 02:34:49.726643 dockerd[1799]: time="2026-03-07T02:34:49.725723943Z" level=info msg="Initializing buildkit" Mar 7 02:34:49.912165 dockerd[1799]: time="2026-03-07T02:34:49.909722012Z" level=info msg="Completed buildkit initialization" Mar 7 02:34:49.945816 dockerd[1799]: time="2026-03-07T02:34:49.944127547Z" level=info msg="Daemon has completed initialization" Mar 7 02:34:49.945816 dockerd[1799]: time="2026-03-07T02:34:49.945259390Z" level=info msg="API listen on /run/docker.sock" Mar 7 02:34:49.944704 systemd[1]: Started docker.service - Docker Application Container Engine. 
Mar 7 02:34:52.717525 containerd[1560]: time="2026-03-07T02:34:52.713449588Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 7 02:34:54.300099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2499210685.mount: Deactivated successfully. Mar 7 02:34:58.307266 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 02:34:58.459580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:35:01.079967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:35:01.105991 (kubelet)[2102]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:35:01.389427 kubelet[2102]: E0307 02:35:01.389195 2102 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:35:01.399560 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:35:01.399788 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:35:01.401575 systemd[1]: kubelet.service: Consumed 1.031s CPU time, 110.2M memory peak. 
Mar 7 02:35:05.040666 containerd[1560]: time="2026-03-07T02:35:05.039225806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:05.053612 containerd[1560]: time="2026-03-07T02:35:05.053513802Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 7 02:35:05.066253 containerd[1560]: time="2026-03-07T02:35:05.064401775Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:05.081798 containerd[1560]: time="2026-03-07T02:35:05.081610801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:05.088009 containerd[1560]: time="2026-03-07T02:35:05.086432063Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 12.372519421s" Mar 7 02:35:05.088009 containerd[1560]: time="2026-03-07T02:35:05.087578822Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 7 02:35:05.115454 containerd[1560]: time="2026-03-07T02:35:05.115179763Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 7 02:35:06.282515 update_engine[1544]: I20260307 02:35:06.282019 1544 update_attempter.cc:509] Updating boot flags... 
Mar 7 02:35:09.772422 containerd[1560]: time="2026-03-07T02:35:09.770018151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:09.773623 containerd[1560]: time="2026-03-07T02:35:09.772794148Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 7 02:35:09.778784 containerd[1560]: time="2026-03-07T02:35:09.778026379Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:09.787701 containerd[1560]: time="2026-03-07T02:35:09.786953202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:09.791382 containerd[1560]: time="2026-03-07T02:35:09.791012892Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 4.675783766s" Mar 7 02:35:09.791382 containerd[1560]: time="2026-03-07T02:35:09.791081250Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 7 02:35:09.791382 containerd[1560]: time="2026-03-07T02:35:09.792089048Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 7 02:35:11.539688 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Mar 7 02:35:11.555247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:35:13.161275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:35:13.173181 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:35:13.624732 kubelet[2146]: E0307 02:35:13.624593 2146 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:35:13.632023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:35:13.632415 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:35:13.634020 systemd[1]: kubelet.service: Consumed 919ms CPU time, 111.3M memory peak. 
Mar 7 02:35:14.241435 containerd[1560]: time="2026-03-07T02:35:14.239480984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:14.245613 containerd[1560]: time="2026-03-07T02:35:14.245533452Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 7 02:35:14.251273 containerd[1560]: time="2026-03-07T02:35:14.249445673Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:14.263605 containerd[1560]: time="2026-03-07T02:35:14.260648281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:14.263605 containerd[1560]: time="2026-03-07T02:35:14.262040260Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 4.46987047s" Mar 7 02:35:14.263605 containerd[1560]: time="2026-03-07T02:35:14.262077880Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 7 02:35:14.280758 containerd[1560]: time="2026-03-07T02:35:14.279411160Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 7 02:35:17.321276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount635733942.mount: Deactivated successfully. 
Mar 7 02:35:18.329143 containerd[1560]: time="2026-03-07T02:35:18.328448980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:18.332166 containerd[1560]: time="2026-03-07T02:35:18.332048277Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 7 02:35:18.334540 containerd[1560]: time="2026-03-07T02:35:18.334393245Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:18.344859 containerd[1560]: time="2026-03-07T02:35:18.340179415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:18.344859 containerd[1560]: time="2026-03-07T02:35:18.341102318Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 4.06164955s" Mar 7 02:35:18.344859 containerd[1560]: time="2026-03-07T02:35:18.341141030Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 7 02:35:18.344859 containerd[1560]: time="2026-03-07T02:35:18.341702719Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 7 02:35:19.022823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2498561905.mount: Deactivated successfully. 
Mar 7 02:35:21.911614 containerd[1560]: time="2026-03-07T02:35:21.910998379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:21.917137 containerd[1560]: time="2026-03-07T02:35:21.917026551Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 7 02:35:21.922665 containerd[1560]: time="2026-03-07T02:35:21.922509136Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:21.933215 containerd[1560]: time="2026-03-07T02:35:21.932967748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:21.943704 containerd[1560]: time="2026-03-07T02:35:21.943638488Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 3.601896625s" Mar 7 02:35:21.948462 containerd[1560]: time="2026-03-07T02:35:21.948208038Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 7 02:35:21.955728 containerd[1560]: time="2026-03-07T02:35:21.955588385Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 7 02:35:22.572647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3302088696.mount: Deactivated successfully. 
Mar 7 02:35:22.597090 containerd[1560]: time="2026-03-07T02:35:22.596866324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:22.602231 containerd[1560]: time="2026-03-07T02:35:22.602029378Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 7 02:35:22.604977 containerd[1560]: time="2026-03-07T02:35:22.604772272Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:22.611159 containerd[1560]: time="2026-03-07T02:35:22.610073416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:22.612000 containerd[1560]: time="2026-03-07T02:35:22.611641786Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 655.950889ms" Mar 7 02:35:22.612000 containerd[1560]: time="2026-03-07T02:35:22.611719030Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 7 02:35:22.612931 containerd[1560]: time="2026-03-07T02:35:22.612798224Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 7 02:35:23.320724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1843285242.mount: Deactivated successfully. Mar 7 02:35:23.793213 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Mar 7 02:35:23.819773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:35:24.466207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:35:24.506104 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:35:24.689802 kubelet[2254]: E0307 02:35:24.689626 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:35:24.696675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:35:24.697693 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:35:24.699983 systemd[1]: kubelet.service: Consumed 343ms CPU time, 110.2M memory peak. 
Mar 7 02:35:27.274581 containerd[1560]: time="2026-03-07T02:35:27.272508055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:27.275794 containerd[1560]: time="2026-03-07T02:35:27.275741953Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 7 02:35:27.281803 containerd[1560]: time="2026-03-07T02:35:27.281714426Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:27.290989 containerd[1560]: time="2026-03-07T02:35:27.290610291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:35:27.292031 containerd[1560]: time="2026-03-07T02:35:27.291936426Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 4.679073421s" Mar 7 02:35:27.292031 containerd[1560]: time="2026-03-07T02:35:27.291996438Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 7 02:35:30.010218 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:35:30.015242 systemd[1]: kubelet.service: Consumed 343ms CPU time, 110.2M memory peak. Mar 7 02:35:30.028027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:35:30.127462 systemd[1]: Reload requested from client PID 2332 ('systemctl') (unit session-9.scope)... 
Mar 7 02:35:30.127627 systemd[1]: Reloading... Mar 7 02:35:30.348421 zram_generator::config[2378]: No configuration found. Mar 7 02:35:30.854099 systemd[1]: Reloading finished in 725 ms. Mar 7 02:35:31.005439 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 7 02:35:31.005603 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 7 02:35:31.008233 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:35:31.008295 systemd[1]: kubelet.service: Consumed 201ms CPU time, 98.4M memory peak. Mar 7 02:35:31.023734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:35:31.456002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:35:31.473006 (kubelet)[2423]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 02:35:31.815981 kubelet[2423]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 02:35:32.025259 kubelet[2423]: I0307 02:35:32.025143 2423 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 7 02:35:32.025259 kubelet[2423]: I0307 02:35:32.025222 2423 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 02:35:32.025564 kubelet[2423]: I0307 02:35:32.025374 2423 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 02:35:32.025564 kubelet[2423]: I0307 02:35:32.025386 2423 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 02:35:32.025799 kubelet[2423]: I0307 02:35:32.025722 2423 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 02:35:32.143987 kubelet[2423]: E0307 02:35:32.143134 2423 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 02:35:32.147619 kubelet[2423]: I0307 02:35:32.146438 2423 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 02:35:32.170286 kubelet[2423]: I0307 02:35:32.169749 2423 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 7 02:35:32.181969 kubelet[2423]: I0307 02:35:32.181360 2423 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 02:35:32.183288 kubelet[2423]: I0307 02:35:32.183149 2423 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 02:35:32.183641 kubelet[2423]: I0307 02:35:32.183211 2423 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 02:35:32.183828 kubelet[2423]: I0307 02:35:32.183646 2423 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 02:35:32.183828 
kubelet[2423]: I0307 02:35:32.183661 2423 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 02:35:32.183978 kubelet[2423]: I0307 02:35:32.183842 2423 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 02:35:32.189820 kubelet[2423]: I0307 02:35:32.189726 2423 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 02:35:32.190530 kubelet[2423]: I0307 02:35:32.190301 2423 kubelet.go:482] "Attempting to sync node with API server" Mar 7 02:35:32.190530 kubelet[2423]: I0307 02:35:32.190441 2423 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 02:35:32.190622 kubelet[2423]: I0307 02:35:32.190591 2423 kubelet.go:394] "Adding apiserver pod source" Mar 7 02:35:32.190653 kubelet[2423]: I0307 02:35:32.190640 2423 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 02:35:32.198495 kubelet[2423]: I0307 02:35:32.196974 2423 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 7 02:35:32.201174 kubelet[2423]: I0307 02:35:32.201073 2423 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 02:35:32.201236 kubelet[2423]: I0307 02:35:32.201213 2423 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 02:35:32.201570 kubelet[2423]: W0307 02:35:32.201522 2423 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 7 02:35:32.215201 kubelet[2423]: I0307 02:35:32.213976 2423 server.go:1257] "Started kubelet" Mar 7 02:35:32.216424 kubelet[2423]: I0307 02:35:32.215588 2423 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 02:35:32.216424 kubelet[2423]: I0307 02:35:32.215824 2423 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 02:35:32.216424 kubelet[2423]: I0307 02:35:32.216282 2423 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 02:35:32.216685 kubelet[2423]: I0307 02:35:32.216539 2423 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 02:35:32.221223 kubelet[2423]: I0307 02:35:32.219621 2423 server.go:317] "Adding debug handlers to kubelet server" Mar 7 02:35:32.227251 kubelet[2423]: I0307 02:35:32.227159 2423 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 02:35:32.230402 kubelet[2423]: I0307 02:35:32.228155 2423 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 02:35:32.239390 kubelet[2423]: I0307 02:35:32.235271 2423 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 7 02:35:32.239390 kubelet[2423]: E0307 02:35:32.235595 2423 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:35:32.240041 kubelet[2423]: I0307 02:35:32.240023 2423 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 02:35:32.240517 kubelet[2423]: I0307 02:35:32.240501 2423 reconciler.go:29] "Reconciler: start to sync state" Mar 7 02:35:32.245930 kubelet[2423]: I0307 02:35:32.244121 2423 factory.go:223] Registration of the systemd container factory successfully Mar 7 02:35:32.245930 kubelet[2423]: I0307 02:35:32.244244 2423 factory.go:221] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 02:35:32.245930 kubelet[2423]: E0307 02:35:32.240413 2423 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.118:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.118:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6e87a6537551 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 02:35:32.213839185 +0000 UTC m=+0.730276709,LastTimestamp:2026-03-07 02:35:32.213839185 +0000 UTC m=+0.730276709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 02:35:32.250000 kubelet[2423]: E0307 02:35:32.246645 2423 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="200ms" Mar 7 02:35:32.253938 kubelet[2423]: E0307 02:35:32.251283 2423 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 02:35:32.255353 kubelet[2423]: I0307 02:35:32.254621 2423 factory.go:223] Registration of the containerd container factory successfully Mar 7 02:35:32.309639 kubelet[2423]: I0307 02:35:32.307126 2423 cpu_manager.go:225] "Starting" policy="none" Mar 7 02:35:32.309639 kubelet[2423]: I0307 02:35:32.307170 2423 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 02:35:32.309639 kubelet[2423]: I0307 02:35:32.307196 2423 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 02:35:32.323511 kubelet[2423]: I0307 02:35:32.322839 2423 policy_none.go:50] "Start" Mar 7 02:35:32.323606 kubelet[2423]: I0307 02:35:32.323545 2423 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 02:35:32.323718 kubelet[2423]: I0307 02:35:32.323660 2423 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 02:35:32.328850 kubelet[2423]: I0307 02:35:32.328666 2423 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 02:35:32.333404 kubelet[2423]: I0307 02:35:32.331499 2423 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 7 02:35:32.333404 kubelet[2423]: I0307 02:35:32.331558 2423 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 02:35:32.333404 kubelet[2423]: I0307 02:35:32.331643 2423 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 02:35:32.333404 kubelet[2423]: E0307 02:35:32.331754 2423 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 02:35:32.339877 kubelet[2423]: I0307 02:35:32.335415 2423 policy_none.go:44] "Start" Mar 7 02:35:32.339877 kubelet[2423]: E0307 02:35:32.336597 2423 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:35:32.362968 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 7 02:35:32.404819 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 02:35:32.419241 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 7 02:35:32.435552 kubelet[2423]: E0307 02:35:32.434740 2423 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 02:35:32.435552 kubelet[2423]: I0307 02:35:32.435179 2423 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 02:35:32.435781 kubelet[2423]: E0307 02:35:32.435761 2423 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 02:35:32.438232 kubelet[2423]: I0307 02:35:32.435283 2423 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 02:35:32.438232 kubelet[2423]: E0307 02:35:32.436783 2423 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:35:32.438232 kubelet[2423]: I0307 02:35:32.437111 2423 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 02:35:32.440472 kubelet[2423]: E0307 02:35:32.440419 2423 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 02:35:32.440545 kubelet[2423]: E0307 02:35:32.440485 2423 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 02:35:32.448183 kubelet[2423]: E0307 02:35:32.448023 2423 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="400ms" Mar 7 02:35:32.542135 kubelet[2423]: I0307 02:35:32.541468 2423 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:35:32.542135 kubelet[2423]: E0307 02:35:32.541983 2423 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Mar 7 02:35:32.652016 kubelet[2423]: I0307 02:35:32.649487 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57ff640db504a5953ca85bc5087b1092-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"57ff640db504a5953ca85bc5087b1092\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:35:32.652016 kubelet[2423]: I0307 02:35:32.649541 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57ff640db504a5953ca85bc5087b1092-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"57ff640db504a5953ca85bc5087b1092\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:35:32.652016 kubelet[2423]: I0307 02:35:32.649570 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/57ff640db504a5953ca85bc5087b1092-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"57ff640db504a5953ca85bc5087b1092\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:35:32.680144 systemd[1]: Created slice kubepods-burstable-pod57ff640db504a5953ca85bc5087b1092.slice - libcontainer container kubepods-burstable-pod57ff640db504a5953ca85bc5087b1092.slice. Mar 7 02:35:32.703845 kubelet[2423]: E0307 02:35:32.703269 2423 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:35:32.711851 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. Mar 7 02:35:32.723933 kubelet[2423]: E0307 02:35:32.723790 2423 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:35:32.731758 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. 
Mar 7 02:35:32.737291 kubelet[2423]: E0307 02:35:32.737206 2423 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:35:32.744692 kubelet[2423]: I0307 02:35:32.744568 2423 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:35:32.745697 kubelet[2423]: E0307 02:35:32.745299 2423 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Mar 7 02:35:32.750679 kubelet[2423]: I0307 02:35:32.750580 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 7 02:35:32.752040 kubelet[2423]: I0307 02:35:32.750691 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:35:32.752040 kubelet[2423]: I0307 02:35:32.751647 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:35:32.752040 kubelet[2423]: I0307 02:35:32.751669 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:35:32.752040 kubelet[2423]: I0307 02:35:32.751690 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:35:32.752040 kubelet[2423]: I0307 02:35:32.751713 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:35:32.850680 kubelet[2423]: E0307 02:35:32.849632 2423 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="800ms" Mar 7 02:35:33.011947 kubelet[2423]: E0307 02:35:33.011607 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:33.017099 containerd[1560]: time="2026-03-07T02:35:33.015819166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:57ff640db504a5953ca85bc5087b1092,Namespace:kube-system,Attempt:0,}" Mar 7 02:35:33.041587 kubelet[2423]: E0307 02:35:33.041252 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:33.048034 kubelet[2423]: E0307 02:35:33.047817 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:33.048589 containerd[1560]: time="2026-03-07T02:35:33.048502899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 7 02:35:33.050960 containerd[1560]: time="2026-03-07T02:35:33.050821238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 7 02:35:33.148140 kubelet[2423]: I0307 02:35:33.147870 2423 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:35:33.148402 kubelet[2423]: E0307 02:35:33.148299 2423 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Mar 7 02:35:33.652091 kubelet[2423]: E0307 02:35:33.651535 2423 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="1.6s" Mar 7 02:35:33.859253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount493481848.mount: Deactivated successfully. 
Mar 7 02:35:33.925728 containerd[1560]: time="2026-03-07T02:35:33.922812039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:35:33.938448 containerd[1560]: time="2026-03-07T02:35:33.937432422Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 7 02:35:33.953428 containerd[1560]: time="2026-03-07T02:35:33.949719402Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:35:33.956143 kubelet[2423]: I0307 02:35:33.955666 2423 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:35:33.956613 kubelet[2423]: E0307 02:35:33.956150 2423 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Mar 7 02:35:33.968706 containerd[1560]: time="2026-03-07T02:35:33.966245297Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:35:33.973227 containerd[1560]: time="2026-03-07T02:35:33.972109176Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:35:33.981408 containerd[1560]: time="2026-03-07T02:35:33.981179561Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 7 02:35:33.995064 containerd[1560]: time="2026-03-07T02:35:33.992183886Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:35:33.998770 containerd[1560]: time="2026-03-07T02:35:33.996672998Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 929.185667ms" Mar 7 02:35:33.999718 containerd[1560]: time="2026-03-07T02:35:33.999477904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 7 02:35:34.007855 containerd[1560]: time="2026-03-07T02:35:34.006698071Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 974.05321ms" Mar 7 02:35:34.043402 containerd[1560]: time="2026-03-07T02:35:34.043118108Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 989.579756ms" Mar 7 02:35:34.149397 containerd[1560]: time="2026-03-07T02:35:34.146302899Z" level=info msg="connecting to shim fd26256077d7e2290a466fa157a9c9ed4ef0a1e522a48e404a8a09a44c6fb11c" address="unix:///run/containerd/s/73e31db556c1990de36d9e9481865bb5d0ad87748c9281737d3ff4ba4c1c40a8" namespace=k8s.io protocol=ttrpc version=3 Mar 7 02:35:34.160427 containerd[1560]: 
time="2026-03-07T02:35:34.160290906Z" level=info msg="connecting to shim 3ec245df886760647656a35fa60dc093be19798abb93454f2f39b9834cb180d8" address="unix:///run/containerd/s/5ec92b74616d90455c415c7c84050d9565366f17081a74ea7047c1b4fba3c873" namespace=k8s.io protocol=ttrpc version=3 Mar 7 02:35:34.185126 containerd[1560]: time="2026-03-07T02:35:34.184528109Z" level=info msg="connecting to shim a5cd09a232f05f6c3781b30badb2a3769608085f54ee5ca0e00996102e92ff3a" address="unix:///run/containerd/s/8134bf8956515cf49eb336629257f4ff7d99100e4a5e50a77ad695c5f1895317" namespace=k8s.io protocol=ttrpc version=3 Mar 7 02:35:34.282013 kubelet[2423]: E0307 02:35:34.280977 2423 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 02:35:34.344685 systemd[1]: Started cri-containerd-3ec245df886760647656a35fa60dc093be19798abb93454f2f39b9834cb180d8.scope - libcontainer container 3ec245df886760647656a35fa60dc093be19798abb93454f2f39b9834cb180d8. Mar 7 02:35:34.364591 systemd[1]: Started cri-containerd-a5cd09a232f05f6c3781b30badb2a3769608085f54ee5ca0e00996102e92ff3a.scope - libcontainer container a5cd09a232f05f6c3781b30badb2a3769608085f54ee5ca0e00996102e92ff3a. Mar 7 02:35:34.371438 systemd[1]: Started cri-containerd-fd26256077d7e2290a466fa157a9c9ed4ef0a1e522a48e404a8a09a44c6fb11c.scope - libcontainer container fd26256077d7e2290a466fa157a9c9ed4ef0a1e522a48e404a8a09a44c6fb11c. 
Mar 7 02:35:34.602501 containerd[1560]: time="2026-03-07T02:35:34.602069214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:57ff640db504a5953ca85bc5087b1092,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd26256077d7e2290a466fa157a9c9ed4ef0a1e522a48e404a8a09a44c6fb11c\"" Mar 7 02:35:34.607387 kubelet[2423]: E0307 02:35:34.607014 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:34.608006 containerd[1560]: time="2026-03-07T02:35:34.607973491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ec245df886760647656a35fa60dc093be19798abb93454f2f39b9834cb180d8\"" Mar 7 02:35:34.609938 kubelet[2423]: E0307 02:35:34.609865 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:34.612123 containerd[1560]: time="2026-03-07T02:35:34.611976232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5cd09a232f05f6c3781b30badb2a3769608085f54ee5ca0e00996102e92ff3a\"" Mar 7 02:35:34.617491 kubelet[2423]: E0307 02:35:34.617263 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:34.622068 containerd[1560]: time="2026-03-07T02:35:34.622036515Z" level=info msg="CreateContainer within sandbox \"fd26256077d7e2290a466fa157a9c9ed4ef0a1e522a48e404a8a09a44c6fb11c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 02:35:34.632258 containerd[1560]: 
time="2026-03-07T02:35:34.630569999Z" level=info msg="CreateContainer within sandbox \"3ec245df886760647656a35fa60dc093be19798abb93454f2f39b9834cb180d8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 02:35:34.638288 containerd[1560]: time="2026-03-07T02:35:34.637495766Z" level=info msg="CreateContainer within sandbox \"a5cd09a232f05f6c3781b30badb2a3769608085f54ee5ca0e00996102e92ff3a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 02:35:34.670222 containerd[1560]: time="2026-03-07T02:35:34.670098398Z" level=info msg="Container f8e73e6e706688cb7a7e928c9a7fc36c382337fdbb56d67c43c48b19779e6d30: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:35:34.687523 containerd[1560]: time="2026-03-07T02:35:34.687404776Z" level=info msg="Container d9a191c3f9437fbb791acb8392d460187b627b83de2b58420e723c677be2c02b: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:35:34.711841 containerd[1560]: time="2026-03-07T02:35:34.709528414Z" level=info msg="Container eee172c9098025bb9ca394b1974984e963c95a9e2a02807af8c56cbdc064e924: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:35:34.800653 containerd[1560]: time="2026-03-07T02:35:34.800424998Z" level=info msg="CreateContainer within sandbox \"fd26256077d7e2290a466fa157a9c9ed4ef0a1e522a48e404a8a09a44c6fb11c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f8e73e6e706688cb7a7e928c9a7fc36c382337fdbb56d67c43c48b19779e6d30\"" Mar 7 02:35:34.802537 containerd[1560]: time="2026-03-07T02:35:34.802512756Z" level=info msg="StartContainer for \"f8e73e6e706688cb7a7e928c9a7fc36c382337fdbb56d67c43c48b19779e6d30\"" Mar 7 02:35:34.811652 containerd[1560]: time="2026-03-07T02:35:34.811620056Z" level=info msg="connecting to shim f8e73e6e706688cb7a7e928c9a7fc36c382337fdbb56d67c43c48b19779e6d30" address="unix:///run/containerd/s/73e31db556c1990de36d9e9481865bb5d0ad87748c9281737d3ff4ba4c1c40a8" protocol=ttrpc version=3 Mar 7 02:35:34.849079 containerd[1560]: 
time="2026-03-07T02:35:34.847834185Z" level=info msg="CreateContainer within sandbox \"a5cd09a232f05f6c3781b30badb2a3769608085f54ee5ca0e00996102e92ff3a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eee172c9098025bb9ca394b1974984e963c95a9e2a02807af8c56cbdc064e924\"" Mar 7 02:35:34.850768 containerd[1560]: time="2026-03-07T02:35:34.850720541Z" level=info msg="StartContainer for \"eee172c9098025bb9ca394b1974984e963c95a9e2a02807af8c56cbdc064e924\"" Mar 7 02:35:34.853240 containerd[1560]: time="2026-03-07T02:35:34.852862704Z" level=info msg="CreateContainer within sandbox \"3ec245df886760647656a35fa60dc093be19798abb93454f2f39b9834cb180d8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d9a191c3f9437fbb791acb8392d460187b627b83de2b58420e723c677be2c02b\"" Mar 7 02:35:34.854028 containerd[1560]: time="2026-03-07T02:35:34.853924973Z" level=info msg="StartContainer for \"d9a191c3f9437fbb791acb8392d460187b627b83de2b58420e723c677be2c02b\"" Mar 7 02:35:34.854762 containerd[1560]: time="2026-03-07T02:35:34.854696274Z" level=info msg="connecting to shim eee172c9098025bb9ca394b1974984e963c95a9e2a02807af8c56cbdc064e924" address="unix:///run/containerd/s/8134bf8956515cf49eb336629257f4ff7d99100e4a5e50a77ad695c5f1895317" protocol=ttrpc version=3 Mar 7 02:35:34.859730 containerd[1560]: time="2026-03-07T02:35:34.859239908Z" level=info msg="connecting to shim d9a191c3f9437fbb791acb8392d460187b627b83de2b58420e723c677be2c02b" address="unix:///run/containerd/s/5ec92b74616d90455c415c7c84050d9565366f17081a74ea7047c1b4fba3c873" protocol=ttrpc version=3 Mar 7 02:35:34.876690 systemd[1]: Started cri-containerd-f8e73e6e706688cb7a7e928c9a7fc36c382337fdbb56d67c43c48b19779e6d30.scope - libcontainer container f8e73e6e706688cb7a7e928c9a7fc36c382337fdbb56d67c43c48b19779e6d30. 
Mar 7 02:35:34.926743 systemd[1]: Started cri-containerd-d9a191c3f9437fbb791acb8392d460187b627b83de2b58420e723c677be2c02b.scope - libcontainer container d9a191c3f9437fbb791acb8392d460187b627b83de2b58420e723c677be2c02b. Mar 7 02:35:34.961629 systemd[1]: Started cri-containerd-eee172c9098025bb9ca394b1974984e963c95a9e2a02807af8c56cbdc064e924.scope - libcontainer container eee172c9098025bb9ca394b1974984e963c95a9e2a02807af8c56cbdc064e924. Mar 7 02:35:35.065212 containerd[1560]: time="2026-03-07T02:35:35.064173690Z" level=info msg="StartContainer for \"f8e73e6e706688cb7a7e928c9a7fc36c382337fdbb56d67c43c48b19779e6d30\" returns successfully" Mar 7 02:35:35.106270 containerd[1560]: time="2026-03-07T02:35:35.105766618Z" level=info msg="StartContainer for \"d9a191c3f9437fbb791acb8392d460187b627b83de2b58420e723c677be2c02b\" returns successfully" Mar 7 02:35:35.137986 containerd[1560]: time="2026-03-07T02:35:35.137042503Z" level=info msg="StartContainer for \"eee172c9098025bb9ca394b1974984e963c95a9e2a02807af8c56cbdc064e924\" returns successfully" Mar 7 02:35:35.383072 kubelet[2423]: E0307 02:35:35.382864 2423 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:35:35.383722 kubelet[2423]: E0307 02:35:35.383114 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:35.403999 kubelet[2423]: E0307 02:35:35.403268 2423 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:35:35.403999 kubelet[2423]: E0307 02:35:35.403497 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:35.416377 kubelet[2423]: E0307 
02:35:35.414038 2423 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:35:35.416377 kubelet[2423]: E0307 02:35:35.414539 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:35.563855 kubelet[2423]: I0307 02:35:35.563668 2423 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:35:36.422999 kubelet[2423]: E0307 02:35:36.422813 2423 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:35:36.424054 kubelet[2423]: E0307 02:35:36.423028 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:36.424543 kubelet[2423]: E0307 02:35:36.424466 2423 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:35:36.424610 kubelet[2423]: E0307 02:35:36.424595 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:37.425794 kubelet[2423]: E0307 02:35:37.422655 2423 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:35:37.425794 kubelet[2423]: E0307 02:35:37.422835 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:37.917389 kubelet[2423]: E0307 02:35:37.916617 2423 kubelet.go:3336] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:35:37.917389 kubelet[2423]: E0307 02:35:37.916838 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:38.018726 kubelet[2423]: E0307 02:35:38.018631 2423 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 7 02:35:38.123492 kubelet[2423]: I0307 02:35:38.123272 2423 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 7 02:35:38.139395 kubelet[2423]: I0307 02:35:38.139177 2423 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 02:35:38.175826 kubelet[2423]: E0307 02:35:38.175729 2423 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 7 02:35:38.176236 kubelet[2423]: I0307 02:35:38.176032 2423 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 02:35:38.181232 kubelet[2423]: E0307 02:35:38.181148 2423 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 7 02:35:38.181232 kubelet[2423]: I0307 02:35:38.181171 2423 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 02:35:38.183514 kubelet[2423]: E0307 02:35:38.183297 2423 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-localhost" Mar 7 02:35:38.205829 kubelet[2423]: I0307 02:35:38.205290 2423 apiserver.go:52] "Watching apiserver" Mar 7 02:35:38.240383 kubelet[2423]: I0307 02:35:38.240173 2423 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 02:35:42.170051 kubelet[2423]: I0307 02:35:42.167976 2423 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 02:35:42.201385 kubelet[2423]: E0307 02:35:42.201145 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:42.454239 kubelet[2423]: E0307 02:35:42.453548 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:43.990630 systemd[1]: Reload requested from client PID 2717 ('systemctl') (unit session-9.scope)... Mar 7 02:35:43.990674 systemd[1]: Reloading... Mar 7 02:35:44.130525 zram_generator::config[2757]: No configuration found. Mar 7 02:35:44.622709 systemd[1]: Reloading finished in 631 ms. Mar 7 02:35:44.701604 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:35:44.726396 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 02:35:44.726754 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:35:44.726821 systemd[1]: kubelet.service: Consumed 1.654s CPU time, 130.2M memory peak. Mar 7 02:35:44.733260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:35:45.190124 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 02:35:45.213442 (kubelet)[2804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 02:35:45.338198 kubelet[2804]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 02:35:45.369117 kubelet[2804]: I0307 02:35:45.367545 2804 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 7 02:35:45.369117 kubelet[2804]: I0307 02:35:45.367595 2804 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 02:35:45.369117 kubelet[2804]: I0307 02:35:45.367617 2804 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 02:35:45.369117 kubelet[2804]: I0307 02:35:45.367625 2804 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 02:35:45.369117 kubelet[2804]: I0307 02:35:45.368015 2804 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 02:35:45.372979 kubelet[2804]: I0307 02:35:45.371815 2804 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 02:35:45.381213 kubelet[2804]: I0307 02:35:45.380859 2804 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 02:35:45.403602 kubelet[2804]: I0307 02:35:45.402261 2804 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 7 02:35:45.426424 kubelet[2804]: I0307 02:35:45.425700 2804 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 02:35:45.426424 kubelet[2804]: I0307 02:35:45.426247 2804 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 02:35:45.426673 kubelet[2804]: I0307 02:35:45.426285 2804 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 02:35:45.426673 kubelet[2804]: I0307 02:35:45.426611 2804 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 02:35:45.426673 
kubelet[2804]: I0307 02:35:45.426623 2804 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 02:35:45.426673 kubelet[2804]: I0307 02:35:45.426652 2804 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 02:35:45.426880 kubelet[2804]: I0307 02:35:45.426864 2804 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 02:35:45.428934 kubelet[2804]: I0307 02:35:45.428821 2804 kubelet.go:482] "Attempting to sync node with API server" Mar 7 02:35:45.428934 kubelet[2804]: I0307 02:35:45.428886 2804 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 02:35:45.429021 kubelet[2804]: I0307 02:35:45.428962 2804 kubelet.go:394] "Adding apiserver pod source" Mar 7 02:35:45.429021 kubelet[2804]: I0307 02:35:45.428976 2804 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 02:35:45.435519 kubelet[2804]: I0307 02:35:45.434210 2804 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 7 02:35:45.442867 kubelet[2804]: I0307 02:35:45.441871 2804 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 02:35:45.442867 kubelet[2804]: I0307 02:35:45.441968 2804 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 02:35:45.463027 kubelet[2804]: I0307 02:35:45.462892 2804 server.go:1257] "Started kubelet" Mar 7 02:35:45.464687 kubelet[2804]: I0307 02:35:45.463595 2804 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 02:35:45.464687 kubelet[2804]: I0307 02:35:45.464031 2804 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 02:35:45.466275 kubelet[2804]: I0307 02:35:45.466255 2804 server.go:254] "Starting 
to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 02:35:45.469121 kubelet[2804]: I0307 02:35:45.469104 2804 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 02:35:45.471750 kubelet[2804]: I0307 02:35:45.471556 2804 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 02:35:45.480388 kubelet[2804]: I0307 02:35:45.479500 2804 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 7 02:35:45.485067 kubelet[2804]: I0307 02:35:45.484810 2804 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 02:35:45.492796 kubelet[2804]: I0307 02:35:45.491809 2804 server.go:317] "Adding debug handlers to kubelet server" Mar 7 02:35:45.492796 kubelet[2804]: I0307 02:35:45.492046 2804 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 02:35:45.492796 kubelet[2804]: I0307 02:35:45.492608 2804 reconciler.go:29] "Reconciler: start to sync state" Mar 7 02:35:45.505299 kubelet[2804]: I0307 02:35:45.504775 2804 factory.go:223] Registration of the systemd container factory successfully Mar 7 02:35:45.514783 kubelet[2804]: I0307 02:35:45.514750 2804 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 02:35:45.521136 kubelet[2804]: E0307 02:35:45.521111 2804 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 02:35:45.523384 kubelet[2804]: I0307 02:35:45.521803 2804 factory.go:223] Registration of the containerd container factory successfully Mar 7 02:35:45.606727 kubelet[2804]: I0307 02:35:45.606538 2804 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 7 02:35:45.610757 kubelet[2804]: I0307 02:35:45.610701 2804 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 7 02:35:45.610875 kubelet[2804]: I0307 02:35:45.610841 2804 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 02:35:45.611040 kubelet[2804]: I0307 02:35:45.611006 2804 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 02:35:45.611452 kubelet[2804]: E0307 02:35:45.611092 2804 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 02:35:45.664386 kubelet[2804]: I0307 02:35:45.662982 2804 cpu_manager.go:225] "Starting" policy="none" Mar 7 02:35:45.664386 kubelet[2804]: I0307 02:35:45.663005 2804 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 02:35:45.664386 kubelet[2804]: I0307 02:35:45.663029 2804 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 02:35:45.664386 kubelet[2804]: I0307 02:35:45.663212 2804 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 7 02:35:45.664386 kubelet[2804]: I0307 02:35:45.663227 2804 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 7 02:35:45.664386 kubelet[2804]: I0307 02:35:45.663260 2804 policy_none.go:50] "Start" Mar 7 02:35:45.664386 kubelet[2804]: I0307 02:35:45.663274 2804 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 02:35:45.664386 kubelet[2804]: I0307 02:35:45.663287 2804 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 02:35:45.664386 kubelet[2804]: I0307 02:35:45.663549 2804 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 02:35:45.664386 kubelet[2804]: I0307 02:35:45.663610 2804 policy_none.go:44] 
"Start" Mar 7 02:35:45.680794 kubelet[2804]: E0307 02:35:45.678570 2804 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 02:35:45.680794 kubelet[2804]: I0307 02:35:45.678776 2804 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 02:35:45.680794 kubelet[2804]: I0307 02:35:45.678789 2804 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 02:35:45.680794 kubelet[2804]: I0307 02:35:45.680088 2804 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 02:35:45.690835 kubelet[2804]: E0307 02:35:45.690639 2804 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 02:35:45.714246 kubelet[2804]: I0307 02:35:45.712812 2804 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 02:35:45.714246 kubelet[2804]: I0307 02:35:45.713969 2804 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 02:35:45.717714 kubelet[2804]: I0307 02:35:45.717662 2804 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 02:35:45.767636 kubelet[2804]: E0307 02:35:45.767508 2804 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 7 02:35:45.814168 kubelet[2804]: I0307 02:35:45.813072 2804 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:35:45.869095 kubelet[2804]: I0307 02:35:45.868655 2804 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Mar 7 02:35:45.869095 kubelet[2804]: I0307 02:35:45.868803 2804 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 7 
02:35:45.895495 kubelet[2804]: I0307 02:35:45.894742 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57ff640db504a5953ca85bc5087b1092-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"57ff640db504a5953ca85bc5087b1092\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:35:45.895495 kubelet[2804]: I0307 02:35:45.894795 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57ff640db504a5953ca85bc5087b1092-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"57ff640db504a5953ca85bc5087b1092\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:35:45.895495 kubelet[2804]: I0307 02:35:45.894823 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:35:45.895495 kubelet[2804]: I0307 02:35:45.894844 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:35:45.895495 kubelet[2804]: I0307 02:35:45.894867 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 7 02:35:45.895768 
kubelet[2804]: I0307 02:35:45.894884 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57ff640db504a5953ca85bc5087b1092-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"57ff640db504a5953ca85bc5087b1092\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:35:45.895768 kubelet[2804]: I0307 02:35:45.894955 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:35:45.895768 kubelet[2804]: I0307 02:35:45.894976 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:35:45.895768 kubelet[2804]: I0307 02:35:45.894995 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:35:46.071564 kubelet[2804]: E0307 02:35:46.068688 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:46.071564 kubelet[2804]: E0307 02:35:46.070032 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:46.071564 kubelet[2804]: E0307 02:35:46.070401 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:46.444850 kubelet[2804]: I0307 02:35:46.444074 2804 apiserver.go:52] "Watching apiserver" Mar 7 02:35:46.493537 kubelet[2804]: I0307 02:35:46.493277 2804 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 02:35:46.654030 kubelet[2804]: E0307 02:35:46.651629 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:46.654030 kubelet[2804]: E0307 02:35:46.651866 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:46.654576 kubelet[2804]: E0307 02:35:46.654554 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:47.083778 kubelet[2804]: I0307 02:35:47.083010 2804 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.082888042 podStartE2EDuration="2.082888042s" podCreationTimestamp="2026-03-07 02:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:35:47.078088967 +0000 UTC m=+1.850868308" watchObservedRunningTime="2026-03-07 02:35:47.082888042 +0000 UTC m=+1.855667374" Mar 7 02:35:47.173410 kubelet[2804]: I0307 02:35:47.172646 2804 pod_startup_latency_tracker.go:108] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.172625414 podStartE2EDuration="2.172625414s" podCreationTimestamp="2026-03-07 02:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:35:47.115039876 +0000 UTC m=+1.887819238" watchObservedRunningTime="2026-03-07 02:35:47.172625414 +0000 UTC m=+1.945404744" Mar 7 02:35:47.656680 kubelet[2804]: E0307 02:35:47.656136 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:47.669821 kubelet[2804]: E0307 02:35:47.668840 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:48.692123 kubelet[2804]: E0307 02:35:48.691397 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:49.346711 kubelet[2804]: I0307 02:35:49.343222 2804 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 02:35:49.347721 containerd[1560]: time="2026-03-07T02:35:49.345540664Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 02:35:49.349616 kubelet[2804]: I0307 02:35:49.349494 2804 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 02:35:50.154668 systemd[1]: Created slice kubepods-besteffort-podef082a4a_9841_499a_a4ea_91a5500c0ec4.slice - libcontainer container kubepods-besteffort-podef082a4a_9841_499a_a4ea_91a5500c0ec4.slice. 
Mar 7 02:35:50.268735 kubelet[2804]: I0307 02:35:50.264112 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef082a4a-9841-499a-a4ea-91a5500c0ec4-xtables-lock\") pod \"kube-proxy-gr5vv\" (UID: \"ef082a4a-9841-499a-a4ea-91a5500c0ec4\") " pod="kube-system/kube-proxy-gr5vv" Mar 7 02:35:50.268735 kubelet[2804]: I0307 02:35:50.264200 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ef082a4a-9841-499a-a4ea-91a5500c0ec4-kube-proxy\") pod \"kube-proxy-gr5vv\" (UID: \"ef082a4a-9841-499a-a4ea-91a5500c0ec4\") " pod="kube-system/kube-proxy-gr5vv" Mar 7 02:35:50.268735 kubelet[2804]: I0307 02:35:50.264221 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef082a4a-9841-499a-a4ea-91a5500c0ec4-lib-modules\") pod \"kube-proxy-gr5vv\" (UID: \"ef082a4a-9841-499a-a4ea-91a5500c0ec4\") " pod="kube-system/kube-proxy-gr5vv" Mar 7 02:35:50.268735 kubelet[2804]: I0307 02:35:50.264239 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65wg9\" (UniqueName: \"kubernetes.io/projected/ef082a4a-9841-499a-a4ea-91a5500c0ec4-kube-api-access-65wg9\") pod \"kube-proxy-gr5vv\" (UID: \"ef082a4a-9841-499a-a4ea-91a5500c0ec4\") " pod="kube-system/kube-proxy-gr5vv" Mar 7 02:35:50.494169 kubelet[2804]: E0307 02:35:50.493895 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:50.504664 containerd[1560]: time="2026-03-07T02:35:50.504518231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gr5vv,Uid:ef082a4a-9841-499a-a4ea-91a5500c0ec4,Namespace:kube-system,Attempt:0,}" Mar 7 
02:35:50.604048 containerd[1560]: time="2026-03-07T02:35:50.603550109Z" level=info msg="connecting to shim 67715bfe4611937eee10fbb759c82cf277d7e6b9c42cb941a9e8b6a2d223dc68" address="unix:///run/containerd/s/2c9aa4105c37c4a0ed995302749f321c3632ac4cb29b88a79406325cbe40f584" namespace=k8s.io protocol=ttrpc version=3 Mar 7 02:35:50.752867 systemd[1]: Created slice kubepods-besteffort-pod6ef883c3_de96_41e9_9ea8_07262915f585.slice - libcontainer container kubepods-besteffort-pod6ef883c3_de96_41e9_9ea8_07262915f585.slice. Mar 7 02:35:50.775376 kubelet[2804]: I0307 02:35:50.775243 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6ef883c3-de96-41e9-9ea8-07262915f585-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-hqscp\" (UID: \"6ef883c3-de96-41e9-9ea8-07262915f585\") " pod="tigera-operator/tigera-operator-6cf4cccc57-hqscp" Mar 7 02:35:50.775376 kubelet[2804]: I0307 02:35:50.775303 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgkgl\" (UniqueName: \"kubernetes.io/projected/6ef883c3-de96-41e9-9ea8-07262915f585-kube-api-access-xgkgl\") pod \"tigera-operator-6cf4cccc57-hqscp\" (UID: \"6ef883c3-de96-41e9-9ea8-07262915f585\") " pod="tigera-operator/tigera-operator-6cf4cccc57-hqscp" Mar 7 02:35:50.804487 systemd[1]: Started cri-containerd-67715bfe4611937eee10fbb759c82cf277d7e6b9c42cb941a9e8b6a2d223dc68.scope - libcontainer container 67715bfe4611937eee10fbb759c82cf277d7e6b9c42cb941a9e8b6a2d223dc68. 
Mar 7 02:35:50.983677 containerd[1560]: time="2026-03-07T02:35:50.982611229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gr5vv,Uid:ef082a4a-9841-499a-a4ea-91a5500c0ec4,Namespace:kube-system,Attempt:0,} returns sandbox id \"67715bfe4611937eee10fbb759c82cf277d7e6b9c42cb941a9e8b6a2d223dc68\"" Mar 7 02:35:50.985111 kubelet[2804]: E0307 02:35:50.984613 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:51.005525 containerd[1560]: time="2026-03-07T02:35:51.002955912Z" level=info msg="CreateContainer within sandbox \"67715bfe4611937eee10fbb759c82cf277d7e6b9c42cb941a9e8b6a2d223dc68\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 02:35:51.050014 containerd[1560]: time="2026-03-07T02:35:51.044521317Z" level=info msg="Container fa99962e7dd7e0bbd1613ddf1e65450742ad6d1bf31f06680122ad039f5743a5: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:35:51.068628 containerd[1560]: time="2026-03-07T02:35:51.068507284Z" level=info msg="CreateContainer within sandbox \"67715bfe4611937eee10fbb759c82cf277d7e6b9c42cb941a9e8b6a2d223dc68\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fa99962e7dd7e0bbd1613ddf1e65450742ad6d1bf31f06680122ad039f5743a5\"" Mar 7 02:35:51.070674 containerd[1560]: time="2026-03-07T02:35:51.069518382Z" level=info msg="StartContainer for \"fa99962e7dd7e0bbd1613ddf1e65450742ad6d1bf31f06680122ad039f5743a5\"" Mar 7 02:35:51.083639 containerd[1560]: time="2026-03-07T02:35:51.076186544Z" level=info msg="connecting to shim fa99962e7dd7e0bbd1613ddf1e65450742ad6d1bf31f06680122ad039f5743a5" address="unix:///run/containerd/s/2c9aa4105c37c4a0ed995302749f321c3632ac4cb29b88a79406325cbe40f584" protocol=ttrpc version=3 Mar 7 02:35:51.093701 containerd[1560]: time="2026-03-07T02:35:51.091273174Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-hqscp,Uid:6ef883c3-de96-41e9-9ea8-07262915f585,Namespace:tigera-operator,Attempt:0,}" Mar 7 02:35:51.156576 systemd[1]: Started cri-containerd-fa99962e7dd7e0bbd1613ddf1e65450742ad6d1bf31f06680122ad039f5743a5.scope - libcontainer container fa99962e7dd7e0bbd1613ddf1e65450742ad6d1bf31f06680122ad039f5743a5. Mar 7 02:35:51.189048 containerd[1560]: time="2026-03-07T02:35:51.187405129Z" level=info msg="connecting to shim 2c09ccc574fb2070d9d96ac1ee0149c5c159c4623ceb63ecbf1efd5b5738ef52" address="unix:///run/containerd/s/9f012037bcd5e59189bce05d728b603abd170b7447f7a75458e57097bed7d5f9" namespace=k8s.io protocol=ttrpc version=3 Mar 7 02:35:51.289282 systemd[1]: Started cri-containerd-2c09ccc574fb2070d9d96ac1ee0149c5c159c4623ceb63ecbf1efd5b5738ef52.scope - libcontainer container 2c09ccc574fb2070d9d96ac1ee0149c5c159c4623ceb63ecbf1efd5b5738ef52. Mar 7 02:35:51.430105 containerd[1560]: time="2026-03-07T02:35:51.430020094Z" level=info msg="StartContainer for \"fa99962e7dd7e0bbd1613ddf1e65450742ad6d1bf31f06680122ad039f5743a5\" returns successfully" Mar 7 02:35:51.494859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2850992437.mount: Deactivated successfully. 
Mar 7 02:35:51.546090 containerd[1560]: time="2026-03-07T02:35:51.545821604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-hqscp,Uid:6ef883c3-de96-41e9-9ea8-07262915f585,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2c09ccc574fb2070d9d96ac1ee0149c5c159c4623ceb63ecbf1efd5b5738ef52\"" Mar 7 02:35:51.551667 containerd[1560]: time="2026-03-07T02:35:51.551602578Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 02:35:51.747649 kubelet[2804]: E0307 02:35:51.747052 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:35:51.791824 kubelet[2804]: I0307 02:35:51.788581 2804 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-gr5vv" podStartSLOduration=1.788561523 podStartE2EDuration="1.788561523s" podCreationTimestamp="2026-03-07 02:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:35:51.783564161 +0000 UTC m=+6.556343502" watchObservedRunningTime="2026-03-07 02:35:51.788561523 +0000 UTC m=+6.561340854" Mar 7 02:35:52.741297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2083598130.mount: Deactivated successfully. 
Mar 7 02:36:02.577829 kubelet[2804]: E0307 02:36:02.574381 2804 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.926s" Mar 7 02:36:03.187155 containerd[1560]: time="2026-03-07T02:36:03.178255437Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:36:03.199033 containerd[1560]: time="2026-03-07T02:36:03.190293964Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 7 02:36:03.204182 containerd[1560]: time="2026-03-07T02:36:03.203807515Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:36:03.310296 containerd[1560]: time="2026-03-07T02:36:03.309993131Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:36:03.342093 containerd[1560]: time="2026-03-07T02:36:03.341800286Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 11.789889803s" Mar 7 02:36:03.343722 containerd[1560]: time="2026-03-07T02:36:03.343159241Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 7 02:36:03.490669 containerd[1560]: time="2026-03-07T02:36:03.482152767Z" level=info msg="CreateContainer within sandbox \"2c09ccc574fb2070d9d96ac1ee0149c5c159c4623ceb63ecbf1efd5b5738ef52\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 02:36:03.878230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1574914297.mount: Deactivated successfully. Mar 7 02:36:03.906250 containerd[1560]: time="2026-03-07T02:36:03.905714670Z" level=info msg="Container 7aabd3398edbf2b335b9787154da8f646243eceb3dcd018d599640adc2dc2cab: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:36:04.027694 containerd[1560]: time="2026-03-07T02:36:04.027288089Z" level=info msg="CreateContainer within sandbox \"2c09ccc574fb2070d9d96ac1ee0149c5c159c4623ceb63ecbf1efd5b5738ef52\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7aabd3398edbf2b335b9787154da8f646243eceb3dcd018d599640adc2dc2cab\"" Mar 7 02:36:04.031968 containerd[1560]: time="2026-03-07T02:36:04.031823096Z" level=info msg="StartContainer for \"7aabd3398edbf2b335b9787154da8f646243eceb3dcd018d599640adc2dc2cab\"" Mar 7 02:36:04.075197 containerd[1560]: time="2026-03-07T02:36:04.074980852Z" level=info msg="connecting to shim 7aabd3398edbf2b335b9787154da8f646243eceb3dcd018d599640adc2dc2cab" address="unix:///run/containerd/s/9f012037bcd5e59189bce05d728b603abd170b7447f7a75458e57097bed7d5f9" protocol=ttrpc version=3 Mar 7 02:36:04.899398 systemd[1]: Started cri-containerd-7aabd3398edbf2b335b9787154da8f646243eceb3dcd018d599640adc2dc2cab.scope - libcontainer container 7aabd3398edbf2b335b9787154da8f646243eceb3dcd018d599640adc2dc2cab. 
Mar 7 02:36:07.336547 kubelet[2804]: E0307 02:36:07.328573 2804 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.505s" Mar 7 02:36:07.465702 containerd[1560]: time="2026-03-07T02:36:07.461857064Z" level=error msg="get state for 7aabd3398edbf2b335b9787154da8f646243eceb3dcd018d599640adc2dc2cab" error="context deadline exceeded" Mar 7 02:36:07.475630 containerd[1560]: time="2026-03-07T02:36:07.463288393Z" level=warning msg="unknown status" status=0 Mar 7 02:36:08.516723 containerd[1560]: time="2026-03-07T02:36:08.515738643Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Mar 7 02:36:09.140185 containerd[1560]: time="2026-03-07T02:36:09.139625158Z" level=info msg="StartContainer for \"7aabd3398edbf2b335b9787154da8f646243eceb3dcd018d599640adc2dc2cab\" returns successfully" Mar 7 02:36:09.689678 kubelet[2804]: I0307 02:36:09.689075 2804 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-hqscp" podStartSLOduration=7.866543955 podStartE2EDuration="19.688965788s" podCreationTimestamp="2026-03-07 02:35:50 +0000 UTC" firstStartedPulling="2026-03-07 02:35:51.548796969 +0000 UTC m=+6.321576310" lastFinishedPulling="2026-03-07 02:36:03.371218811 +0000 UTC m=+18.143998143" observedRunningTime="2026-03-07 02:36:09.687157236 +0000 UTC m=+24.459936567" watchObservedRunningTime="2026-03-07 02:36:09.688965788 +0000 UTC m=+24.461745120" Mar 7 02:36:14.312703 systemd[1]: cri-containerd-7aabd3398edbf2b335b9787154da8f646243eceb3dcd018d599640adc2dc2cab.scope: Deactivated successfully. 
Mar 7 02:36:14.327124 containerd[1560]: time="2026-03-07T02:36:14.324252138Z" level=info msg="received container exit event container_id:\"7aabd3398edbf2b335b9787154da8f646243eceb3dcd018d599640adc2dc2cab\" id:\"7aabd3398edbf2b335b9787154da8f646243eceb3dcd018d599640adc2dc2cab\" pid:3139 exit_status:1 exited_at:{seconds:1772850974 nanos:322027255}" Mar 7 02:36:14.532731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7aabd3398edbf2b335b9787154da8f646243eceb3dcd018d599640adc2dc2cab-rootfs.mount: Deactivated successfully. Mar 7 02:36:15.713704 kubelet[2804]: I0307 02:36:15.713570 2804 scope.go:122] "RemoveContainer" containerID="7aabd3398edbf2b335b9787154da8f646243eceb3dcd018d599640adc2dc2cab" Mar 7 02:36:15.768109 containerd[1560]: time="2026-03-07T02:36:15.768008047Z" level=info msg="CreateContainer within sandbox \"2c09ccc574fb2070d9d96ac1ee0149c5c159c4623ceb63ecbf1efd5b5738ef52\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Mar 7 02:36:15.841240 containerd[1560]: time="2026-03-07T02:36:15.841135748Z" level=info msg="Container c68ccca666324c281d4587c92477654fc544ddd3f18fafc8670d8a58e54e8081: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:36:15.852591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount185913869.mount: Deactivated successfully. 
Mar 7 02:36:15.886460 containerd[1560]: time="2026-03-07T02:36:15.886067531Z" level=info msg="CreateContainer within sandbox \"2c09ccc574fb2070d9d96ac1ee0149c5c159c4623ceb63ecbf1efd5b5738ef52\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"c68ccca666324c281d4587c92477654fc544ddd3f18fafc8670d8a58e54e8081\"" Mar 7 02:36:15.894573 containerd[1560]: time="2026-03-07T02:36:15.891704340Z" level=info msg="StartContainer for \"c68ccca666324c281d4587c92477654fc544ddd3f18fafc8670d8a58e54e8081\"" Mar 7 02:36:15.895462 containerd[1560]: time="2026-03-07T02:36:15.895103681Z" level=info msg="connecting to shim c68ccca666324c281d4587c92477654fc544ddd3f18fafc8670d8a58e54e8081" address="unix:///run/containerd/s/9f012037bcd5e59189bce05d728b603abd170b7447f7a75458e57097bed7d5f9" protocol=ttrpc version=3 Mar 7 02:36:15.964529 systemd[1]: Started cri-containerd-c68ccca666324c281d4587c92477654fc544ddd3f18fafc8670d8a58e54e8081.scope - libcontainer container c68ccca666324c281d4587c92477654fc544ddd3f18fafc8670d8a58e54e8081. Mar 7 02:36:16.121406 containerd[1560]: time="2026-03-07T02:36:16.120829166Z" level=info msg="StartContainer for \"c68ccca666324c281d4587c92477654fc544ddd3f18fafc8670d8a58e54e8081\" returns successfully" Mar 7 02:36:19.323204 sudo[1779]: pam_unix(sudo:session): session closed for user root Mar 7 02:36:19.335137 sshd[1778]: Connection closed by 10.0.0.1 port 59750 Mar 7 02:36:19.344597 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Mar 7 02:36:19.373118 systemd[1]: sshd@8-10.0.0.118:22-10.0.0.1:59750.service: Deactivated successfully. Mar 7 02:36:19.374723 systemd-logind[1539]: Session 9 logged out. Waiting for processes to exit. Mar 7 02:36:19.383380 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 02:36:19.384812 systemd[1]: session-9.scope: Consumed 8.897s CPU time, 233.8M memory peak. Mar 7 02:36:19.397077 systemd-logind[1539]: Removed session 9. 
Mar 7 02:36:28.440572 systemd[1]: Created slice kubepods-besteffort-pod28a853bf_02c5_4abf_bb49_e610acf765e0.slice - libcontainer container kubepods-besteffort-pod28a853bf_02c5_4abf_bb49_e610acf765e0.slice. Mar 7 02:36:28.454493 kubelet[2804]: I0307 02:36:28.454452 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28a853bf-02c5-4abf-bb49-e610acf765e0-tigera-ca-bundle\") pod \"calico-typha-874d4cd96-mtjp8\" (UID: \"28a853bf-02c5-4abf-bb49-e610acf765e0\") " pod="calico-system/calico-typha-874d4cd96-mtjp8" Mar 7 02:36:28.455533 kubelet[2804]: I0307 02:36:28.455399 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/28a853bf-02c5-4abf-bb49-e610acf765e0-typha-certs\") pod \"calico-typha-874d4cd96-mtjp8\" (UID: \"28a853bf-02c5-4abf-bb49-e610acf765e0\") " pod="calico-system/calico-typha-874d4cd96-mtjp8" Mar 7 02:36:28.455533 kubelet[2804]: I0307 02:36:28.455466 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgpjh\" (UniqueName: \"kubernetes.io/projected/28a853bf-02c5-4abf-bb49-e610acf765e0-kube-api-access-kgpjh\") pod \"calico-typha-874d4cd96-mtjp8\" (UID: \"28a853bf-02c5-4abf-bb49-e610acf765e0\") " pod="calico-system/calico-typha-874d4cd96-mtjp8" Mar 7 02:36:28.768550 kubelet[2804]: E0307 02:36:28.765434 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:36:28.777812 containerd[1560]: time="2026-03-07T02:36:28.777768822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-874d4cd96-mtjp8,Uid:28a853bf-02c5-4abf-bb49-e610acf765e0,Namespace:calico-system,Attempt:0,}" Mar 7 02:36:29.001113 systemd[1]: Created slice 
kubepods-besteffort-pod5906d80d_3038_400b_ba70_b238cc5a588d.slice - libcontainer container kubepods-besteffort-pod5906d80d_3038_400b_ba70_b238cc5a588d.slice. Mar 7 02:36:29.007835 containerd[1560]: time="2026-03-07T02:36:29.002676815Z" level=info msg="connecting to shim 96c9bb200907a0b422ca5bd1870ac6e3cea2e019155c8c58da3180b44c99eea5" address="unix:///run/containerd/s/9b7821a127b77cefe0638392677d07dc1dcc95cff99615d42b0bc3aaff4986f9" namespace=k8s.io protocol=ttrpc version=3 Mar 7 02:36:29.082086 kubelet[2804]: I0307 02:36:29.080075 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/5906d80d-3038-400b-ba70-b238cc5a588d-nodeproc\") pod \"calico-node-fg5wh\" (UID: \"5906d80d-3038-400b-ba70-b238cc5a588d\") " pod="calico-system/calico-node-fg5wh" Mar 7 02:36:29.082086 kubelet[2804]: I0307 02:36:29.080175 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5906d80d-3038-400b-ba70-b238cc5a588d-var-run-calico\") pod \"calico-node-fg5wh\" (UID: \"5906d80d-3038-400b-ba70-b238cc5a588d\") " pod="calico-system/calico-node-fg5wh" Mar 7 02:36:29.082086 kubelet[2804]: I0307 02:36:29.080197 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5906d80d-3038-400b-ba70-b238cc5a588d-xtables-lock\") pod \"calico-node-fg5wh\" (UID: \"5906d80d-3038-400b-ba70-b238cc5a588d\") " pod="calico-system/calico-node-fg5wh" Mar 7 02:36:29.082086 kubelet[2804]: I0307 02:36:29.080223 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5906d80d-3038-400b-ba70-b238cc5a588d-tigera-ca-bundle\") pod \"calico-node-fg5wh\" (UID: \"5906d80d-3038-400b-ba70-b238cc5a588d\") " 
pod="calico-system/calico-node-fg5wh" Mar 7 02:36:29.082086 kubelet[2804]: I0307 02:36:29.080242 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5906d80d-3038-400b-ba70-b238cc5a588d-flexvol-driver-host\") pod \"calico-node-fg5wh\" (UID: \"5906d80d-3038-400b-ba70-b238cc5a588d\") " pod="calico-system/calico-node-fg5wh" Mar 7 02:36:29.082833 kubelet[2804]: I0307 02:36:29.080262 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5906d80d-3038-400b-ba70-b238cc5a588d-node-certs\") pod \"calico-node-fg5wh\" (UID: \"5906d80d-3038-400b-ba70-b238cc5a588d\") " pod="calico-system/calico-node-fg5wh" Mar 7 02:36:29.082833 kubelet[2804]: I0307 02:36:29.080285 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/5906d80d-3038-400b-ba70-b238cc5a588d-bpffs\") pod \"calico-node-fg5wh\" (UID: \"5906d80d-3038-400b-ba70-b238cc5a588d\") " pod="calico-system/calico-node-fg5wh" Mar 7 02:36:29.082833 kubelet[2804]: I0307 02:36:29.080800 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5906d80d-3038-400b-ba70-b238cc5a588d-cni-bin-dir\") pod \"calico-node-fg5wh\" (UID: \"5906d80d-3038-400b-ba70-b238cc5a588d\") " pod="calico-system/calico-node-fg5wh" Mar 7 02:36:29.082833 kubelet[2804]: I0307 02:36:29.080826 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5906d80d-3038-400b-ba70-b238cc5a588d-cni-log-dir\") pod \"calico-node-fg5wh\" (UID: \"5906d80d-3038-400b-ba70-b238cc5a588d\") " pod="calico-system/calico-node-fg5wh" Mar 7 02:36:29.082833 kubelet[2804]: I0307 02:36:29.080851 
2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5906d80d-3038-400b-ba70-b238cc5a588d-cni-net-dir\") pod \"calico-node-fg5wh\" (UID: \"5906d80d-3038-400b-ba70-b238cc5a588d\") " pod="calico-system/calico-node-fg5wh" Mar 7 02:36:29.083085 kubelet[2804]: I0307 02:36:29.080930 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnbjj\" (UniqueName: \"kubernetes.io/projected/5906d80d-3038-400b-ba70-b238cc5a588d-kube-api-access-mnbjj\") pod \"calico-node-fg5wh\" (UID: \"5906d80d-3038-400b-ba70-b238cc5a588d\") " pod="calico-system/calico-node-fg5wh" Mar 7 02:36:29.083085 kubelet[2804]: I0307 02:36:29.080955 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5906d80d-3038-400b-ba70-b238cc5a588d-lib-modules\") pod \"calico-node-fg5wh\" (UID: \"5906d80d-3038-400b-ba70-b238cc5a588d\") " pod="calico-system/calico-node-fg5wh" Mar 7 02:36:29.083085 kubelet[2804]: I0307 02:36:29.080973 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5906d80d-3038-400b-ba70-b238cc5a588d-var-lib-calico\") pod \"calico-node-fg5wh\" (UID: \"5906d80d-3038-400b-ba70-b238cc5a588d\") " pod="calico-system/calico-node-fg5wh" Mar 7 02:36:29.083085 kubelet[2804]: I0307 02:36:29.080991 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/5906d80d-3038-400b-ba70-b238cc5a588d-sys-fs\") pod \"calico-node-fg5wh\" (UID: \"5906d80d-3038-400b-ba70-b238cc5a588d\") " pod="calico-system/calico-node-fg5wh" Mar 7 02:36:29.083085 kubelet[2804]: I0307 02:36:29.081019 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5906d80d-3038-400b-ba70-b238cc5a588d-policysync\") pod \"calico-node-fg5wh\" (UID: \"5906d80d-3038-400b-ba70-b238cc5a588d\") " pod="calico-system/calico-node-fg5wh" Mar 7 02:36:29.103931 kubelet[2804]: E0307 02:36:29.103551 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:36:29.182583 kubelet[2804]: I0307 02:36:29.182447 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b5e01c21-f499-422c-9fbe-c4c5a2c9ac71-varrun\") pod \"csi-node-driver-wphnt\" (UID: \"b5e01c21-f499-422c-9fbe-c4c5a2c9ac71\") " pod="calico-system/csi-node-driver-wphnt" Mar 7 02:36:29.182583 kubelet[2804]: I0307 02:36:29.182546 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5e01c21-f499-422c-9fbe-c4c5a2c9ac71-kubelet-dir\") pod \"csi-node-driver-wphnt\" (UID: \"b5e01c21-f499-422c-9fbe-c4c5a2c9ac71\") " pod="calico-system/csi-node-driver-wphnt" Mar 7 02:36:29.184115 kubelet[2804]: I0307 02:36:29.182654 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5e01c21-f499-422c-9fbe-c4c5a2c9ac71-socket-dir\") pod \"csi-node-driver-wphnt\" (UID: \"b5e01c21-f499-422c-9fbe-c4c5a2c9ac71\") " pod="calico-system/csi-node-driver-wphnt" Mar 7 02:36:29.184115 kubelet[2804]: I0307 02:36:29.182692 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/b5e01c21-f499-422c-9fbe-c4c5a2c9ac71-registration-dir\") pod \"csi-node-driver-wphnt\" (UID: \"b5e01c21-f499-422c-9fbe-c4c5a2c9ac71\") " pod="calico-system/csi-node-driver-wphnt" Mar 7 02:36:29.184115 kubelet[2804]: I0307 02:36:29.182724 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tngbr\" (UniqueName: \"kubernetes.io/projected/b5e01c21-f499-422c-9fbe-c4c5a2c9ac71-kube-api-access-tngbr\") pod \"csi-node-driver-wphnt\" (UID: \"b5e01c21-f499-422c-9fbe-c4c5a2c9ac71\") " pod="calico-system/csi-node-driver-wphnt" Mar 7 02:36:29.186913 systemd[1]: Started cri-containerd-96c9bb200907a0b422ca5bd1870ac6e3cea2e019155c8c58da3180b44c99eea5.scope - libcontainer container 96c9bb200907a0b422ca5bd1870ac6e3cea2e019155c8c58da3180b44c99eea5. Mar 7 02:36:29.196244 kubelet[2804]: E0307 02:36:29.195576 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.196602 kubelet[2804]: W0307 02:36:29.196461 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.196983 kubelet[2804]: E0307 02:36:29.196960 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:29.208128 kubelet[2804]: E0307 02:36:29.208106 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.208233 kubelet[2804]: W0307 02:36:29.208214 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.208431 kubelet[2804]: E0307 02:36:29.208410 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:29.210556 kubelet[2804]: E0307 02:36:29.210276 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.211446 kubelet[2804]: W0307 02:36:29.211130 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.211446 kubelet[2804]: E0307 02:36:29.211197 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:29.228414 kubelet[2804]: E0307 02:36:29.226951 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.228414 kubelet[2804]: W0307 02:36:29.227014 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.228414 kubelet[2804]: E0307 02:36:29.227047 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:29.230105 kubelet[2804]: E0307 02:36:29.230038 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.230162 kubelet[2804]: W0307 02:36:29.230108 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.230162 kubelet[2804]: E0307 02:36:29.230131 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:29.260976 kubelet[2804]: E0307 02:36:29.260786 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.260976 kubelet[2804]: W0307 02:36:29.260837 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.260976 kubelet[2804]: E0307 02:36:29.260910 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:29.270558 kubelet[2804]: E0307 02:36:29.270225 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.270558 kubelet[2804]: W0307 02:36:29.270266 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.270558 kubelet[2804]: E0307 02:36:29.270291 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:29.283843 kubelet[2804]: E0307 02:36:29.283708 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.284073 kubelet[2804]: W0307 02:36:29.284023 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.284073 kubelet[2804]: E0307 02:36:29.284049 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:29.292097 kubelet[2804]: E0307 02:36:29.292075 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.294363 kubelet[2804]: W0307 02:36:29.292256 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.294363 kubelet[2804]: E0307 02:36:29.292285 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:29.296580 kubelet[2804]: E0307 02:36:29.296563 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.296671 kubelet[2804]: W0307 02:36:29.296653 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.296930 kubelet[2804]: E0307 02:36:29.296839 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:29.297689 kubelet[2804]: E0307 02:36:29.297670 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.297773 kubelet[2804]: W0307 02:36:29.297757 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.297919 kubelet[2804]: E0307 02:36:29.297844 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:29.299704 kubelet[2804]: E0307 02:36:29.299644 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.299843 kubelet[2804]: W0307 02:36:29.299825 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.300065 kubelet[2804]: E0307 02:36:29.300000 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:29.304120 kubelet[2804]: E0307 02:36:29.304012 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.304249 kubelet[2804]: W0307 02:36:29.304029 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.304249 kubelet[2804]: E0307 02:36:29.304480 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:29.307560 kubelet[2804]: E0307 02:36:29.307450 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.307560 kubelet[2804]: W0307 02:36:29.307465 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.307560 kubelet[2804]: E0307 02:36:29.307481 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:29.308421 kubelet[2804]: E0307 02:36:29.308139 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.308421 kubelet[2804]: W0307 02:36:29.308150 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.308421 kubelet[2804]: E0307 02:36:29.308164 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:29.309809 kubelet[2804]: E0307 02:36:29.309790 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.310095 kubelet[2804]: W0307 02:36:29.309992 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.310587 kubelet[2804]: E0307 02:36:29.310204 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:29.313611 kubelet[2804]: E0307 02:36:29.313596 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.314607 kubelet[2804]: W0307 02:36:29.313806 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.315044 kubelet[2804]: E0307 02:36:29.314741 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:29.316957 kubelet[2804]: E0307 02:36:29.316941 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.317261 kubelet[2804]: W0307 02:36:29.317160 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.317646 kubelet[2804]: E0307 02:36:29.317498 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:29.325955 kubelet[2804]: E0307 02:36:29.325615 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.326747 kubelet[2804]: W0307 02:36:29.326273 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.326747 kubelet[2804]: E0307 02:36:29.326294 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:29.333254 kubelet[2804]: E0307 02:36:29.333125 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.336237 kubelet[2804]: W0307 02:36:29.333564 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.339417 kubelet[2804]: E0307 02:36:29.339271 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:29.343178 kubelet[2804]: E0307 02:36:29.343160 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.343271 kubelet[2804]: W0307 02:36:29.343255 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.345958 kubelet[2804]: E0307 02:36:29.345057 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:29.345958 kubelet[2804]: E0307 02:36:29.345754 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.345958 kubelet[2804]: W0307 02:36:29.345766 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.345958 kubelet[2804]: E0307 02:36:29.345780 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:29.351189 kubelet[2804]: E0307 02:36:29.349737 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.351189 kubelet[2804]: W0307 02:36:29.349794 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.351189 kubelet[2804]: E0307 02:36:29.349827 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:29.352831 kubelet[2804]: E0307 02:36:29.352648 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.352831 kubelet[2804]: W0307 02:36:29.352695 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.352831 kubelet[2804]: E0307 02:36:29.352713 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:29.354454 kubelet[2804]: E0307 02:36:29.353451 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.354454 kubelet[2804]: W0307 02:36:29.353464 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.354454 kubelet[2804]: E0307 02:36:29.353476 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:29.354454 kubelet[2804]: E0307 02:36:29.354288 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.354454 kubelet[2804]: W0307 02:36:29.354299 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.354454 kubelet[2804]: E0307 02:36:29.354376 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:29.355124 kubelet[2804]: E0307 02:36:29.355066 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.355124 kubelet[2804]: W0307 02:36:29.355076 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.355124 kubelet[2804]: E0307 02:36:29.355088 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:29.356114 kubelet[2804]: E0307 02:36:29.355846 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.356114 kubelet[2804]: W0307 02:36:29.355935 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.356114 kubelet[2804]: E0307 02:36:29.356008 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:29.356114 kubelet[2804]: E0307 02:36:29.356488 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.356114 kubelet[2804]: W0307 02:36:29.356498 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.356114 kubelet[2804]: E0307 02:36:29.356511 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:29.358738 kubelet[2804]: E0307 02:36:29.358511 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.358738 kubelet[2804]: W0307 02:36:29.358557 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.358738 kubelet[2804]: E0307 02:36:29.358569 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:29.362785 kubelet[2804]: E0307 02:36:29.359581 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.362785 kubelet[2804]: W0307 02:36:29.359602 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.362785 kubelet[2804]: E0307 02:36:29.359676 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:29.362785 kubelet[2804]: E0307 02:36:29.361616 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.362785 kubelet[2804]: W0307 02:36:29.361629 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.362785 kubelet[2804]: E0307 02:36:29.361643 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:29.363234 containerd[1560]: time="2026-03-07T02:36:29.360458406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fg5wh,Uid:5906d80d-3038-400b-ba70-b238cc5a588d,Namespace:calico-system,Attempt:0,}" Mar 7 02:36:29.457637 kubelet[2804]: E0307 02:36:29.457532 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:29.457637 kubelet[2804]: W0307 02:36:29.457600 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:29.457637 kubelet[2804]: E0307 02:36:29.457631 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:29.549420 containerd[1560]: time="2026-03-07T02:36:29.548948827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-874d4cd96-mtjp8,Uid:28a853bf-02c5-4abf-bb49-e610acf765e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"96c9bb200907a0b422ca5bd1870ac6e3cea2e019155c8c58da3180b44c99eea5\"" Mar 7 02:36:29.563976 kubelet[2804]: E0307 02:36:29.561625 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:36:29.579280 containerd[1560]: time="2026-03-07T02:36:29.578080979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 02:36:29.580071 containerd[1560]: time="2026-03-07T02:36:29.579586959Z" level=info msg="connecting to shim 032829de2a16d0ba9181a631a5fd5178bdfa463a4ac98172267884c6a13b442f" address="unix:///run/containerd/s/527f431209e132ae36d9803fdd9de9cb0e0d3854297b1ebe21ee9055d2e1464b" namespace=k8s.io protocol=ttrpc version=3 Mar 7 02:36:29.795251 systemd[1]: Started cri-containerd-032829de2a16d0ba9181a631a5fd5178bdfa463a4ac98172267884c6a13b442f.scope - libcontainer container 032829de2a16d0ba9181a631a5fd5178bdfa463a4ac98172267884c6a13b442f. 
Mar 7 02:36:29.918186 containerd[1560]: time="2026-03-07T02:36:29.918027388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fg5wh,Uid:5906d80d-3038-400b-ba70-b238cc5a588d,Namespace:calico-system,Attempt:0,} returns sandbox id \"032829de2a16d0ba9181a631a5fd5178bdfa463a4ac98172267884c6a13b442f\"" Mar 7 02:36:30.611993 kubelet[2804]: E0307 02:36:30.611734 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:36:30.628820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1562977479.mount: Deactivated successfully. Mar 7 02:36:32.612608 kubelet[2804]: E0307 02:36:32.612458 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:36:34.614687 kubelet[2804]: E0307 02:36:34.612038 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:36:36.613941 kubelet[2804]: E0307 02:36:36.613835 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:36:37.000670 
containerd[1560]: time="2026-03-07T02:36:37.000420805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:36:37.012511 containerd[1560]: time="2026-03-07T02:36:37.012474618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 7 02:36:37.022029 containerd[1560]: time="2026-03-07T02:36:37.021721105Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:36:37.031262 containerd[1560]: time="2026-03-07T02:36:37.031126666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:36:37.032580 containerd[1560]: time="2026-03-07T02:36:37.032445049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 7.454269764s" Mar 7 02:36:37.032580 containerd[1560]: time="2026-03-07T02:36:37.032487779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 7 02:36:37.033837 containerd[1560]: time="2026-03-07T02:36:37.033766282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 02:36:37.122044 containerd[1560]: time="2026-03-07T02:36:37.116652075Z" level=info msg="CreateContainer within sandbox \"96c9bb200907a0b422ca5bd1870ac6e3cea2e019155c8c58da3180b44c99eea5\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 02:36:37.171765 containerd[1560]: time="2026-03-07T02:36:37.171631135Z" level=info msg="Container 1957b2791784cd267262e7b432a790200cf22ed89c4c2b80c090a1e25c3fbb4e: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:36:37.234408 containerd[1560]: time="2026-03-07T02:36:37.234001619Z" level=info msg="CreateContainer within sandbox \"96c9bb200907a0b422ca5bd1870ac6e3cea2e019155c8c58da3180b44c99eea5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1957b2791784cd267262e7b432a790200cf22ed89c4c2b80c090a1e25c3fbb4e\"" Mar 7 02:36:37.240964 containerd[1560]: time="2026-03-07T02:36:37.237252789Z" level=info msg="StartContainer for \"1957b2791784cd267262e7b432a790200cf22ed89c4c2b80c090a1e25c3fbb4e\"" Mar 7 02:36:37.247175 containerd[1560]: time="2026-03-07T02:36:37.244798547Z" level=info msg="connecting to shim 1957b2791784cd267262e7b432a790200cf22ed89c4c2b80c090a1e25c3fbb4e" address="unix:///run/containerd/s/9b7821a127b77cefe0638392677d07dc1dcc95cff99615d42b0bc3aaff4986f9" protocol=ttrpc version=3 Mar 7 02:36:37.396130 systemd[1]: Started cri-containerd-1957b2791784cd267262e7b432a790200cf22ed89c4c2b80c090a1e25c3fbb4e.scope - libcontainer container 1957b2791784cd267262e7b432a790200cf22ed89c4c2b80c090a1e25c3fbb4e. 
Mar 7 02:36:37.649927 containerd[1560]: time="2026-03-07T02:36:37.649692517Z" level=info msg="StartContainer for \"1957b2791784cd267262e7b432a790200cf22ed89c4c2b80c090a1e25c3fbb4e\" returns successfully" Mar 7 02:36:37.985404 kubelet[2804]: E0307 02:36:37.985058 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:36:37.989702 kubelet[2804]: E0307 02:36:37.987556 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:37.989702 kubelet[2804]: W0307 02:36:37.987569 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:37.989702 kubelet[2804]: E0307 02:36:37.987588 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:37.989702 kubelet[2804]: E0307 02:36:37.988072 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:37.989702 kubelet[2804]: W0307 02:36:37.988081 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:37.989702 kubelet[2804]: E0307 02:36:37.988093 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:37.989702 kubelet[2804]: E0307 02:36:37.988301 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:37.989702 kubelet[2804]: W0307 02:36:37.988377 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:37.989702 kubelet[2804]: E0307 02:36:37.988389 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:37.989702 kubelet[2804]: E0307 02:36:37.989068 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:37.993577 kubelet[2804]: W0307 02:36:37.989077 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:37.993577 kubelet[2804]: E0307 02:36:37.989088 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:37.993577 kubelet[2804]: E0307 02:36:37.990484 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:37.993577 kubelet[2804]: W0307 02:36:37.990495 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:37.993577 kubelet[2804]: E0307 02:36:37.990509 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:37.993577 kubelet[2804]: E0307 02:36:37.990771 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:37.993577 kubelet[2804]: W0307 02:36:37.990784 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:37.993577 kubelet[2804]: E0307 02:36:37.990797 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:37.993577 kubelet[2804]: E0307 02:36:37.993040 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:37.993577 kubelet[2804]: W0307 02:36:37.993052 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:37.998481 kubelet[2804]: E0307 02:36:37.993065 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:37.998481 kubelet[2804]: E0307 02:36:37.993510 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:37.998481 kubelet[2804]: W0307 02:36:37.993522 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:37.998481 kubelet[2804]: E0307 02:36:37.993533 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:37.998481 kubelet[2804]: E0307 02:36:37.996637 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:37.998481 kubelet[2804]: W0307 02:36:37.996650 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:37.998481 kubelet[2804]: E0307 02:36:37.996664 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:38.004440 kubelet[2804]: E0307 02:36:38.004191 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.004440 kubelet[2804]: W0307 02:36:38.004205 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.004440 kubelet[2804]: E0307 02:36:38.004221 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:38.012589 kubelet[2804]: E0307 02:36:38.012460 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.012589 kubelet[2804]: W0307 02:36:38.012527 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.012589 kubelet[2804]: E0307 02:36:38.012546 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:38.014659 kubelet[2804]: E0307 02:36:38.014467 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.014659 kubelet[2804]: W0307 02:36:38.014484 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.014659 kubelet[2804]: E0307 02:36:38.014500 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:38.034822 kubelet[2804]: E0307 02:36:38.033662 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.034822 kubelet[2804]: W0307 02:36:38.033761 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.034822 kubelet[2804]: E0307 02:36:38.033785 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:38.038552 kubelet[2804]: E0307 02:36:38.038273 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.038552 kubelet[2804]: W0307 02:36:38.038288 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.039841 kubelet[2804]: E0307 02:36:38.038304 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:38.044742 kubelet[2804]: E0307 02:36:38.044646 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.045085 kubelet[2804]: W0307 02:36:38.044997 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.045672 kubelet[2804]: E0307 02:36:38.045489 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:38.050213 kubelet[2804]: E0307 02:36:38.049840 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.050213 kubelet[2804]: W0307 02:36:38.049916 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.050213 kubelet[2804]: E0307 02:36:38.049935 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:38.054302 kubelet[2804]: E0307 02:36:38.050826 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.054302 kubelet[2804]: W0307 02:36:38.050924 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.054302 kubelet[2804]: E0307 02:36:38.050940 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:38.055116 kubelet[2804]: E0307 02:36:38.054797 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.055116 kubelet[2804]: W0307 02:36:38.054814 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.055116 kubelet[2804]: E0307 02:36:38.054826 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:38.059563 kubelet[2804]: E0307 02:36:38.059166 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.059563 kubelet[2804]: W0307 02:36:38.059179 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.059563 kubelet[2804]: E0307 02:36:38.059200 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:38.061972 kubelet[2804]: E0307 02:36:38.060936 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.061972 kubelet[2804]: W0307 02:36:38.060952 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.061972 kubelet[2804]: E0307 02:36:38.060966 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:38.063647 kubelet[2804]: E0307 02:36:38.062485 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.063647 kubelet[2804]: W0307 02:36:38.062496 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.063647 kubelet[2804]: E0307 02:36:38.062508 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:38.071766 kubelet[2804]: E0307 02:36:38.070249 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.071766 kubelet[2804]: W0307 02:36:38.070271 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.071766 kubelet[2804]: E0307 02:36:38.070291 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:38.076949 kubelet[2804]: E0307 02:36:38.072514 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.076949 kubelet[2804]: W0307 02:36:38.072528 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.076949 kubelet[2804]: E0307 02:36:38.072543 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:38.076949 kubelet[2804]: E0307 02:36:38.073552 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.076949 kubelet[2804]: W0307 02:36:38.073567 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.076949 kubelet[2804]: E0307 02:36:38.073583 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:38.076949 kubelet[2804]: E0307 02:36:38.074089 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.076949 kubelet[2804]: W0307 02:36:38.074100 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.076949 kubelet[2804]: E0307 02:36:38.074113 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:38.076949 kubelet[2804]: E0307 02:36:38.076451 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.077809 kubelet[2804]: W0307 02:36:38.076469 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.077809 kubelet[2804]: E0307 02:36:38.076482 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:38.080931 kubelet[2804]: E0307 02:36:38.078198 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.080931 kubelet[2804]: W0307 02:36:38.078209 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.080931 kubelet[2804]: E0307 02:36:38.078223 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:38.082378 kubelet[2804]: E0307 02:36:38.082038 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.082378 kubelet[2804]: W0307 02:36:38.082053 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.082378 kubelet[2804]: E0307 02:36:38.082067 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:38.084974 kubelet[2804]: E0307 02:36:38.084683 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.084974 kubelet[2804]: W0307 02:36:38.084755 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.084974 kubelet[2804]: E0307 02:36:38.084772 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:38.087541 kubelet[2804]: E0307 02:36:38.087523 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.088581 kubelet[2804]: W0307 02:36:38.088419 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.088581 kubelet[2804]: E0307 02:36:38.088466 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:38.090939 kubelet[2804]: E0307 02:36:38.090748 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.090939 kubelet[2804]: W0307 02:36:38.090801 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.090939 kubelet[2804]: E0307 02:36:38.090815 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:38.094737 kubelet[2804]: E0307 02:36:38.094689 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.094737 kubelet[2804]: W0307 02:36:38.094708 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.094737 kubelet[2804]: E0307 02:36:38.094724 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:38.099141 kubelet[2804]: E0307 02:36:38.098990 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:38.099141 kubelet[2804]: W0307 02:36:38.099043 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:38.099141 kubelet[2804]: E0307 02:36:38.099062 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:38.404127 containerd[1560]: time="2026-03-07T02:36:38.404047538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:36:38.411612 containerd[1560]: time="2026-03-07T02:36:38.409762381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 7 02:36:38.417625 containerd[1560]: time="2026-03-07T02:36:38.415176766Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:36:38.424286 containerd[1560]: time="2026-03-07T02:36:38.424217398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:36:38.427542 containerd[1560]: time="2026-03-07T02:36:38.425721287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.391873474s" Mar 7 02:36:38.427542 containerd[1560]: time="2026-03-07T02:36:38.425758377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 7 02:36:38.455923 containerd[1560]: time="2026-03-07T02:36:38.455775858Z" level=info msg="CreateContainer within sandbox \"032829de2a16d0ba9181a631a5fd5178bdfa463a4ac98172267884c6a13b442f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 02:36:38.529503 containerd[1560]: time="2026-03-07T02:36:38.529422493Z" level=info msg="Container ecbbe3525f6f506693a09e72057abd97d6459a5722c54b265cfabfcf060a171f: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:36:38.589020 containerd[1560]: time="2026-03-07T02:36:38.585176668Z" level=info msg="CreateContainer within sandbox \"032829de2a16d0ba9181a631a5fd5178bdfa463a4ac98172267884c6a13b442f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ecbbe3525f6f506693a09e72057abd97d6459a5722c54b265cfabfcf060a171f\"" Mar 7 02:36:38.589020 containerd[1560]: time="2026-03-07T02:36:38.587986970Z" level=info msg="StartContainer for \"ecbbe3525f6f506693a09e72057abd97d6459a5722c54b265cfabfcf060a171f\"" Mar 7 02:36:38.595731 containerd[1560]: time="2026-03-07T02:36:38.594241785Z" level=info msg="connecting to shim ecbbe3525f6f506693a09e72057abd97d6459a5722c54b265cfabfcf060a171f" address="unix:///run/containerd/s/527f431209e132ae36d9803fdd9de9cb0e0d3854297b1ebe21ee9055d2e1464b" protocol=ttrpc version=3 Mar 7 02:36:38.613749 kubelet[2804]: E0307 02:36:38.613711 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:36:38.715127 systemd[1]: Started cri-containerd-ecbbe3525f6f506693a09e72057abd97d6459a5722c54b265cfabfcf060a171f.scope - libcontainer container ecbbe3525f6f506693a09e72057abd97d6459a5722c54b265cfabfcf060a171f. Mar 7 02:36:39.014447 kubelet[2804]: E0307 02:36:39.014204 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:36:39.087671 kubelet[2804]: E0307 02:36:39.085827 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:39.087671 kubelet[2804]: W0307 02:36:39.086071 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:39.087671 kubelet[2804]: E0307 02:36:39.086245 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:39.097575 kubelet[2804]: E0307 02:36:39.094032 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:39.097575 kubelet[2804]: W0307 02:36:39.094057 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:39.097575 kubelet[2804]: E0307 02:36:39.094079 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 02:36:39.106999 kubelet[2804]: E0307 02:36:39.106046 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 02:36:39.106999 kubelet[2804]: W0307 02:36:39.106108 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 02:36:39.106999 kubelet[2804]: E0307 02:36:39.106902 2804 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 02:36:39.114648 containerd[1560]: time="2026-03-07T02:36:39.114086636Z" level=info msg="StartContainer for \"ecbbe3525f6f506693a09e72057abd97d6459a5722c54b265cfabfcf060a171f\" returns successfully" Mar 7 02:36:39.121567 systemd[1]: cri-containerd-ecbbe3525f6f506693a09e72057abd97d6459a5722c54b265cfabfcf060a171f.scope: Deactivated successfully. 
Mar 7 02:36:39.141206 containerd[1560]: time="2026-03-07T02:36:39.139796297Z" level=info msg="received container exit event container_id:\"ecbbe3525f6f506693a09e72057abd97d6459a5722c54b265cfabfcf060a171f\" id:\"ecbbe3525f6f506693a09e72057abd97d6459a5722c54b265cfabfcf060a171f\" pid:3507 exited_at:{seconds:1772850999 nanos:137664568}" Mar 7 02:36:39.256647 kubelet[2804]: I0307 02:36:39.253602 2804 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-874d4cd96-mtjp8" podStartSLOduration=3.796480497 podStartE2EDuration="11.25358429s" podCreationTimestamp="2026-03-07 02:36:28 +0000 UTC" firstStartedPulling="2026-03-07 02:36:29.576534312 +0000 UTC m=+44.349313643" lastFinishedPulling="2026-03-07 02:36:37.033638086 +0000 UTC m=+51.806417436" observedRunningTime="2026-03-07 02:36:38.078943629 +0000 UTC m=+52.851722970" watchObservedRunningTime="2026-03-07 02:36:39.25358429 +0000 UTC m=+54.026363641" Mar 7 02:36:39.331574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecbbe3525f6f506693a09e72057abd97d6459a5722c54b265cfabfcf060a171f-rootfs.mount: Deactivated successfully. 
Mar 7 02:36:40.044159 kubelet[2804]: E0307 02:36:40.039809 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:36:40.057396 containerd[1560]: time="2026-03-07T02:36:40.056486960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 02:36:40.618524 kubelet[2804]: E0307 02:36:40.616547 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:36:42.617044 kubelet[2804]: E0307 02:36:42.616823 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:36:44.614069 kubelet[2804]: E0307 02:36:44.613590 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:36:46.611444 kubelet[2804]: E0307 02:36:46.611297 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:36:48.613644 kubelet[2804]: E0307 02:36:48.612228 2804 pod_workers.go:1324] "Error 
syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:36:50.613270 kubelet[2804]: E0307 02:36:50.612786 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:36:52.613014 kubelet[2804]: E0307 02:36:52.612456 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:36:54.613210 kubelet[2804]: E0307 02:36:54.611653 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:36:56.615599 kubelet[2804]: E0307 02:36:56.615417 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:36:57.624147 kubelet[2804]: E0307 02:36:57.619262 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:36:58.613001 kubelet[2804]: E0307 02:36:58.612532 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:37:00.615615 kubelet[2804]: E0307 02:37:00.613093 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:37:02.614986 kubelet[2804]: E0307 02:37:02.614462 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:37:04.612726 kubelet[2804]: E0307 02:37:04.612667 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:37:06.627476 kubelet[2804]: E0307 02:37:06.613848 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:37:08.612555 kubelet[2804]: 
E0307 02:37:08.611630 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:37:09.074792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1257100240.mount: Deactivated successfully. Mar 7 02:37:09.311917 containerd[1560]: time="2026-03-07T02:37:09.311717118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:09.319766 containerd[1560]: time="2026-03-07T02:37:09.318381761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 7 02:37:09.327596 containerd[1560]: time="2026-03-07T02:37:09.325576171Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:09.340100 containerd[1560]: time="2026-03-07T02:37:09.336744891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:09.340100 containerd[1560]: time="2026-03-07T02:37:09.337690005Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 29.28115215s" Mar 7 02:37:09.340100 containerd[1560]: time="2026-03-07T02:37:09.337718268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" 
returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 7 02:37:09.355706 containerd[1560]: time="2026-03-07T02:37:09.355110121Z" level=info msg="CreateContainer within sandbox \"032829de2a16d0ba9181a631a5fd5178bdfa463a4ac98172267884c6a13b442f\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 02:37:09.414247 containerd[1560]: time="2026-03-07T02:37:09.413602934Z" level=info msg="Container 86154f3da3da82db80ded1423f8c23a5d8fc077aaf75a87d75abc9863e08afe9: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:37:09.416846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2663727576.mount: Deactivated successfully. Mar 7 02:37:09.518018 containerd[1560]: time="2026-03-07T02:37:09.517812847Z" level=info msg="CreateContainer within sandbox \"032829de2a16d0ba9181a631a5fd5178bdfa463a4ac98172267884c6a13b442f\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"86154f3da3da82db80ded1423f8c23a5d8fc077aaf75a87d75abc9863e08afe9\"" Mar 7 02:37:09.524485 containerd[1560]: time="2026-03-07T02:37:09.520488081Z" level=info msg="StartContainer for \"86154f3da3da82db80ded1423f8c23a5d8fc077aaf75a87d75abc9863e08afe9\"" Mar 7 02:37:09.524485 containerd[1560]: time="2026-03-07T02:37:09.522534561Z" level=info msg="connecting to shim 86154f3da3da82db80ded1423f8c23a5d8fc077aaf75a87d75abc9863e08afe9" address="unix:///run/containerd/s/527f431209e132ae36d9803fdd9de9cb0e0d3854297b1ebe21ee9055d2e1464b" protocol=ttrpc version=3 Mar 7 02:37:09.614445 systemd[1]: Started cri-containerd-86154f3da3da82db80ded1423f8c23a5d8fc077aaf75a87d75abc9863e08afe9.scope - libcontainer container 86154f3da3da82db80ded1423f8c23a5d8fc077aaf75a87d75abc9863e08afe9. 
Mar 7 02:37:09.675080 kubelet[2804]: E0307 02:37:09.669755 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:37:09.675080 kubelet[2804]: E0307 02:37:09.673077 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:37:09.931802 containerd[1560]: time="2026-03-07T02:37:09.930048686Z" level=info msg="StartContainer for \"86154f3da3da82db80ded1423f8c23a5d8fc077aaf75a87d75abc9863e08afe9\" returns successfully" Mar 7 02:37:10.101041 systemd[1]: cri-containerd-86154f3da3da82db80ded1423f8c23a5d8fc077aaf75a87d75abc9863e08afe9.scope: Deactivated successfully. Mar 7 02:37:10.167628 containerd[1560]: time="2026-03-07T02:37:10.167461740Z" level=info msg="received container exit event container_id:\"86154f3da3da82db80ded1423f8c23a5d8fc077aaf75a87d75abc9863e08afe9\" id:\"86154f3da3da82db80ded1423f8c23a5d8fc077aaf75a87d75abc9863e08afe9\" pid:3576 exited_at:{seconds:1772851030 nanos:165816542}" Mar 7 02:37:10.321168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86154f3da3da82db80ded1423f8c23a5d8fc077aaf75a87d75abc9863e08afe9-rootfs.mount: Deactivated successfully. 
Mar 7 02:37:11.369163 containerd[1560]: time="2026-03-07T02:37:11.367808922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 7 02:37:11.615821 kubelet[2804]: E0307 02:37:11.613697 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:37:13.633459 kubelet[2804]: E0307 02:37:13.633156 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:37:13.637274 kubelet[2804]: E0307 02:37:13.634430 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:37:15.617824 kubelet[2804]: E0307 02:37:15.614724 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:37:15.617824 kubelet[2804]: E0307 02:37:15.616150 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:37:17.611963 kubelet[2804]: E0307 02:37:17.611654 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:37:19.617012 kubelet[2804]: E0307 02:37:19.615443 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:37:21.613194 kubelet[2804]: E0307 02:37:21.613143 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:37:23.612413 kubelet[2804]: E0307 02:37:23.612071 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:37:25.037034 containerd[1560]: time="2026-03-07T02:37:25.036974897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:25.042377 containerd[1560]: time="2026-03-07T02:37:25.042264923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 7 02:37:25.053487 containerd[1560]: time="2026-03-07T02:37:25.052287579Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:25.059785 containerd[1560]: 
time="2026-03-07T02:37:25.059589900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:25.061968 containerd[1560]: time="2026-03-07T02:37:25.061803899Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 13.693952588s" Mar 7 02:37:25.061968 containerd[1560]: time="2026-03-07T02:37:25.061839846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 7 02:37:25.087225 containerd[1560]: time="2026-03-07T02:37:25.087045715Z" level=info msg="CreateContainer within sandbox \"032829de2a16d0ba9181a631a5fd5178bdfa463a4ac98172267884c6a13b442f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 02:37:25.124982 containerd[1560]: time="2026-03-07T02:37:25.122571347Z" level=info msg="Container f4cac8a2e0b6b2ce1f9975f7cee866f49f0d82e1d0c9acba6819bc7eed956db0: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:37:25.173405 containerd[1560]: time="2026-03-07T02:37:25.173236182Z" level=info msg="CreateContainer within sandbox \"032829de2a16d0ba9181a631a5fd5178bdfa463a4ac98172267884c6a13b442f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f4cac8a2e0b6b2ce1f9975f7cee866f49f0d82e1d0c9acba6819bc7eed956db0\"" Mar 7 02:37:25.175403 containerd[1560]: time="2026-03-07T02:37:25.175240885Z" level=info msg="StartContainer for \"f4cac8a2e0b6b2ce1f9975f7cee866f49f0d82e1d0c9acba6819bc7eed956db0\"" Mar 7 02:37:25.183110 containerd[1560]: time="2026-03-07T02:37:25.183064400Z" 
level=info msg="connecting to shim f4cac8a2e0b6b2ce1f9975f7cee866f49f0d82e1d0c9acba6819bc7eed956db0" address="unix:///run/containerd/s/527f431209e132ae36d9803fdd9de9cb0e0d3854297b1ebe21ee9055d2e1464b" protocol=ttrpc version=3 Mar 7 02:37:25.249924 systemd[1]: Started cri-containerd-f4cac8a2e0b6b2ce1f9975f7cee866f49f0d82e1d0c9acba6819bc7eed956db0.scope - libcontainer container f4cac8a2e0b6b2ce1f9975f7cee866f49f0d82e1d0c9acba6819bc7eed956db0. Mar 7 02:37:25.547180 containerd[1560]: time="2026-03-07T02:37:25.547084575Z" level=info msg="StartContainer for \"f4cac8a2e0b6b2ce1f9975f7cee866f49f0d82e1d0c9acba6819bc7eed956db0\" returns successfully" Mar 7 02:37:25.619519 kubelet[2804]: E0307 02:37:25.619466 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:37:26.267827 update_engine[1544]: I20260307 02:37:26.267226 1544 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 7 02:37:26.267827 update_engine[1544]: I20260307 02:37:26.267612 1544 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 7 02:37:26.268560 update_engine[1544]: I20260307 02:37:26.268426 1544 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 7 02:37:26.269794 update_engine[1544]: I20260307 02:37:26.269682 1544 omaha_request_params.cc:62] Current group set to stable Mar 7 02:37:26.275798 update_engine[1544]: I20260307 02:37:26.270722 1544 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 7 02:37:26.275798 update_engine[1544]: I20260307 02:37:26.270740 1544 update_attempter.cc:643] Scheduling an action processor start. 
Mar 7 02:37:26.275798 update_engine[1544]: I20260307 02:37:26.270805 1544 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 02:37:26.275798 update_engine[1544]: I20260307 02:37:26.270994 1544 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 7 02:37:26.275798 update_engine[1544]: I20260307 02:37:26.271224 1544 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 02:37:26.275798 update_engine[1544]: I20260307 02:37:26.271240 1544 omaha_request_action.cc:272] Request: Mar 7 02:37:26.275798 update_engine[1544]: Mar 7 02:37:26.275798 update_engine[1544]: Mar 7 02:37:26.275798 update_engine[1544]: Mar 7 02:37:26.275798 update_engine[1544]: Mar 7 02:37:26.275798 update_engine[1544]: Mar 7 02:37:26.275798 update_engine[1544]: Mar 7 02:37:26.275798 update_engine[1544]: Mar 7 02:37:26.275798 update_engine[1544]: Mar 7 02:37:26.275798 update_engine[1544]: I20260307 02:37:26.271381 1544 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 02:37:26.289848 update_engine[1544]: I20260307 02:37:26.288555 1544 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 02:37:26.291613 update_engine[1544]: I20260307 02:37:26.291556 1544 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 7 02:37:26.294195 locksmithd[1594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 7 02:37:26.317809 update_engine[1544]: E20260307 02:37:26.317740 1544 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 02:37:26.319581 update_engine[1544]: I20260307 02:37:26.318183 1544 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 7 02:37:27.612402 kubelet[2804]: E0307 02:37:27.611703 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:37:27.696542 systemd[1]: cri-containerd-f4cac8a2e0b6b2ce1f9975f7cee866f49f0d82e1d0c9acba6819bc7eed956db0.scope: Deactivated successfully. Mar 7 02:37:27.699862 systemd[1]: cri-containerd-f4cac8a2e0b6b2ce1f9975f7cee866f49f0d82e1d0c9acba6819bc7eed956db0.scope: Consumed 1.216s CPU time, 185.9M memory peak, 3.7M read from disk, 177M written to disk. Mar 7 02:37:27.713393 containerd[1560]: time="2026-03-07T02:37:27.713257994Z" level=info msg="received container exit event container_id:\"f4cac8a2e0b6b2ce1f9975f7cee866f49f0d82e1d0c9acba6819bc7eed956db0\" id:\"f4cac8a2e0b6b2ce1f9975f7cee866f49f0d82e1d0c9acba6819bc7eed956db0\" pid:3635 exited_at:{seconds:1772851047 nanos:708648414}" Mar 7 02:37:27.853468 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4cac8a2e0b6b2ce1f9975f7cee866f49f0d82e1d0c9acba6819bc7eed956db0-rootfs.mount: Deactivated successfully. 
Mar 7 02:37:27.865717 kubelet[2804]: I0307 02:37:27.865415 2804 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 7 02:37:28.155466 systemd[1]: Created slice kubepods-burstable-pod4d19534f_dd8a_411c_bf09_f1512f0031bf.slice - libcontainer container kubepods-burstable-pod4d19534f_dd8a_411c_bf09_f1512f0031bf.slice. Mar 7 02:37:28.177061 kubelet[2804]: I0307 02:37:28.175981 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdm7z\" (UniqueName: \"kubernetes.io/projected/4d19534f-dd8a-411c-bf09-f1512f0031bf-kube-api-access-cdm7z\") pod \"coredns-7d764666f9-v7whr\" (UID: \"4d19534f-dd8a-411c-bf09-f1512f0031bf\") " pod="kube-system/coredns-7d764666f9-v7whr" Mar 7 02:37:28.177061 kubelet[2804]: I0307 02:37:28.176190 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d19534f-dd8a-411c-bf09-f1512f0031bf-config-volume\") pod \"coredns-7d764666f9-v7whr\" (UID: \"4d19534f-dd8a-411c-bf09-f1512f0031bf\") " pod="kube-system/coredns-7d764666f9-v7whr" Mar 7 02:37:28.177061 kubelet[2804]: I0307 02:37:28.176224 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0acb54c3-06f2-4b58-b339-52bad27423b3-tigera-ca-bundle\") pod \"calico-kube-controllers-745c6c977f-s7hmh\" (UID: \"0acb54c3-06f2-4b58-b339-52bad27423b3\") " pod="calico-system/calico-kube-controllers-745c6c977f-s7hmh" Mar 7 02:37:28.177061 kubelet[2804]: I0307 02:37:28.176249 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfsqv\" (UniqueName: \"kubernetes.io/projected/0acb54c3-06f2-4b58-b339-52bad27423b3-kube-api-access-jfsqv\") pod \"calico-kube-controllers-745c6c977f-s7hmh\" (UID: \"0acb54c3-06f2-4b58-b339-52bad27423b3\") " 
pod="calico-system/calico-kube-controllers-745c6c977f-s7hmh" Mar 7 02:37:28.184789 systemd[1]: Created slice kubepods-besteffort-pod0acb54c3_06f2_4b58_b339_52bad27423b3.slice - libcontainer container kubepods-besteffort-pod0acb54c3_06f2_4b58_b339_52bad27423b3.slice. Mar 7 02:37:28.211713 systemd[1]: Created slice kubepods-burstable-pod78cc04e9_c73f_4360_b338_165caff8aacb.slice - libcontainer container kubepods-burstable-pod78cc04e9_c73f_4360_b338_165caff8aacb.slice. Mar 7 02:37:28.243476 systemd[1]: Created slice kubepods-besteffort-pod3dba2c1b_1cbe_4de8_a6b5_7a6b3d05b110.slice - libcontainer container kubepods-besteffort-pod3dba2c1b_1cbe_4de8_a6b5_7a6b3d05b110.slice. Mar 7 02:37:28.270745 systemd[1]: Created slice kubepods-besteffort-podfc355c46_37cb_4d45_9184_9ae30d05953b.slice - libcontainer container kubepods-besteffort-podfc355c46_37cb_4d45_9184_9ae30d05953b.slice. Mar 7 02:37:28.280017 kubelet[2804]: I0307 02:37:28.279415 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc355c46-37cb-4d45-9184-9ae30d05953b-config\") pod \"goldmane-9f7667bb8-lwklk\" (UID: \"fc355c46-37cb-4d45-9184-9ae30d05953b\") " pod="calico-system/goldmane-9f7667bb8-lwklk" Mar 7 02:37:28.280017 kubelet[2804]: I0307 02:37:28.279502 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc355c46-37cb-4d45-9184-9ae30d05953b-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-lwklk\" (UID: \"fc355c46-37cb-4d45-9184-9ae30d05953b\") " pod="calico-system/goldmane-9f7667bb8-lwklk" Mar 7 02:37:28.280017 kubelet[2804]: I0307 02:37:28.279530 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/89eb1298-3a04-478f-a05e-924b3e06ed3b-nginx-config\") pod \"whisker-6cdc5fb799-78qss\" (UID: 
\"89eb1298-3a04-478f-a05e-924b3e06ed3b\") " pod="calico-system/whisker-6cdc5fb799-78qss" Mar 7 02:37:28.280017 kubelet[2804]: I0307 02:37:28.279556 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lhlj\" (UniqueName: \"kubernetes.io/projected/510194e6-ef76-4c44-9390-a72778675d85-kube-api-access-7lhlj\") pod \"calico-apiserver-6c7f9f5b5-g9jfr\" (UID: \"510194e6-ef76-4c44-9390-a72778675d85\") " pod="calico-system/calico-apiserver-6c7f9f5b5-g9jfr" Mar 7 02:37:28.280017 kubelet[2804]: I0307 02:37:28.279581 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89eb1298-3a04-478f-a05e-924b3e06ed3b-whisker-ca-bundle\") pod \"whisker-6cdc5fb799-78qss\" (UID: \"89eb1298-3a04-478f-a05e-924b3e06ed3b\") " pod="calico-system/whisker-6cdc5fb799-78qss" Mar 7 02:37:28.280420 kubelet[2804]: I0307 02:37:28.279602 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vpkz\" (UniqueName: \"kubernetes.io/projected/3dba2c1b-1cbe-4de8-a6b5-7a6b3d05b110-kube-api-access-5vpkz\") pod \"calico-apiserver-6c7f9f5b5-g9dhh\" (UID: \"3dba2c1b-1cbe-4de8-a6b5-7a6b3d05b110\") " pod="calico-system/calico-apiserver-6c7f9f5b5-g9dhh" Mar 7 02:37:28.280420 kubelet[2804]: I0307 02:37:28.279627 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fc355c46-37cb-4d45-9184-9ae30d05953b-goldmane-key-pair\") pod \"goldmane-9f7667bb8-lwklk\" (UID: \"fc355c46-37cb-4d45-9184-9ae30d05953b\") " pod="calico-system/goldmane-9f7667bb8-lwklk" Mar 7 02:37:28.280420 kubelet[2804]: I0307 02:37:28.279652 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/510194e6-ef76-4c44-9390-a72778675d85-calico-apiserver-certs\") pod \"calico-apiserver-6c7f9f5b5-g9jfr\" (UID: \"510194e6-ef76-4c44-9390-a72778675d85\") " pod="calico-system/calico-apiserver-6c7f9f5b5-g9jfr" Mar 7 02:37:28.280420 kubelet[2804]: I0307 02:37:28.279679 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btrjx\" (UniqueName: \"kubernetes.io/projected/89eb1298-3a04-478f-a05e-924b3e06ed3b-kube-api-access-btrjx\") pod \"whisker-6cdc5fb799-78qss\" (UID: \"89eb1298-3a04-478f-a05e-924b3e06ed3b\") " pod="calico-system/whisker-6cdc5fb799-78qss" Mar 7 02:37:28.280420 kubelet[2804]: I0307 02:37:28.279727 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h66s\" (UniqueName: \"kubernetes.io/projected/fc355c46-37cb-4d45-9184-9ae30d05953b-kube-api-access-4h66s\") pod \"goldmane-9f7667bb8-lwklk\" (UID: \"fc355c46-37cb-4d45-9184-9ae30d05953b\") " pod="calico-system/goldmane-9f7667bb8-lwklk" Mar 7 02:37:28.280636 kubelet[2804]: I0307 02:37:28.279750 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/89eb1298-3a04-478f-a05e-924b3e06ed3b-whisker-backend-key-pair\") pod \"whisker-6cdc5fb799-78qss\" (UID: \"89eb1298-3a04-478f-a05e-924b3e06ed3b\") " pod="calico-system/whisker-6cdc5fb799-78qss" Mar 7 02:37:28.280636 kubelet[2804]: I0307 02:37:28.279772 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78cc04e9-c73f-4360-b338-165caff8aacb-config-volume\") pod \"coredns-7d764666f9-nspbn\" (UID: \"78cc04e9-c73f-4360-b338-165caff8aacb\") " pod="kube-system/coredns-7d764666f9-nspbn" Mar 7 02:37:28.280636 kubelet[2804]: I0307 02:37:28.279793 2804 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6jb9\" (UniqueName: \"kubernetes.io/projected/78cc04e9-c73f-4360-b338-165caff8aacb-kube-api-access-v6jb9\") pod \"coredns-7d764666f9-nspbn\" (UID: \"78cc04e9-c73f-4360-b338-165caff8aacb\") " pod="kube-system/coredns-7d764666f9-nspbn" Mar 7 02:37:28.280636 kubelet[2804]: I0307 02:37:28.279825 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3dba2c1b-1cbe-4de8-a6b5-7a6b3d05b110-calico-apiserver-certs\") pod \"calico-apiserver-6c7f9f5b5-g9dhh\" (UID: \"3dba2c1b-1cbe-4de8-a6b5-7a6b3d05b110\") " pod="calico-system/calico-apiserver-6c7f9f5b5-g9dhh" Mar 7 02:37:28.306794 systemd[1]: Created slice kubepods-besteffort-pod510194e6_ef76_4c44_9390_a72778675d85.slice - libcontainer container kubepods-besteffort-pod510194e6_ef76_4c44_9390_a72778675d85.slice. Mar 7 02:37:28.361245 systemd[1]: Created slice kubepods-besteffort-pod89eb1298_3a04_478f_a05e_924b3e06ed3b.slice - libcontainer container kubepods-besteffort-pod89eb1298_3a04_478f_a05e_924b3e06ed3b.slice. 
Mar 7 02:37:28.480444 kubelet[2804]: E0307 02:37:28.479832 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:37:28.484044 containerd[1560]: time="2026-03-07T02:37:28.481776303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-v7whr,Uid:4d19534f-dd8a-411c-bf09-f1512f0031bf,Namespace:kube-system,Attempt:0,}" Mar 7 02:37:28.507743 containerd[1560]: time="2026-03-07T02:37:28.507485678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745c6c977f-s7hmh,Uid:0acb54c3-06f2-4b58-b339-52bad27423b3,Namespace:calico-system,Attempt:0,}" Mar 7 02:37:28.545252 kubelet[2804]: E0307 02:37:28.542583 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:37:28.552619 containerd[1560]: time="2026-03-07T02:37:28.552579037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-nspbn,Uid:78cc04e9-c73f-4360-b338-165caff8aacb,Namespace:kube-system,Attempt:0,}" Mar 7 02:37:28.579972 containerd[1560]: time="2026-03-07T02:37:28.579921602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7f9f5b5-g9dhh,Uid:3dba2c1b-1cbe-4de8-a6b5-7a6b3d05b110,Namespace:calico-system,Attempt:0,}" Mar 7 02:37:28.616867 containerd[1560]: time="2026-03-07T02:37:28.616521030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-lwklk,Uid:fc355c46-37cb-4d45-9184-9ae30d05953b,Namespace:calico-system,Attempt:0,}" Mar 7 02:37:28.649259 containerd[1560]: time="2026-03-07T02:37:28.649174377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7f9f5b5-g9jfr,Uid:510194e6-ef76-4c44-9390-a72778675d85,Namespace:calico-system,Attempt:0,}" Mar 7 02:37:28.723399 containerd[1560]: 
time="2026-03-07T02:37:28.722527030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cdc5fb799-78qss,Uid:89eb1298-3a04-478f-a05e-924b3e06ed3b,Namespace:calico-system,Attempt:0,}" Mar 7 02:37:28.767810 containerd[1560]: time="2026-03-07T02:37:28.767668040Z" level=info msg="CreateContainer within sandbox \"032829de2a16d0ba9181a631a5fd5178bdfa463a4ac98172267884c6a13b442f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 02:37:28.841067 containerd[1560]: time="2026-03-07T02:37:28.840450654Z" level=info msg="Container 80a2104ebb7ad068b38688d62c5309ccee19835342590e0c644fb2e5dee2ffb9: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:37:28.883581 containerd[1560]: time="2026-03-07T02:37:28.883407288Z" level=info msg="CreateContainer within sandbox \"032829de2a16d0ba9181a631a5fd5178bdfa463a4ac98172267884c6a13b442f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"80a2104ebb7ad068b38688d62c5309ccee19835342590e0c644fb2e5dee2ffb9\"" Mar 7 02:37:28.884558 containerd[1560]: time="2026-03-07T02:37:28.884527480Z" level=info msg="StartContainer for \"80a2104ebb7ad068b38688d62c5309ccee19835342590e0c644fb2e5dee2ffb9\"" Mar 7 02:37:28.888502 containerd[1560]: time="2026-03-07T02:37:28.888368410Z" level=info msg="connecting to shim 80a2104ebb7ad068b38688d62c5309ccee19835342590e0c644fb2e5dee2ffb9" address="unix:///run/containerd/s/527f431209e132ae36d9803fdd9de9cb0e0d3854297b1ebe21ee9055d2e1464b" protocol=ttrpc version=3 Mar 7 02:37:28.959383 systemd[1]: Started cri-containerd-80a2104ebb7ad068b38688d62c5309ccee19835342590e0c644fb2e5dee2ffb9.scope - libcontainer container 80a2104ebb7ad068b38688d62c5309ccee19835342590e0c644fb2e5dee2ffb9. 
Mar 7 02:37:29.097198 containerd[1560]: time="2026-03-07T02:37:29.097137843Z" level=error msg="Failed to destroy network for sandbox \"d49131206e19a3e0ff45df4d33e51aee28fce077e6526e9f29d371e2d1f4a199\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.103663 containerd[1560]: time="2026-03-07T02:37:29.103142703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-v7whr,Uid:4d19534f-dd8a-411c-bf09-f1512f0031bf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d49131206e19a3e0ff45df4d33e51aee28fce077e6526e9f29d371e2d1f4a199\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.106854 systemd[1]: run-netns-cni\x2d0fccc3db\x2d4b81\x2dae9d\x2da56f\x2dbdfe06dc4d5d.mount: Deactivated successfully. 
Mar 7 02:37:29.130663 kubelet[2804]: E0307 02:37:29.130296 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d49131206e19a3e0ff45df4d33e51aee28fce077e6526e9f29d371e2d1f4a199\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.130663 kubelet[2804]: E0307 02:37:29.130545 2804 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d49131206e19a3e0ff45df4d33e51aee28fce077e6526e9f29d371e2d1f4a199\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-v7whr" Mar 7 02:37:29.134529 kubelet[2804]: E0307 02:37:29.130647 2804 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d49131206e19a3e0ff45df4d33e51aee28fce077e6526e9f29d371e2d1f4a199\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-v7whr" Mar 7 02:37:29.134529 kubelet[2804]: E0307 02:37:29.130735 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-v7whr_kube-system(4d19534f-dd8a-411c-bf09-f1512f0031bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-v7whr_kube-system(4d19534f-dd8a-411c-bf09-f1512f0031bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d49131206e19a3e0ff45df4d33e51aee28fce077e6526e9f29d371e2d1f4a199\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-v7whr" podUID="4d19534f-dd8a-411c-bf09-f1512f0031bf" Mar 7 02:37:29.155072 containerd[1560]: time="2026-03-07T02:37:29.152212402Z" level=error msg="Failed to destroy network for sandbox \"8c22269e6cf6be6835b0807368970cc1cfb937894c5d1b202e7f702e8585f8fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.156490 containerd[1560]: time="2026-03-07T02:37:29.156259300Z" level=error msg="Failed to destroy network for sandbox \"579448a76fa674e1a6b51ea4e619f487f7a18104df919b2093ffff3c508312a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.165268 systemd[1]: run-netns-cni\x2d2d341848\x2d8ec0\x2d08e4\x2d641a\x2dbdff9e26dfce.mount: Deactivated successfully. 
Mar 7 02:37:29.168520 containerd[1560]: time="2026-03-07T02:37:29.168423058Z" level=error msg="Failed to destroy network for sandbox \"26a2aa84fdcaa42ee128af1581d01a3ad3c12828eeeba0591909cb68f8a4ed59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.173527 containerd[1560]: time="2026-03-07T02:37:29.173195464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-nspbn,Uid:78cc04e9-c73f-4360-b338-165caff8aacb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c22269e6cf6be6835b0807368970cc1cfb937894c5d1b202e7f702e8585f8fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.176093 kubelet[2804]: E0307 02:37:29.175985 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c22269e6cf6be6835b0807368970cc1cfb937894c5d1b202e7f702e8585f8fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.176184 kubelet[2804]: E0307 02:37:29.176097 2804 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c22269e6cf6be6835b0807368970cc1cfb937894c5d1b202e7f702e8585f8fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-nspbn" Mar 7 02:37:29.176184 kubelet[2804]: E0307 02:37:29.176128 2804 kuberuntime_manager.go:1558] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c22269e6cf6be6835b0807368970cc1cfb937894c5d1b202e7f702e8585f8fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-nspbn" Mar 7 02:37:29.176275 kubelet[2804]: E0307 02:37:29.176239 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-nspbn_kube-system(78cc04e9-c73f-4360-b338-165caff8aacb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-nspbn_kube-system(78cc04e9-c73f-4360-b338-165caff8aacb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c22269e6cf6be6835b0807368970cc1cfb937894c5d1b202e7f702e8585f8fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-nspbn" podUID="78cc04e9-c73f-4360-b338-165caff8aacb" Mar 7 02:37:29.182505 containerd[1560]: time="2026-03-07T02:37:29.182260848Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-lwklk,Uid:fc355c46-37cb-4d45-9184-9ae30d05953b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"579448a76fa674e1a6b51ea4e619f487f7a18104df919b2093ffff3c508312a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.187091 kubelet[2804]: E0307 02:37:29.184196 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"579448a76fa674e1a6b51ea4e619f487f7a18104df919b2093ffff3c508312a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.187091 kubelet[2804]: E0307 02:37:29.185847 2804 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"579448a76fa674e1a6b51ea4e619f487f7a18104df919b2093ffff3c508312a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-lwklk" Mar 7 02:37:29.187091 kubelet[2804]: E0307 02:37:29.186205 2804 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"579448a76fa674e1a6b51ea4e619f487f7a18104df919b2093ffff3c508312a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-lwklk" Mar 7 02:37:29.185071 systemd[1]: run-netns-cni\x2d497a390c\x2d1e9a\x2d2e8d\x2d8678\x2d29f540307901.mount: Deactivated successfully. 
Mar 7 02:37:29.187639 kubelet[2804]: E0307 02:37:29.186823 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-lwklk_calico-system(fc355c46-37cb-4d45-9184-9ae30d05953b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-lwklk_calico-system(fc355c46-37cb-4d45-9184-9ae30d05953b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"579448a76fa674e1a6b51ea4e619f487f7a18104df919b2093ffff3c508312a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-lwklk" podUID="fc355c46-37cb-4d45-9184-9ae30d05953b" Mar 7 02:37:29.185195 systemd[1]: run-netns-cni\x2d18a449e0\x2d17fd\x2d80f2\x2def10\x2d920ff86e844a.mount: Deactivated successfully. Mar 7 02:37:29.194165 containerd[1560]: time="2026-03-07T02:37:29.192275425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7f9f5b5-g9dhh,Uid:3dba2c1b-1cbe-4de8-a6b5-7a6b3d05b110,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26a2aa84fdcaa42ee128af1581d01a3ad3c12828eeeba0591909cb68f8a4ed59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.194417 kubelet[2804]: E0307 02:37:29.192654 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26a2aa84fdcaa42ee128af1581d01a3ad3c12828eeeba0591909cb68f8a4ed59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.194417 kubelet[2804]: 
E0307 02:37:29.192706 2804 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26a2aa84fdcaa42ee128af1581d01a3ad3c12828eeeba0591909cb68f8a4ed59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6c7f9f5b5-g9dhh" Mar 7 02:37:29.194417 kubelet[2804]: E0307 02:37:29.192729 2804 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26a2aa84fdcaa42ee128af1581d01a3ad3c12828eeeba0591909cb68f8a4ed59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6c7f9f5b5-g9dhh" Mar 7 02:37:29.194593 kubelet[2804]: E0307 02:37:29.192786 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c7f9f5b5-g9dhh_calico-system(3dba2c1b-1cbe-4de8-a6b5-7a6b3d05b110)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c7f9f5b5-g9dhh_calico-system(3dba2c1b-1cbe-4de8-a6b5-7a6b3d05b110)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26a2aa84fdcaa42ee128af1581d01a3ad3c12828eeeba0591909cb68f8a4ed59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6c7f9f5b5-g9dhh" podUID="3dba2c1b-1cbe-4de8-a6b5-7a6b3d05b110" Mar 7 02:37:29.205801 containerd[1560]: time="2026-03-07T02:37:29.205748464Z" level=error msg="Failed to destroy network for sandbox \"592764afac1e5d8be8a94bcaaaab39601e50e5f0bfd4689c39c258f11956d235\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.221026 containerd[1560]: time="2026-03-07T02:37:29.219270771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7f9f5b5-g9jfr,Uid:510194e6-ef76-4c44-9390-a72778675d85,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"592764afac1e5d8be8a94bcaaaab39601e50e5f0bfd4689c39c258f11956d235\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.221268 kubelet[2804]: E0307 02:37:29.221179 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"592764afac1e5d8be8a94bcaaaab39601e50e5f0bfd4689c39c258f11956d235\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.221268 kubelet[2804]: E0307 02:37:29.221257 2804 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"592764afac1e5d8be8a94bcaaaab39601e50e5f0bfd4689c39c258f11956d235\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6c7f9f5b5-g9jfr" Mar 7 02:37:29.221497 kubelet[2804]: E0307 02:37:29.221280 2804 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"592764afac1e5d8be8a94bcaaaab39601e50e5f0bfd4689c39c258f11956d235\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6c7f9f5b5-g9jfr" Mar 7 02:37:29.221497 kubelet[2804]: E0307 02:37:29.221423 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c7f9f5b5-g9jfr_calico-system(510194e6-ef76-4c44-9390-a72778675d85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c7f9f5b5-g9jfr_calico-system(510194e6-ef76-4c44-9390-a72778675d85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"592764afac1e5d8be8a94bcaaaab39601e50e5f0bfd4689c39c258f11956d235\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6c7f9f5b5-g9jfr" podUID="510194e6-ef76-4c44-9390-a72778675d85" Mar 7 02:37:29.236856 containerd[1560]: time="2026-03-07T02:37:29.236752439Z" level=error msg="Failed to destroy network for sandbox \"61f303711fff083dfed0468a31ee4b6e4664819e7a7970b43e360f8e89ad3598\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.255728 containerd[1560]: time="2026-03-07T02:37:29.255520061Z" level=error msg="Failed to destroy network for sandbox \"09017fb7587479a58aa244538f4402f1b7bee92cf4f74cba78cc77dab4be66f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.256816 containerd[1560]: time="2026-03-07T02:37:29.256772239Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6cdc5fb799-78qss,Uid:89eb1298-3a04-478f-a05e-924b3e06ed3b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61f303711fff083dfed0468a31ee4b6e4664819e7a7970b43e360f8e89ad3598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.258515 kubelet[2804]: E0307 02:37:29.258470 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61f303711fff083dfed0468a31ee4b6e4664819e7a7970b43e360f8e89ad3598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.259265 containerd[1560]: time="2026-03-07T02:37:29.259226029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745c6c977f-s7hmh,Uid:0acb54c3-06f2-4b58-b339-52bad27423b3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09017fb7587479a58aa244538f4402f1b7bee92cf4f74cba78cc77dab4be66f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.259872 kubelet[2804]: E0307 02:37:29.259592 2804 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61f303711fff083dfed0468a31ee4b6e4664819e7a7970b43e360f8e89ad3598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cdc5fb799-78qss" Mar 7 
02:37:29.259872 kubelet[2804]: E0307 02:37:29.262596 2804 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61f303711fff083dfed0468a31ee4b6e4664819e7a7970b43e360f8e89ad3598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cdc5fb799-78qss" Mar 7 02:37:29.259872 kubelet[2804]: E0307 02:37:29.262723 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6cdc5fb799-78qss_calico-system(89eb1298-3a04-478f-a05e-924b3e06ed3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6cdc5fb799-78qss_calico-system(89eb1298-3a04-478f-a05e-924b3e06ed3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61f303711fff083dfed0468a31ee4b6e4664819e7a7970b43e360f8e89ad3598\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cdc5fb799-78qss" podUID="89eb1298-3a04-478f-a05e-924b3e06ed3b" Mar 7 02:37:29.265446 kubelet[2804]: E0307 02:37:29.263719 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09017fb7587479a58aa244538f4402f1b7bee92cf4f74cba78cc77dab4be66f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.265446 kubelet[2804]: E0307 02:37:29.263866 2804 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09017fb7587479a58aa244538f4402f1b7bee92cf4f74cba78cc77dab4be66f2\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-745c6c977f-s7hmh" Mar 7 02:37:29.265446 kubelet[2804]: E0307 02:37:29.264029 2804 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09017fb7587479a58aa244538f4402f1b7bee92cf4f74cba78cc77dab4be66f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-745c6c977f-s7hmh" Mar 7 02:37:29.265711 kubelet[2804]: E0307 02:37:29.264283 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-745c6c977f-s7hmh_calico-system(0acb54c3-06f2-4b58-b339-52bad27423b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-745c6c977f-s7hmh_calico-system(0acb54c3-06f2-4b58-b339-52bad27423b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09017fb7587479a58aa244538f4402f1b7bee92cf4f74cba78cc77dab4be66f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-745c6c977f-s7hmh" podUID="0acb54c3-06f2-4b58-b339-52bad27423b3" Mar 7 02:37:29.314395 containerd[1560]: time="2026-03-07T02:37:29.313624292Z" level=info msg="StartContainer for \"80a2104ebb7ad068b38688d62c5309ccee19835342590e0c644fb2e5dee2ffb9\" returns successfully" Mar 7 02:37:29.651156 systemd[1]: Created slice kubepods-besteffort-podb5e01c21_f499_422c_9fbe_c4c5a2c9ac71.slice - libcontainer container 
kubepods-besteffort-podb5e01c21_f499_422c_9fbe_c4c5a2c9ac71.slice. Mar 7 02:37:29.680069 containerd[1560]: time="2026-03-07T02:37:29.679578747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wphnt,Uid:b5e01c21-f499-422c-9fbe-c4c5a2c9ac71,Namespace:calico-system,Attempt:0,}" Mar 7 02:37:29.841126 kubelet[2804]: I0307 02:37:29.840738 2804 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-fg5wh" podStartSLOduration=3.033803867 podStartE2EDuration="1m1.840616343s" podCreationTimestamp="2026-03-07 02:36:28 +0000 UTC" firstStartedPulling="2026-03-07 02:36:29.927977667 +0000 UTC m=+44.700757008" lastFinishedPulling="2026-03-07 02:37:28.734790153 +0000 UTC m=+103.507569484" observedRunningTime="2026-03-07 02:37:29.832136482 +0000 UTC m=+104.604915824" watchObservedRunningTime="2026-03-07 02:37:29.840616343 +0000 UTC m=+104.613395673" Mar 7 02:37:29.852128 systemd[1]: run-netns-cni\x2ddb705935\x2d3d7f\x2d14b0\x2d4455\x2dce7d843d8851.mount: Deactivated successfully. Mar 7 02:37:29.852256 systemd[1]: run-netns-cni\x2d89c0f60e\x2dd8ce\x2d7482\x2d37c4\x2db4e177c95f0a.mount: Deactivated successfully. Mar 7 02:37:29.852419 systemd[1]: run-netns-cni\x2dbd840dd9\x2db7aa\x2dca7f\x2d1fd5\x2d43392649471f.mount: Deactivated successfully. Mar 7 02:37:29.910139 containerd[1560]: time="2026-03-07T02:37:29.909250598Z" level=error msg="Failed to destroy network for sandbox \"81c60cafbce30f88632610f261605880544ffb1514815bef3fb0f7f5a3f1f703\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.916258 systemd[1]: run-netns-cni\x2d4fe9097a\x2d5a03\x2d8d6b\x2d9c6b\x2d35a232b696b2.mount: Deactivated successfully. 
Mar 7 02:37:29.921793 containerd[1560]: time="2026-03-07T02:37:29.921732135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wphnt,Uid:b5e01c21-f499-422c-9fbe-c4c5a2c9ac71,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"81c60cafbce30f88632610f261605880544ffb1514815bef3fb0f7f5a3f1f703\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.924382 kubelet[2804]: E0307 02:37:29.924017 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81c60cafbce30f88632610f261605880544ffb1514815bef3fb0f7f5a3f1f703\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 02:37:29.925376 kubelet[2804]: E0307 02:37:29.924671 2804 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81c60cafbce30f88632610f261605880544ffb1514815bef3fb0f7f5a3f1f703\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wphnt" Mar 7 02:37:29.925714 kubelet[2804]: E0307 02:37:29.925182 2804 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81c60cafbce30f88632610f261605880544ffb1514815bef3fb0f7f5a3f1f703\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wphnt" Mar 7 
02:37:29.928816 kubelet[2804]: E0307 02:37:29.927461 2804 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wphnt_calico-system(b5e01c21-f499-422c-9fbe-c4c5a2c9ac71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wphnt_calico-system(b5e01c21-f499-422c-9fbe-c4c5a2c9ac71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81c60cafbce30f88632610f261605880544ffb1514815bef3fb0f7f5a3f1f703\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wphnt" podUID="b5e01c21-f499-422c-9fbe-c4c5a2c9ac71" Mar 7 02:37:30.638069 kubelet[2804]: I0307 02:37:30.634436 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/89eb1298-3a04-478f-a05e-924b3e06ed3b-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/89eb1298-3a04-478f-a05e-924b3e06ed3b-whisker-backend-key-pair\") pod \"89eb1298-3a04-478f-a05e-924b3e06ed3b\" (UID: \"89eb1298-3a04-478f-a05e-924b3e06ed3b\") " Mar 7 02:37:30.638069 kubelet[2804]: I0307 02:37:30.634544 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/89eb1298-3a04-478f-a05e-924b3e06ed3b-nginx-config\" (UniqueName: \"kubernetes.io/configmap/89eb1298-3a04-478f-a05e-924b3e06ed3b-nginx-config\") pod \"89eb1298-3a04-478f-a05e-924b3e06ed3b\" (UID: \"89eb1298-3a04-478f-a05e-924b3e06ed3b\") " Mar 7 02:37:30.638069 kubelet[2804]: I0307 02:37:30.634605 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/89eb1298-3a04-478f-a05e-924b3e06ed3b-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89eb1298-3a04-478f-a05e-924b3e06ed3b-whisker-ca-bundle\") pod \"89eb1298-3a04-478f-a05e-924b3e06ed3b\" (UID: 
\"89eb1298-3a04-478f-a05e-924b3e06ed3b\") " Mar 7 02:37:30.638069 kubelet[2804]: I0307 02:37:30.634633 2804 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/89eb1298-3a04-478f-a05e-924b3e06ed3b-kube-api-access-btrjx\" (UniqueName: \"kubernetes.io/projected/89eb1298-3a04-478f-a05e-924b3e06ed3b-kube-api-access-btrjx\") pod \"89eb1298-3a04-478f-a05e-924b3e06ed3b\" (UID: \"89eb1298-3a04-478f-a05e-924b3e06ed3b\") " Mar 7 02:37:30.638069 kubelet[2804]: I0307 02:37:30.636530 2804 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89eb1298-3a04-478f-a05e-924b3e06ed3b-whisker-ca-bundle" pod "89eb1298-3a04-478f-a05e-924b3e06ed3b" (UID: "89eb1298-3a04-478f-a05e-924b3e06ed3b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 02:37:30.639156 kubelet[2804]: I0307 02:37:30.639085 2804 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89eb1298-3a04-478f-a05e-924b3e06ed3b-nginx-config" pod "89eb1298-3a04-478f-a05e-924b3e06ed3b" (UID: "89eb1298-3a04-478f-a05e-924b3e06ed3b"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 02:37:30.664751 systemd[1]: var-lib-kubelet-pods-89eb1298\x2d3a04\x2d478f\x2da05e\x2d924b3e06ed3b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 7 02:37:30.670232 systemd[1]: var-lib-kubelet-pods-89eb1298\x2d3a04\x2d478f\x2da05e\x2d924b3e06ed3b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbtrjx.mount: Deactivated successfully. Mar 7 02:37:30.675428 kubelet[2804]: I0307 02:37:30.673861 2804 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89eb1298-3a04-478f-a05e-924b3e06ed3b-kube-api-access-btrjx" pod "89eb1298-3a04-478f-a05e-924b3e06ed3b" (UID: "89eb1298-3a04-478f-a05e-924b3e06ed3b"). 
InnerVolumeSpecName "kube-api-access-btrjx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 02:37:30.676253 kubelet[2804]: I0307 02:37:30.676012 2804 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89eb1298-3a04-478f-a05e-924b3e06ed3b-whisker-backend-key-pair" pod "89eb1298-3a04-478f-a05e-924b3e06ed3b" (UID: "89eb1298-3a04-478f-a05e-924b3e06ed3b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 02:37:30.736273 kubelet[2804]: I0307 02:37:30.736171 2804 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89eb1298-3a04-478f-a05e-924b3e06ed3b-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 7 02:37:30.736591 kubelet[2804]: I0307 02:37:30.736533 2804 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-btrjx\" (UniqueName: \"kubernetes.io/projected/89eb1298-3a04-478f-a05e-924b3e06ed3b-kube-api-access-btrjx\") on node \"localhost\" DevicePath \"\"" Mar 7 02:37:30.736591 kubelet[2804]: I0307 02:37:30.736555 2804 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/89eb1298-3a04-478f-a05e-924b3e06ed3b-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 7 02:37:30.736591 kubelet[2804]: I0307 02:37:30.736568 2804 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/89eb1298-3a04-478f-a05e-924b3e06ed3b-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 7 02:37:30.805459 systemd[1]: Removed slice kubepods-besteffort-pod89eb1298_3a04_478f_a05e_924b3e06ed3b.slice - libcontainer container kubepods-besteffort-pod89eb1298_3a04_478f_a05e_924b3e06ed3b.slice. 
Mar 7 02:37:31.165563 systemd[1]: Created slice kubepods-besteffort-pode533d9c7_4529_4f65_9b38_5665eb0c9692.slice - libcontainer container kubepods-besteffort-pode533d9c7_4529_4f65_9b38_5665eb0c9692.slice. Mar 7 02:37:31.247630 kubelet[2804]: I0307 02:37:31.247586 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e533d9c7-4529-4f65-9b38-5665eb0c9692-nginx-config\") pod \"whisker-574cdbd48d-t2sn2\" (UID: \"e533d9c7-4529-4f65-9b38-5665eb0c9692\") " pod="calico-system/whisker-574cdbd48d-t2sn2" Mar 7 02:37:31.248522 kubelet[2804]: I0307 02:37:31.248177 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59glp\" (UniqueName: \"kubernetes.io/projected/e533d9c7-4529-4f65-9b38-5665eb0c9692-kube-api-access-59glp\") pod \"whisker-574cdbd48d-t2sn2\" (UID: \"e533d9c7-4529-4f65-9b38-5665eb0c9692\") " pod="calico-system/whisker-574cdbd48d-t2sn2" Mar 7 02:37:31.248522 kubelet[2804]: I0307 02:37:31.248236 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e533d9c7-4529-4f65-9b38-5665eb0c9692-whisker-ca-bundle\") pod \"whisker-574cdbd48d-t2sn2\" (UID: \"e533d9c7-4529-4f65-9b38-5665eb0c9692\") " pod="calico-system/whisker-574cdbd48d-t2sn2" Mar 7 02:37:31.248522 kubelet[2804]: I0307 02:37:31.248263 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e533d9c7-4529-4f65-9b38-5665eb0c9692-whisker-backend-key-pair\") pod \"whisker-574cdbd48d-t2sn2\" (UID: \"e533d9c7-4529-4f65-9b38-5665eb0c9692\") " pod="calico-system/whisker-574cdbd48d-t2sn2" Mar 7 02:37:31.507864 containerd[1560]: time="2026-03-07T02:37:31.501599321Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-574cdbd48d-t2sn2,Uid:e533d9c7-4529-4f65-9b38-5665eb0c9692,Namespace:calico-system,Attempt:0,}" Mar 7 02:37:31.628785 kubelet[2804]: I0307 02:37:31.628508 2804 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="89eb1298-3a04-478f-a05e-924b3e06ed3b" path="/var/lib/kubelet/pods/89eb1298-3a04-478f-a05e-924b3e06ed3b/volumes" Mar 7 02:37:32.080987 systemd-networkd[1458]: cali5a220eb21d5: Link UP Mar 7 02:37:32.081961 systemd-networkd[1458]: cali5a220eb21d5: Gained carrier Mar 7 02:37:32.167398 containerd[1560]: 2026-03-07 02:37:31.617 [ERROR][4041] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 02:37:32.167398 containerd[1560]: 2026-03-07 02:37:31.669 [INFO][4041] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--574cdbd48d--t2sn2-eth0 whisker-574cdbd48d- calico-system e533d9c7-4529-4f65-9b38-5665eb0c9692 1092 0 2026-03-07 02:37:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:574cdbd48d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-574cdbd48d-t2sn2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5a220eb21d5 [] [] }} ContainerID="308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" Namespace="calico-system" Pod="whisker-574cdbd48d-t2sn2" WorkloadEndpoint="localhost-k8s-whisker--574cdbd48d--t2sn2-" Mar 7 02:37:32.167398 containerd[1560]: 2026-03-07 02:37:31.669 [INFO][4041] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" Namespace="calico-system" Pod="whisker-574cdbd48d-t2sn2" WorkloadEndpoint="localhost-k8s-whisker--574cdbd48d--t2sn2-eth0" 
Mar 7 02:37:32.167398 containerd[1560]: 2026-03-07 02:37:31.780 [INFO][4067] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" HandleID="k8s-pod-network.308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" Workload="localhost-k8s-whisker--574cdbd48d--t2sn2-eth0" Mar 7 02:37:32.167754 containerd[1560]: 2026-03-07 02:37:31.808 [INFO][4067] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" HandleID="k8s-pod-network.308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" Workload="localhost-k8s-whisker--574cdbd48d--t2sn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ef30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-574cdbd48d-t2sn2", "timestamp":"2026-03-07 02:37:31.780822881 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002f6dc0)} Mar 7 02:37:32.167754 containerd[1560]: 2026-03-07 02:37:31.809 [INFO][4067] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:37:32.167754 containerd[1560]: 2026-03-07 02:37:31.810 [INFO][4067] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 02:37:32.167754 containerd[1560]: 2026-03-07 02:37:31.810 [INFO][4067] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:37:32.167754 containerd[1560]: 2026-03-07 02:37:31.843 [INFO][4067] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" host="localhost" Mar 7 02:37:32.167754 containerd[1560]: 2026-03-07 02:37:31.863 [INFO][4067] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:37:32.167754 containerd[1560]: 2026-03-07 02:37:31.888 [INFO][4067] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:37:32.167754 containerd[1560]: 2026-03-07 02:37:31.896 [INFO][4067] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:37:32.167754 containerd[1560]: 2026-03-07 02:37:31.906 [INFO][4067] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:37:32.167754 containerd[1560]: 2026-03-07 02:37:31.906 [INFO][4067] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" host="localhost" Mar 7 02:37:32.168239 containerd[1560]: 2026-03-07 02:37:31.911 [INFO][4067] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962 Mar 7 02:37:32.168239 containerd[1560]: 2026-03-07 02:37:31.929 [INFO][4067] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" host="localhost" Mar 7 02:37:32.168239 containerd[1560]: 2026-03-07 02:37:31.970 [INFO][4067] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" host="localhost" Mar 7 02:37:32.168239 containerd[1560]: 2026-03-07 02:37:31.972 [INFO][4067] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" host="localhost" Mar 7 02:37:32.168239 containerd[1560]: 2026-03-07 02:37:31.972 [INFO][4067] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:37:32.168239 containerd[1560]: 2026-03-07 02:37:31.972 [INFO][4067] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" HandleID="k8s-pod-network.308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" Workload="localhost-k8s-whisker--574cdbd48d--t2sn2-eth0" Mar 7 02:37:32.170821 containerd[1560]: 2026-03-07 02:37:31.985 [INFO][4041] cni-plugin/k8s.go 418: Populated endpoint ContainerID="308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" Namespace="calico-system" Pod="whisker-574cdbd48d-t2sn2" WorkloadEndpoint="localhost-k8s-whisker--574cdbd48d--t2sn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--574cdbd48d--t2sn2-eth0", GenerateName:"whisker-574cdbd48d-", Namespace:"calico-system", SelfLink:"", UID:"e533d9c7-4529-4f65-9b38-5665eb0c9692", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 37, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"574cdbd48d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-574cdbd48d-t2sn2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5a220eb21d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:37:32.170821 containerd[1560]: 2026-03-07 02:37:31.985 [INFO][4041] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" Namespace="calico-system" Pod="whisker-574cdbd48d-t2sn2" WorkloadEndpoint="localhost-k8s-whisker--574cdbd48d--t2sn2-eth0" Mar 7 02:37:32.171099 containerd[1560]: 2026-03-07 02:37:31.985 [INFO][4041] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a220eb21d5 ContainerID="308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" Namespace="calico-system" Pod="whisker-574cdbd48d-t2sn2" WorkloadEndpoint="localhost-k8s-whisker--574cdbd48d--t2sn2-eth0" Mar 7 02:37:32.171099 containerd[1560]: 2026-03-07 02:37:32.084 [INFO][4041] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" Namespace="calico-system" Pod="whisker-574cdbd48d-t2sn2" WorkloadEndpoint="localhost-k8s-whisker--574cdbd48d--t2sn2-eth0" Mar 7 02:37:32.171165 containerd[1560]: 2026-03-07 02:37:32.089 [INFO][4041] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" Namespace="calico-system" Pod="whisker-574cdbd48d-t2sn2" 
WorkloadEndpoint="localhost-k8s-whisker--574cdbd48d--t2sn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--574cdbd48d--t2sn2-eth0", GenerateName:"whisker-574cdbd48d-", Namespace:"calico-system", SelfLink:"", UID:"e533d9c7-4529-4f65-9b38-5665eb0c9692", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 37, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"574cdbd48d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962", Pod:"whisker-574cdbd48d-t2sn2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5a220eb21d5", MAC:"ca:b5:ad:ae:9d:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:37:32.171409 containerd[1560]: 2026-03-07 02:37:32.154 [INFO][4041] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" Namespace="calico-system" Pod="whisker-574cdbd48d-t2sn2" WorkloadEndpoint="localhost-k8s-whisker--574cdbd48d--t2sn2-eth0" Mar 7 02:37:32.485186 containerd[1560]: time="2026-03-07T02:37:32.484985291Z" level=info msg="connecting to shim 
308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962" address="unix:///run/containerd/s/8ec3cc7c03da8fe08a35b45fef510eb9f11a6461df0d2caadfc9d3dcdedee8a8" namespace=k8s.io protocol=ttrpc version=3 Mar 7 02:37:32.552300 systemd[1]: Started cri-containerd-308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962.scope - libcontainer container 308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962. Mar 7 02:37:32.596225 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:37:32.746837 containerd[1560]: time="2026-03-07T02:37:32.739243043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-574cdbd48d-t2sn2,Uid:e533d9c7-4529-4f65-9b38-5665eb0c9692,Namespace:calico-system,Attempt:0,} returns sandbox id \"308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962\"" Mar 7 02:37:32.759422 containerd[1560]: time="2026-03-07T02:37:32.759289247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 02:37:33.737845 systemd-networkd[1458]: cali5a220eb21d5: Gained IPv6LL Mar 7 02:37:33.891151 containerd[1560]: time="2026-03-07T02:37:33.891043340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:33.918802 containerd[1560]: time="2026-03-07T02:37:33.918629660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 7 02:37:33.931217 containerd[1560]: time="2026-03-07T02:37:33.927044495Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:33.937112 containerd[1560]: time="2026-03-07T02:37:33.936979912Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:33.937971 containerd[1560]: time="2026-03-07T02:37:33.937858949Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.178285391s" Mar 7 02:37:33.937971 containerd[1560]: time="2026-03-07T02:37:33.937958835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 02:37:33.953390 containerd[1560]: time="2026-03-07T02:37:33.953116056Z" level=info msg="CreateContainer within sandbox \"308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 02:37:33.977642 containerd[1560]: time="2026-03-07T02:37:33.977537088Z" level=info msg="Container 7916705e07d83c943e5209b8f466520ee4533b275204eba94b2999b13186dbfd: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:37:33.986551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount569753374.mount: Deactivated successfully. 
Mar 7 02:37:34.012704 containerd[1560]: time="2026-03-07T02:37:34.012087590Z" level=info msg="CreateContainer within sandbox \"308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7916705e07d83c943e5209b8f466520ee4533b275204eba94b2999b13186dbfd\"" Mar 7 02:37:34.019838 containerd[1560]: time="2026-03-07T02:37:34.018230488Z" level=info msg="StartContainer for \"7916705e07d83c943e5209b8f466520ee4533b275204eba94b2999b13186dbfd\"" Mar 7 02:37:34.031414 containerd[1560]: time="2026-03-07T02:37:34.031294071Z" level=info msg="connecting to shim 7916705e07d83c943e5209b8f466520ee4533b275204eba94b2999b13186dbfd" address="unix:///run/containerd/s/8ec3cc7c03da8fe08a35b45fef510eb9f11a6461df0d2caadfc9d3dcdedee8a8" protocol=ttrpc version=3 Mar 7 02:37:34.105758 systemd[1]: Started cri-containerd-7916705e07d83c943e5209b8f466520ee4533b275204eba94b2999b13186dbfd.scope - libcontainer container 7916705e07d83c943e5209b8f466520ee4533b275204eba94b2999b13186dbfd. Mar 7 02:37:34.310541 containerd[1560]: time="2026-03-07T02:37:34.310392457Z" level=info msg="StartContainer for \"7916705e07d83c943e5209b8f466520ee4533b275204eba94b2999b13186dbfd\" returns successfully" Mar 7 02:37:34.316203 containerd[1560]: time="2026-03-07T02:37:34.315997251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 02:37:34.935165 systemd-networkd[1458]: vxlan.calico: Link UP Mar 7 02:37:34.935177 systemd-networkd[1458]: vxlan.calico: Gained carrier Mar 7 02:37:36.271236 update_engine[1544]: I20260307 02:37:36.268162 1544 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 02:37:36.276052 update_engine[1544]: I20260307 02:37:36.275947 1544 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 02:37:36.297155 update_engine[1544]: I20260307 02:37:36.294967 1544 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 7 02:37:36.311475 update_engine[1544]: E20260307 02:37:36.307214 1544 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 02:37:36.311475 update_engine[1544]: I20260307 02:37:36.307386 1544 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 7 02:37:36.944663 systemd-networkd[1458]: vxlan.calico: Gained IPv6LL Mar 7 02:37:37.102165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3123238075.mount: Deactivated successfully. Mar 7 02:37:37.222012 containerd[1560]: time="2026-03-07T02:37:37.221720962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:37.227637 containerd[1560]: time="2026-03-07T02:37:37.227162430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 7 02:37:37.233921 containerd[1560]: time="2026-03-07T02:37:37.232058641Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:37.240979 containerd[1560]: time="2026-03-07T02:37:37.236986729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:37.240979 containerd[1560]: time="2026-03-07T02:37:37.237293755Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.92125688s" Mar 7 02:37:37.240979 containerd[1560]: 
time="2026-03-07T02:37:37.237427253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 7 02:37:37.257434 containerd[1560]: time="2026-03-07T02:37:37.256930301Z" level=info msg="CreateContainer within sandbox \"308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 02:37:37.296037 containerd[1560]: time="2026-03-07T02:37:37.295372140Z" level=info msg="Container 1e8c4b3d77cf1ef88f73f8e9741116c902d2b75e69abc04a278ec36a8e16d928: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:37:37.328173 containerd[1560]: time="2026-03-07T02:37:37.328073931Z" level=info msg="CreateContainer within sandbox \"308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1e8c4b3d77cf1ef88f73f8e9741116c902d2b75e69abc04a278ec36a8e16d928\"" Mar 7 02:37:37.340810 containerd[1560]: time="2026-03-07T02:37:37.340723315Z" level=info msg="StartContainer for \"1e8c4b3d77cf1ef88f73f8e9741116c902d2b75e69abc04a278ec36a8e16d928\"" Mar 7 02:37:37.347131 containerd[1560]: time="2026-03-07T02:37:37.346394103Z" level=info msg="connecting to shim 1e8c4b3d77cf1ef88f73f8e9741116c902d2b75e69abc04a278ec36a8e16d928" address="unix:///run/containerd/s/8ec3cc7c03da8fe08a35b45fef510eb9f11a6461df0d2caadfc9d3dcdedee8a8" protocol=ttrpc version=3 Mar 7 02:37:37.392569 systemd[1]: Started cri-containerd-1e8c4b3d77cf1ef88f73f8e9741116c902d2b75e69abc04a278ec36a8e16d928.scope - libcontainer container 1e8c4b3d77cf1ef88f73f8e9741116c902d2b75e69abc04a278ec36a8e16d928. 
Mar 7 02:37:37.532677 containerd[1560]: time="2026-03-07T02:37:37.532246119Z" level=info msg="StartContainer for \"1e8c4b3d77cf1ef88f73f8e9741116c902d2b75e69abc04a278ec36a8e16d928\" returns successfully" Mar 7 02:37:37.911952 kubelet[2804]: I0307 02:37:37.911539 2804 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-574cdbd48d-t2sn2" podStartSLOduration=2.427572472 podStartE2EDuration="6.911520274s" podCreationTimestamp="2026-03-07 02:37:31 +0000 UTC" firstStartedPulling="2026-03-07 02:37:32.758500093 +0000 UTC m=+107.531279423" lastFinishedPulling="2026-03-07 02:37:37.242447894 +0000 UTC m=+112.015227225" observedRunningTime="2026-03-07 02:37:37.903704963 +0000 UTC m=+112.676484294" watchObservedRunningTime="2026-03-07 02:37:37.911520274 +0000 UTC m=+112.684299615" Mar 7 02:37:40.636646 containerd[1560]: time="2026-03-07T02:37:40.634975825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wphnt,Uid:b5e01c21-f499-422c-9fbe-c4c5a2c9ac71,Namespace:calico-system,Attempt:0,}" Mar 7 02:37:41.328159 systemd-networkd[1458]: cali2e06b433e60: Link UP Mar 7 02:37:41.329070 systemd-networkd[1458]: cali2e06b433e60: Gained carrier Mar 7 02:37:41.465469 containerd[1560]: 2026-03-07 02:37:40.859 [INFO][4443] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wphnt-eth0 csi-node-driver- calico-system b5e01c21-f499-422c-9fbe-c4c5a2c9ac71 795 0 2026-03-07 02:36:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wphnt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2e06b433e60 [] [] }} 
ContainerID="0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" Namespace="calico-system" Pod="csi-node-driver-wphnt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wphnt-" Mar 7 02:37:41.465469 containerd[1560]: 2026-03-07 02:37:40.864 [INFO][4443] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" Namespace="calico-system" Pod="csi-node-driver-wphnt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wphnt-eth0" Mar 7 02:37:41.465469 containerd[1560]: 2026-03-07 02:37:41.022 [INFO][4458] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" HandleID="k8s-pod-network.0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" Workload="localhost-k8s-csi--node--driver--wphnt-eth0" Mar 7 02:37:41.465758 containerd[1560]: 2026-03-07 02:37:41.041 [INFO][4458] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" HandleID="k8s-pod-network.0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" Workload="localhost-k8s-csi--node--driver--wphnt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002774d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wphnt", "timestamp":"2026-03-07 02:37:41.02251918 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00024b1e0)} Mar 7 02:37:41.465758 containerd[1560]: 2026-03-07 02:37:41.041 [INFO][4458] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 02:37:41.465758 containerd[1560]: 2026-03-07 02:37:41.041 [INFO][4458] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:37:41.465758 containerd[1560]: 2026-03-07 02:37:41.041 [INFO][4458] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:37:41.465758 containerd[1560]: 2026-03-07 02:37:41.066 [INFO][4458] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" host="localhost" Mar 7 02:37:41.465758 containerd[1560]: 2026-03-07 02:37:41.118 [INFO][4458] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:37:41.465758 containerd[1560]: 2026-03-07 02:37:41.148 [INFO][4458] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:37:41.465758 containerd[1560]: 2026-03-07 02:37:41.154 [INFO][4458] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:37:41.465758 containerd[1560]: 2026-03-07 02:37:41.179 [INFO][4458] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:37:41.465758 containerd[1560]: 2026-03-07 02:37:41.179 [INFO][4458] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" host="localhost" Mar 7 02:37:41.466184 containerd[1560]: 2026-03-07 02:37:41.206 [INFO][4458] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06 Mar 7 02:37:41.466184 containerd[1560]: 2026-03-07 02:37:41.230 [INFO][4458] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" host="localhost" Mar 7 02:37:41.466184 containerd[1560]: 2026-03-07 02:37:41.287 [INFO][4458] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" host="localhost" Mar 7 02:37:41.466184 containerd[1560]: 2026-03-07 02:37:41.288 [INFO][4458] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" host="localhost" Mar 7 02:37:41.466184 containerd[1560]: 2026-03-07 02:37:41.288 [INFO][4458] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:37:41.466184 containerd[1560]: 2026-03-07 02:37:41.289 [INFO][4458] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" HandleID="k8s-pod-network.0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" Workload="localhost-k8s-csi--node--driver--wphnt-eth0" Mar 7 02:37:41.466460 containerd[1560]: 2026-03-07 02:37:41.311 [INFO][4443] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" Namespace="calico-system" Pod="csi-node-driver-wphnt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wphnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wphnt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b5e01c21-f499-422c-9fbe-c4c5a2c9ac71", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wphnt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e06b433e60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:37:41.466610 containerd[1560]: 2026-03-07 02:37:41.311 [INFO][4443] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" Namespace="calico-system" Pod="csi-node-driver-wphnt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wphnt-eth0" Mar 7 02:37:41.466610 containerd[1560]: 2026-03-07 02:37:41.311 [INFO][4443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e06b433e60 ContainerID="0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" Namespace="calico-system" Pod="csi-node-driver-wphnt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wphnt-eth0" Mar 7 02:37:41.466610 containerd[1560]: 2026-03-07 02:37:41.342 [INFO][4443] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" Namespace="calico-system" Pod="csi-node-driver-wphnt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wphnt-eth0" Mar 7 02:37:41.466711 containerd[1560]: 2026-03-07 02:37:41.347 [INFO][4443] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" Namespace="calico-system" Pod="csi-node-driver-wphnt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wphnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wphnt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b5e01c21-f499-422c-9fbe-c4c5a2c9ac71", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06", Pod:"csi-node-driver-wphnt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e06b433e60", MAC:"4e:d7:2f:66:45:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:37:41.466840 containerd[1560]: 2026-03-07 02:37:41.430 [INFO][4443] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" Namespace="calico-system" 
Pod="csi-node-driver-wphnt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wphnt-eth0" Mar 7 02:37:41.643859 containerd[1560]: time="2026-03-07T02:37:41.641687203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7f9f5b5-g9dhh,Uid:3dba2c1b-1cbe-4de8-a6b5-7a6b3d05b110,Namespace:calico-system,Attempt:0,}" Mar 7 02:37:41.721407 containerd[1560]: time="2026-03-07T02:37:41.721265739Z" level=info msg="connecting to shim 0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06" address="unix:///run/containerd/s/797b5f1886f78f08bd75221066e8ea0fe685db1ded9c13f20bed663039b3ed87" namespace=k8s.io protocol=ttrpc version=3 Mar 7 02:37:41.874445 systemd[1]: Started cri-containerd-0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06.scope - libcontainer container 0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06. Mar 7 02:37:41.948444 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:37:42.066812 containerd[1560]: time="2026-03-07T02:37:42.062996608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wphnt,Uid:b5e01c21-f499-422c-9fbe-c4c5a2c9ac71,Namespace:calico-system,Attempt:0,} returns sandbox id \"0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06\"" Mar 7 02:37:42.073200 containerd[1560]: time="2026-03-07T02:37:42.072271323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 02:37:42.314803 systemd-networkd[1458]: cali753223fc6c6: Link UP Mar 7 02:37:42.316567 systemd-networkd[1458]: cali753223fc6c6: Gained carrier Mar 7 02:37:42.393632 containerd[1560]: 2026-03-07 02:37:41.832 [INFO][4503] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c7f9f5b5--g9dhh-eth0 calico-apiserver-6c7f9f5b5- calico-system 3dba2c1b-1cbe-4de8-a6b5-7a6b3d05b110 1036 0 2026-03-07 02:36:27 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c7f9f5b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c7f9f5b5-g9dhh eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali753223fc6c6 [] [] }} ContainerID="f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" Namespace="calico-system" Pod="calico-apiserver-6c7f9f5b5-g9dhh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7f9f5b5--g9dhh-" Mar 7 02:37:42.393632 containerd[1560]: 2026-03-07 02:37:41.832 [INFO][4503] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" Namespace="calico-system" Pod="calico-apiserver-6c7f9f5b5-g9dhh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7f9f5b5--g9dhh-eth0" Mar 7 02:37:42.393632 containerd[1560]: 2026-03-07 02:37:41.973 [INFO][4548] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" HandleID="k8s-pod-network.f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" Workload="localhost-k8s-calico--apiserver--6c7f9f5b5--g9dhh-eth0" Mar 7 02:37:42.394043 containerd[1560]: 2026-03-07 02:37:42.010 [INFO][4548] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" HandleID="k8s-pod-network.f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" Workload="localhost-k8s-calico--apiserver--6c7f9f5b5--g9dhh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000384f20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6c7f9f5b5-g9dhh", "timestamp":"2026-03-07 02:37:41.973164112 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00022c000)} Mar 7 02:37:42.394043 containerd[1560]: 2026-03-07 02:37:42.010 [INFO][4548] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:37:42.394043 containerd[1560]: 2026-03-07 02:37:42.011 [INFO][4548] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:37:42.394043 containerd[1560]: 2026-03-07 02:37:42.011 [INFO][4548] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:37:42.394043 containerd[1560]: 2026-03-07 02:37:42.047 [INFO][4548] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" host="localhost" Mar 7 02:37:42.394043 containerd[1560]: 2026-03-07 02:37:42.080 [INFO][4548] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:37:42.394043 containerd[1560]: 2026-03-07 02:37:42.123 [INFO][4548] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:37:42.394043 containerd[1560]: 2026-03-07 02:37:42.158 [INFO][4548] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:37:42.394043 containerd[1560]: 2026-03-07 02:37:42.173 [INFO][4548] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:37:42.394043 containerd[1560]: 2026-03-07 02:37:42.173 [INFO][4548] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" host="localhost" Mar 7 02:37:42.394518 containerd[1560]: 2026-03-07 02:37:42.182 [INFO][4548] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0 Mar 7 02:37:42.394518 containerd[1560]: 2026-03-07 02:37:42.231 [INFO][4548] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" host="localhost" Mar 7 02:37:42.394518 containerd[1560]: 2026-03-07 02:37:42.283 [INFO][4548] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" host="localhost" Mar 7 02:37:42.394518 containerd[1560]: 2026-03-07 02:37:42.283 [INFO][4548] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" host="localhost" Mar 7 02:37:42.394518 containerd[1560]: 2026-03-07 02:37:42.283 [INFO][4548] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 02:37:42.394518 containerd[1560]: 2026-03-07 02:37:42.283 [INFO][4548] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" HandleID="k8s-pod-network.f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" Workload="localhost-k8s-calico--apiserver--6c7f9f5b5--g9dhh-eth0" Mar 7 02:37:42.394694 containerd[1560]: 2026-03-07 02:37:42.296 [INFO][4503] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" Namespace="calico-system" Pod="calico-apiserver-6c7f9f5b5-g9dhh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7f9f5b5--g9dhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c7f9f5b5--g9dhh-eth0", GenerateName:"calico-apiserver-6c7f9f5b5-", Namespace:"calico-system", SelfLink:"", UID:"3dba2c1b-1cbe-4de8-a6b5-7a6b3d05b110", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 36, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7f9f5b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c7f9f5b5-g9dhh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali753223fc6c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:37:42.396999 containerd[1560]: 2026-03-07 02:37:42.297 [INFO][4503] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" Namespace="calico-system" Pod="calico-apiserver-6c7f9f5b5-g9dhh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7f9f5b5--g9dhh-eth0" Mar 7 02:37:42.396999 containerd[1560]: 2026-03-07 02:37:42.297 [INFO][4503] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali753223fc6c6 ContainerID="f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" Namespace="calico-system" Pod="calico-apiserver-6c7f9f5b5-g9dhh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7f9f5b5--g9dhh-eth0" Mar 7 02:37:42.396999 containerd[1560]: 2026-03-07 02:37:42.315 [INFO][4503] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" Namespace="calico-system" Pod="calico-apiserver-6c7f9f5b5-g9dhh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7f9f5b5--g9dhh-eth0" Mar 7 02:37:42.397118 containerd[1560]: 2026-03-07 02:37:42.317 [INFO][4503] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" Namespace="calico-system" Pod="calico-apiserver-6c7f9f5b5-g9dhh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7f9f5b5--g9dhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c7f9f5b5--g9dhh-eth0", GenerateName:"calico-apiserver-6c7f9f5b5-", Namespace:"calico-system", SelfLink:"", 
UID:"3dba2c1b-1cbe-4de8-a6b5-7a6b3d05b110", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 36, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7f9f5b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0", Pod:"calico-apiserver-6c7f9f5b5-g9dhh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali753223fc6c6", MAC:"72:c9:ac:e9:8c:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:37:42.397260 containerd[1560]: 2026-03-07 02:37:42.375 [INFO][4503] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" Namespace="calico-system" Pod="calico-apiserver-6c7f9f5b5-g9dhh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7f9f5b5--g9dhh-eth0" Mar 7 02:37:42.577449 systemd-networkd[1458]: cali2e06b433e60: Gained IPv6LL Mar 7 02:37:42.627246 containerd[1560]: time="2026-03-07T02:37:42.627001339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-lwklk,Uid:fc355c46-37cb-4d45-9184-9ae30d05953b,Namespace:calico-system,Attempt:0,}" Mar 7 02:37:42.643452 kubelet[2804]: E0307 
02:37:42.642435 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:37:42.646131 containerd[1560]: time="2026-03-07T02:37:42.645661857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-nspbn,Uid:78cc04e9-c73f-4360-b338-165caff8aacb,Namespace:kube-system,Attempt:0,}" Mar 7 02:37:42.648253 containerd[1560]: time="2026-03-07T02:37:42.647005114Z" level=info msg="connecting to shim f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0" address="unix:///run/containerd/s/c4016a3673f9f864788e249ab44730d3aca8c1f24cff1c755869b83da4a45850" namespace=k8s.io protocol=ttrpc version=3 Mar 7 02:37:42.661403 kubelet[2804]: E0307 02:37:42.661288 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:37:42.665720 containerd[1560]: time="2026-03-07T02:37:42.662283648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-v7whr,Uid:4d19534f-dd8a-411c-bf09-f1512f0031bf,Namespace:kube-system,Attempt:0,}" Mar 7 02:37:42.665720 containerd[1560]: time="2026-03-07T02:37:42.665016650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7f9f5b5-g9jfr,Uid:510194e6-ef76-4c44-9390-a72778675d85,Namespace:calico-system,Attempt:0,}" Mar 7 02:37:42.813230 systemd[1]: Started cri-containerd-f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0.scope - libcontainer container f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0. 
Mar 7 02:37:43.032440 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:37:43.393293 containerd[1560]: time="2026-03-07T02:37:43.393116432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7f9f5b5-g9dhh,Uid:3dba2c1b-1cbe-4de8-a6b5-7a6b3d05b110,Namespace:calico-system,Attempt:0,} returns sandbox id \"f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0\"" Mar 7 02:37:43.528947 systemd-networkd[1458]: califc6e772f468: Link UP Mar 7 02:37:43.529241 systemd-networkd[1458]: califc6e772f468: Gained carrier Mar 7 02:37:43.701689 containerd[1560]: 2026-03-07 02:37:42.942 [INFO][4637] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--nspbn-eth0 coredns-7d764666f9- kube-system 78cc04e9-c73f-4360-b338-165caff8aacb 1041 0 2026-03-07 02:35:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-nspbn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califc6e772f468 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" Namespace="kube-system" Pod="coredns-7d764666f9-nspbn" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--nspbn-" Mar 7 02:37:43.701689 containerd[1560]: 2026-03-07 02:37:42.961 [INFO][4637] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" Namespace="kube-system" Pod="coredns-7d764666f9-nspbn" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--nspbn-eth0" Mar 7 02:37:43.701689 containerd[1560]: 2026-03-07 02:37:43.142 [INFO][4706] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" HandleID="k8s-pod-network.c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" Workload="localhost-k8s-coredns--7d764666f9--nspbn-eth0" Mar 7 02:37:43.707504 containerd[1560]: 2026-03-07 02:37:43.176 [INFO][4706] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" HandleID="k8s-pod-network.c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" Workload="localhost-k8s-coredns--7d764666f9--nspbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf540), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-nspbn", "timestamp":"2026-03-07 02:37:43.142200637 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000442f20)} Mar 7 02:37:43.707504 containerd[1560]: 2026-03-07 02:37:43.177 [INFO][4706] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:37:43.707504 containerd[1560]: 2026-03-07 02:37:43.178 [INFO][4706] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 02:37:43.707504 containerd[1560]: 2026-03-07 02:37:43.179 [INFO][4706] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:37:43.707504 containerd[1560]: 2026-03-07 02:37:43.213 [INFO][4706] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" host="localhost" Mar 7 02:37:43.707504 containerd[1560]: 2026-03-07 02:37:43.256 [INFO][4706] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:37:43.707504 containerd[1560]: 2026-03-07 02:37:43.306 [INFO][4706] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:37:43.707504 containerd[1560]: 2026-03-07 02:37:43.323 [INFO][4706] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:37:43.707504 containerd[1560]: 2026-03-07 02:37:43.356 [INFO][4706] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:37:43.707504 containerd[1560]: 2026-03-07 02:37:43.357 [INFO][4706] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" host="localhost" Mar 7 02:37:43.708183 containerd[1560]: 2026-03-07 02:37:43.368 [INFO][4706] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe Mar 7 02:37:43.708183 containerd[1560]: 2026-03-07 02:37:43.415 [INFO][4706] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" host="localhost" Mar 7 02:37:43.708183 containerd[1560]: 2026-03-07 02:37:43.487 [INFO][4706] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" host="localhost" Mar 7 02:37:43.708183 containerd[1560]: 2026-03-07 02:37:43.487 [INFO][4706] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" host="localhost" Mar 7 02:37:43.708183 containerd[1560]: 2026-03-07 02:37:43.488 [INFO][4706] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:37:43.708183 containerd[1560]: 2026-03-07 02:37:43.488 [INFO][4706] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" HandleID="k8s-pod-network.c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" Workload="localhost-k8s-coredns--7d764666f9--nspbn-eth0" Mar 7 02:37:43.708497 containerd[1560]: 2026-03-07 02:37:43.506 [INFO][4637] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" Namespace="kube-system" Pod="coredns-7d764666f9-nspbn" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--nspbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--nspbn-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"78cc04e9-c73f-4360-b338-165caff8aacb", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 35, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-nspbn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc6e772f468", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:37:43.708497 containerd[1560]: 2026-03-07 02:37:43.507 [INFO][4637] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" Namespace="kube-system" Pod="coredns-7d764666f9-nspbn" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--nspbn-eth0" Mar 7 02:37:43.708497 containerd[1560]: 2026-03-07 02:37:43.507 [INFO][4637] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc6e772f468 ContainerID="c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" Namespace="kube-system" Pod="coredns-7d764666f9-nspbn" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--nspbn-eth0" Mar 7 
02:37:43.708497 containerd[1560]: 2026-03-07 02:37:43.524 [INFO][4637] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" Namespace="kube-system" Pod="coredns-7d764666f9-nspbn" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--nspbn-eth0" Mar 7 02:37:43.708497 containerd[1560]: 2026-03-07 02:37:43.553 [INFO][4637] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" Namespace="kube-system" Pod="coredns-7d764666f9-nspbn" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--nspbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--nspbn-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"78cc04e9-c73f-4360-b338-165caff8aacb", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 35, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe", Pod:"coredns-7d764666f9-nspbn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc6e772f468", MAC:"6a:ec:4f:6f:f0:11", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:37:43.708497 containerd[1560]: 2026-03-07 02:37:43.634 [INFO][4637] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" Namespace="kube-system" Pod="coredns-7d764666f9-nspbn" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--nspbn-eth0" Mar 7 02:37:43.896088 containerd[1560]: time="2026-03-07T02:37:43.895712695Z" level=info msg="connecting to shim c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe" address="unix:///run/containerd/s/3d3b3b817c7f28c1fa6a4edde751f635fdfc487f6a1622897917c28134cdf5cb" namespace=k8s.io protocol=ttrpc version=3 Mar 7 02:37:43.917554 systemd-networkd[1458]: cali753223fc6c6: Gained IPv6LL Mar 7 02:37:43.990269 systemd-networkd[1458]: cali093fa694647: Link UP Mar 7 02:37:44.002853 systemd-networkd[1458]: cali093fa694647: Gained carrier Mar 7 02:37:44.114257 systemd[1]: Started cri-containerd-c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe.scope - libcontainer container c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe. 
Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:42.875 [INFO][4600] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--lwklk-eth0 goldmane-9f7667bb8- calico-system fc355c46-37cb-4d45-9184-9ae30d05953b 1038 0 2026-03-07 02:36:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-lwklk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali093fa694647 [] [] }} ContainerID="bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" Namespace="calico-system" Pod="goldmane-9f7667bb8-lwklk" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--lwklk-" Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:42.875 [INFO][4600] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" Namespace="calico-system" Pod="goldmane-9f7667bb8-lwklk" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--lwklk-eth0" Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:43.168 [INFO][4690] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" HandleID="k8s-pod-network.bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" Workload="localhost-k8s-goldmane--9f7667bb8--lwklk-eth0" Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:43.185 [INFO][4690] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" HandleID="k8s-pod-network.bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" Workload="localhost-k8s-goldmane--9f7667bb8--lwklk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00004f380), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-lwklk", "timestamp":"2026-03-07 02:37:43.16873737 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000225600)} Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:43.185 [INFO][4690] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:43.487 [INFO][4690] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:43.487 [INFO][4690] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:43.516 [INFO][4690] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" host="localhost" Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:43.577 [INFO][4690] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:43.666 [INFO][4690] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:43.726 [INFO][4690] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:43.739 [INFO][4690] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:43.740 [INFO][4690] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" host="localhost" Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:43.755 [INFO][4690] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:43.820 [INFO][4690] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" host="localhost" Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:43.891 [INFO][4690] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" host="localhost" Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:43.891 [INFO][4690] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" host="localhost" Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:43.891 [INFO][4690] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 02:37:44.122177 containerd[1560]: 2026-03-07 02:37:43.891 [INFO][4690] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" HandleID="k8s-pod-network.bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" Workload="localhost-k8s-goldmane--9f7667bb8--lwklk-eth0" Mar 7 02:37:44.135779 containerd[1560]: 2026-03-07 02:37:43.922 [INFO][4600] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" Namespace="calico-system" Pod="goldmane-9f7667bb8-lwklk" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--lwklk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--lwklk-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"fc355c46-37cb-4d45-9184-9ae30d05953b", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 36, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-lwklk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali093fa694647", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:37:44.135779 containerd[1560]: 2026-03-07 02:37:43.922 [INFO][4600] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" Namespace="calico-system" Pod="goldmane-9f7667bb8-lwklk" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--lwklk-eth0" Mar 7 02:37:44.135779 containerd[1560]: 2026-03-07 02:37:43.922 [INFO][4600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali093fa694647 ContainerID="bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" Namespace="calico-system" Pod="goldmane-9f7667bb8-lwklk" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--lwklk-eth0" Mar 7 02:37:44.135779 containerd[1560]: 2026-03-07 02:37:43.981 [INFO][4600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" Namespace="calico-system" Pod="goldmane-9f7667bb8-lwklk" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--lwklk-eth0" Mar 7 02:37:44.135779 containerd[1560]: 2026-03-07 02:37:43.982 [INFO][4600] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" Namespace="calico-system" Pod="goldmane-9f7667bb8-lwklk" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--lwklk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--lwklk-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"fc355c46-37cb-4d45-9184-9ae30d05953b", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 36, 27, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c", Pod:"goldmane-9f7667bb8-lwklk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali093fa694647", MAC:"ca:2f:fa:3f:c2:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:37:44.135779 containerd[1560]: 2026-03-07 02:37:44.087 [INFO][4600] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" Namespace="calico-system" Pod="goldmane-9f7667bb8-lwklk" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--lwklk-eth0" Mar 7 02:37:44.278428 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:37:44.310620 containerd[1560]: time="2026-03-07T02:37:44.309303496Z" level=info msg="connecting to shim bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c" address="unix:///run/containerd/s/ead2b23a642a5dfd7605ef4beaa270cf06f955189396ebe592807d40699a2077" namespace=k8s.io protocol=ttrpc version=3 Mar 7 02:37:44.315589 systemd-networkd[1458]: cali2da30cda1ef: Link UP Mar 7 02:37:44.318467 systemd-networkd[1458]: cali2da30cda1ef: Gained carrier Mar 7 02:37:44.462388 
containerd[1560]: 2026-03-07 02:37:42.956 [INFO][4622] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--v7whr-eth0 coredns-7d764666f9- kube-system 4d19534f-dd8a-411c-bf09-f1512f0031bf 1030 0 2026-03-07 02:35:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-v7whr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2da30cda1ef [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" Namespace="kube-system" Pod="coredns-7d764666f9-v7whr" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--v7whr-" Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:42.958 [INFO][4622] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" Namespace="kube-system" Pod="coredns-7d764666f9-v7whr" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--v7whr-eth0" Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:43.214 [INFO][4699] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" HandleID="k8s-pod-network.a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" Workload="localhost-k8s-coredns--7d764666f9--v7whr-eth0" Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:43.252 [INFO][4699] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" HandleID="k8s-pod-network.a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" Workload="localhost-k8s-coredns--7d764666f9--v7whr-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003262c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-v7whr", "timestamp":"2026-03-07 02:37:43.214062957 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005042c0)} Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:43.253 [INFO][4699] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:43.897 [INFO][4699] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:43.897 [INFO][4699] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:43.965 [INFO][4699] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" host="localhost" Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:44.006 [INFO][4699] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:44.048 [INFO][4699] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:44.080 [INFO][4699] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:44.105 [INFO][4699] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:44.105 [INFO][4699] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" host="localhost" Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:44.141 [INFO][4699] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37 Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:44.181 [INFO][4699] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" host="localhost" Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:44.241 [INFO][4699] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" host="localhost" Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:44.241 [INFO][4699] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" host="localhost" Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:44.242 [INFO][4699] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 02:37:44.462388 containerd[1560]: 2026-03-07 02:37:44.242 [INFO][4699] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" HandleID="k8s-pod-network.a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" Workload="localhost-k8s-coredns--7d764666f9--v7whr-eth0" Mar 7 02:37:44.463611 containerd[1560]: 2026-03-07 02:37:44.291 [INFO][4622] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" Namespace="kube-system" Pod="coredns-7d764666f9-v7whr" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--v7whr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--v7whr-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"4d19534f-dd8a-411c-bf09-f1512f0031bf", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 35, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-v7whr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2da30cda1ef", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:37:44.463611 containerd[1560]: 2026-03-07 02:37:44.291 [INFO][4622] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" Namespace="kube-system" Pod="coredns-7d764666f9-v7whr" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--v7whr-eth0" Mar 7 02:37:44.463611 containerd[1560]: 2026-03-07 02:37:44.291 [INFO][4622] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2da30cda1ef ContainerID="a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" Namespace="kube-system" Pod="coredns-7d764666f9-v7whr" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--v7whr-eth0" Mar 7 02:37:44.463611 containerd[1560]: 2026-03-07 02:37:44.320 [INFO][4622] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" Namespace="kube-system" Pod="coredns-7d764666f9-v7whr" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--v7whr-eth0" Mar 7 02:37:44.463611 containerd[1560]: 2026-03-07 02:37:44.324 [INFO][4622] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" Namespace="kube-system" Pod="coredns-7d764666f9-v7whr" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--v7whr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--v7whr-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"4d19534f-dd8a-411c-bf09-f1512f0031bf", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 35, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37", Pod:"coredns-7d764666f9-v7whr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2da30cda1ef", MAC:"12:f2:66:d6:4a:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:37:44.463611 containerd[1560]: 2026-03-07 02:37:44.447 [INFO][4622] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" Namespace="kube-system" Pod="coredns-7d764666f9-v7whr" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--v7whr-eth0" Mar 7 02:37:44.496549 systemd[1]: Started cri-containerd-bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c.scope - libcontainer container bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c. Mar 7 02:37:44.528026 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:37:44.540457 containerd[1560]: time="2026-03-07T02:37:44.538070455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-nspbn,Uid:78cc04e9-c73f-4360-b338-165caff8aacb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe\"" Mar 7 02:37:44.542393 kubelet[2804]: E0307 02:37:44.540650 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:37:44.577719 containerd[1560]: time="2026-03-07T02:37:44.577676990Z" level=info msg="CreateContainer within sandbox \"c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 02:37:44.620200 containerd[1560]: time="2026-03-07T02:37:44.619931330Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745c6c977f-s7hmh,Uid:0acb54c3-06f2-4b58-b339-52bad27423b3,Namespace:calico-system,Attempt:0,}" Mar 7 02:37:44.644416 containerd[1560]: time="2026-03-07T02:37:44.644256415Z" level=info msg="connecting to shim a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37" address="unix:///run/containerd/s/df8562308b456cbd998c3c910008ac9bb47f6aec086c99dc0f7b5c247bd41c9d" namespace=k8s.io protocol=ttrpc version=3 Mar 7 02:37:44.676047 systemd-networkd[1458]: califaa7b2227f7: Link UP Mar 7 02:37:44.684268 systemd-networkd[1458]: califaa7b2227f7: Gained carrier Mar 7 02:37:44.732446 containerd[1560]: time="2026-03-07T02:37:44.732394096Z" level=info msg="Container b9eac1e36193786b0d289ef673643058047222e073e273704dabdaa0955c5c81: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:37:44.776224 containerd[1560]: time="2026-03-07T02:37:44.776095918Z" level=info msg="CreateContainer within sandbox \"c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b9eac1e36193786b0d289ef673643058047222e073e273704dabdaa0955c5c81\"" Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:43.392 [INFO][4660] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c7f9f5b5--g9jfr-eth0 calico-apiserver-6c7f9f5b5- calico-system 510194e6-ef76-4c44-9390-a72778675d85 1033 0 2026-03-07 02:36:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c7f9f5b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c7f9f5b5-g9jfr eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] califaa7b2227f7 [] [] }} 
ContainerID="2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" Namespace="calico-system" Pod="calico-apiserver-6c7f9f5b5-g9jfr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7f9f5b5--g9jfr-" Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:43.392 [INFO][4660] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" Namespace="calico-system" Pod="calico-apiserver-6c7f9f5b5-g9jfr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7f9f5b5--g9jfr-eth0" Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:43.761 [INFO][4729] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" HandleID="k8s-pod-network.2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" Workload="localhost-k8s-calico--apiserver--6c7f9f5b5--g9jfr-eth0" Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:43.822 [INFO][4729] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" HandleID="k8s-pod-network.2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" Workload="localhost-k8s-calico--apiserver--6c7f9f5b5--g9jfr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6c7f9f5b5-g9jfr", "timestamp":"2026-03-07 02:37:43.761178618 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003ec2c0)} Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:43.823 [INFO][4729] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:44.251 [INFO][4729] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:44.251 [INFO][4729] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:44.286 [INFO][4729] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" host="localhost" Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:44.333 [INFO][4729] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:44.444 [INFO][4729] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:44.459 [INFO][4729] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:44.489 [INFO][4729] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:44.489 [INFO][4729] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" host="localhost" Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:44.505 [INFO][4729] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0 Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:44.548 [INFO][4729] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" host="localhost" Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:44.645 [INFO][4729] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" host="localhost" Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:44.645 [INFO][4729] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" host="localhost" Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:44.645 [INFO][4729] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:37:44.779166 containerd[1560]: 2026-03-07 02:37:44.645 [INFO][4729] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" HandleID="k8s-pod-network.2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" Workload="localhost-k8s-calico--apiserver--6c7f9f5b5--g9jfr-eth0" Mar 7 02:37:44.780092 containerd[1560]: 2026-03-07 02:37:44.661 [INFO][4660] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" Namespace="calico-system" Pod="calico-apiserver-6c7f9f5b5-g9jfr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7f9f5b5--g9jfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c7f9f5b5--g9jfr-eth0", GenerateName:"calico-apiserver-6c7f9f5b5-", Namespace:"calico-system", SelfLink:"", UID:"510194e6-ef76-4c44-9390-a72778675d85", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7f9f5b5", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c7f9f5b5-g9jfr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califaa7b2227f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:37:44.780092 containerd[1560]: 2026-03-07 02:37:44.664 [INFO][4660] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" Namespace="calico-system" Pod="calico-apiserver-6c7f9f5b5-g9jfr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7f9f5b5--g9jfr-eth0" Mar 7 02:37:44.780092 containerd[1560]: 2026-03-07 02:37:44.664 [INFO][4660] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califaa7b2227f7 ContainerID="2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" Namespace="calico-system" Pod="calico-apiserver-6c7f9f5b5-g9jfr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7f9f5b5--g9jfr-eth0" Mar 7 02:37:44.780092 containerd[1560]: 2026-03-07 02:37:44.694 [INFO][4660] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" Namespace="calico-system" Pod="calico-apiserver-6c7f9f5b5-g9jfr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7f9f5b5--g9jfr-eth0" Mar 7 02:37:44.780092 containerd[1560]: 2026-03-07 02:37:44.699 [INFO][4660] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" Namespace="calico-system" Pod="calico-apiserver-6c7f9f5b5-g9jfr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7f9f5b5--g9jfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c7f9f5b5--g9jfr-eth0", GenerateName:"calico-apiserver-6c7f9f5b5-", Namespace:"calico-system", SelfLink:"", UID:"510194e6-ef76-4c44-9390-a72778675d85", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7f9f5b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0", Pod:"calico-apiserver-6c7f9f5b5-g9jfr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califaa7b2227f7", MAC:"7a:79:0a:bb:63:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:37:44.780092 containerd[1560]: 2026-03-07 02:37:44.757 [INFO][4660] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" Namespace="calico-system" Pod="calico-apiserver-6c7f9f5b5-g9jfr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7f9f5b5--g9jfr-eth0" Mar 7 02:37:44.789850 containerd[1560]: time="2026-03-07T02:37:44.789614919Z" level=info msg="StartContainer for \"b9eac1e36193786b0d289ef673643058047222e073e273704dabdaa0955c5c81\"" Mar 7 02:37:44.794112 containerd[1560]: time="2026-03-07T02:37:44.793987723Z" level=info msg="connecting to shim b9eac1e36193786b0d289ef673643058047222e073e273704dabdaa0955c5c81" address="unix:///run/containerd/s/3d3b3b817c7f28c1fa6a4edde751f635fdfc487f6a1622897917c28134cdf5cb" protocol=ttrpc version=3 Mar 7 02:37:44.799611 containerd[1560]: time="2026-03-07T02:37:44.799191496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-lwklk,Uid:fc355c46-37cb-4d45-9184-9ae30d05953b,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c\"" Mar 7 02:37:44.919585 systemd[1]: Started cri-containerd-a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37.scope - libcontainer container a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37. Mar 7 02:37:44.987219 systemd[1]: Started cri-containerd-b9eac1e36193786b0d289ef673643058047222e073e273704dabdaa0955c5c81.scope - libcontainer container b9eac1e36193786b0d289ef673643058047222e073e273704dabdaa0955c5c81. 
Mar 7 02:37:45.067155 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:37:45.108554 containerd[1560]: time="2026-03-07T02:37:45.108507990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:45.117231 containerd[1560]: time="2026-03-07T02:37:45.113520432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 7 02:37:45.124410 containerd[1560]: time="2026-03-07T02:37:45.122472604Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:45.126698 containerd[1560]: time="2026-03-07T02:37:45.126666350Z" level=info msg="connecting to shim 2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0" address="unix:///run/containerd/s/813df4097ff33f2c6efc0338531ea8a400beee18cb68593838ed3ca1922eee54" namespace=k8s.io protocol=ttrpc version=3 Mar 7 02:37:45.135461 systemd-networkd[1458]: califc6e772f468: Gained IPv6LL Mar 7 02:37:45.166539 containerd[1560]: time="2026-03-07T02:37:45.166478567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:45.186174 containerd[1560]: time="2026-03-07T02:37:45.170078337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 3.09602155s" Mar 7 02:37:45.205819 containerd[1560]: time="2026-03-07T02:37:45.195409501Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 7 02:37:45.213745 containerd[1560]: time="2026-03-07T02:37:45.213706721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 02:37:45.264742 containerd[1560]: time="2026-03-07T02:37:45.264678849Z" level=info msg="CreateContainer within sandbox \"0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 02:37:45.313145 systemd[1]: Started cri-containerd-2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0.scope - libcontainer container 2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0. Mar 7 02:37:45.341523 containerd[1560]: time="2026-03-07T02:37:45.340781015Z" level=info msg="Container da6a73fb0951ef5b525e558370341bfffd009f57e99f762e139cb541ffaecb9d: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:37:45.378090 containerd[1560]: time="2026-03-07T02:37:45.378049029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-v7whr,Uid:4d19534f-dd8a-411c-bf09-f1512f0031bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37\"" Mar 7 02:37:45.382932 kubelet[2804]: E0307 02:37:45.379508 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:37:45.406518 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:37:45.408043 containerd[1560]: time="2026-03-07T02:37:45.406238667Z" level=info msg="CreateContainer within sandbox \"0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"da6a73fb0951ef5b525e558370341bfffd009f57e99f762e139cb541ffaecb9d\"" Mar 7 02:37:45.408725 containerd[1560]: time="2026-03-07T02:37:45.408537060Z" level=info msg="CreateContainer within sandbox \"a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 02:37:45.412182 containerd[1560]: time="2026-03-07T02:37:45.412077838Z" level=info msg="StartContainer for \"da6a73fb0951ef5b525e558370341bfffd009f57e99f762e139cb541ffaecb9d\"" Mar 7 02:37:45.423981 containerd[1560]: time="2026-03-07T02:37:45.423410996Z" level=info msg="connecting to shim da6a73fb0951ef5b525e558370341bfffd009f57e99f762e139cb541ffaecb9d" address="unix:///run/containerd/s/797b5f1886f78f08bd75221066e8ea0fe685db1ded9c13f20bed663039b3ed87" protocol=ttrpc version=3 Mar 7 02:37:45.437737 containerd[1560]: time="2026-03-07T02:37:45.437703344Z" level=info msg="StartContainer for \"b9eac1e36193786b0d289ef673643058047222e073e273704dabdaa0955c5c81\" returns successfully" Mar 7 02:37:45.501214 containerd[1560]: time="2026-03-07T02:37:45.501172643Z" level=info msg="Container d210d46f61a2ac895df3acd9c8c2b2b0d35b174a0df11140949feb567b1918c1: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:37:45.551210 containerd[1560]: time="2026-03-07T02:37:45.551112191Z" level=info msg="CreateContainer within sandbox \"a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d210d46f61a2ac895df3acd9c8c2b2b0d35b174a0df11140949feb567b1918c1\"" Mar 7 02:37:45.561051 containerd[1560]: time="2026-03-07T02:37:45.561021345Z" level=info msg="StartContainer for \"d210d46f61a2ac895df3acd9c8c2b2b0d35b174a0df11140949feb567b1918c1\"" Mar 7 02:37:45.573629 containerd[1560]: time="2026-03-07T02:37:45.573590732Z" level=info msg="connecting to shim d210d46f61a2ac895df3acd9c8c2b2b0d35b174a0df11140949feb567b1918c1" 
address="unix:///run/containerd/s/df8562308b456cbd998c3c910008ac9bb47f6aec086c99dc0f7b5c247bd41c9d" protocol=ttrpc version=3 Mar 7 02:37:45.585493 systemd-networkd[1458]: cali2da30cda1ef: Gained IPv6LL Mar 7 02:37:45.606123 systemd[1]: Started cri-containerd-da6a73fb0951ef5b525e558370341bfffd009f57e99f762e139cb541ffaecb9d.scope - libcontainer container da6a73fb0951ef5b525e558370341bfffd009f57e99f762e139cb541ffaecb9d. Mar 7 02:37:45.677052 systemd[1]: Started cri-containerd-d210d46f61a2ac895df3acd9c8c2b2b0d35b174a0df11140949feb567b1918c1.scope - libcontainer container d210d46f61a2ac895df3acd9c8c2b2b0d35b174a0df11140949feb567b1918c1. Mar 7 02:37:45.770688 systemd-networkd[1458]: cali093fa694647: Gained IPv6LL Mar 7 02:37:45.797731 systemd-networkd[1458]: cali743f7b8064a: Link UP Mar 7 02:37:45.799400 systemd-networkd[1458]: cali743f7b8064a: Gained carrier Mar 7 02:37:45.836111 systemd-networkd[1458]: califaa7b2227f7: Gained IPv6LL Mar 7 02:37:45.838401 containerd[1560]: time="2026-03-07T02:37:45.837232809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7f9f5b5-g9jfr,Uid:510194e6-ef76-4c44-9390-a72778675d85,Namespace:calico-system,Attempt:0,} returns sandbox id \"2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0\"" Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:44.912 [INFO][4889] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--745c6c977f--s7hmh-eth0 calico-kube-controllers-745c6c977f- calico-system 0acb54c3-06f2-4b58-b339-52bad27423b3 1035 0 2026-03-07 02:36:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:745c6c977f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-745c6c977f-s7hmh eth0 calico-kube-controllers [] 
[] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali743f7b8064a [] [] }} ContainerID="3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" Namespace="calico-system" Pod="calico-kube-controllers-745c6c977f-s7hmh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--745c6c977f--s7hmh-" Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:44.915 [INFO][4889] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" Namespace="calico-system" Pod="calico-kube-controllers-745c6c977f-s7hmh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--745c6c977f--s7hmh-eth0" Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:45.270 [INFO][4957] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" HandleID="k8s-pod-network.3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" Workload="localhost-k8s-calico--kube--controllers--745c6c977f--s7hmh-eth0" Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:45.310 [INFO][4957] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" HandleID="k8s-pod-network.3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" Workload="localhost-k8s-calico--kube--controllers--745c6c977f--s7hmh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f580), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-745c6c977f-s7hmh", "timestamp":"2026-03-07 02:37:45.27076846 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004da2c0)} Mar 7 02:37:45.898750 
containerd[1560]: 2026-03-07 02:37:45.316 [INFO][4957] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:45.316 [INFO][4957] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:45.316 [INFO][4957] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:45.355 [INFO][4957] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" host="localhost" Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:45.422 [INFO][4957] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:45.516 [INFO][4957] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:45.542 [INFO][4957] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:45.605 [INFO][4957] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:45.606 [INFO][4957] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" host="localhost" Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:45.628 [INFO][4957] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:45.676 [INFO][4957] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" host="localhost" Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:45.762 [INFO][4957] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" host="localhost" Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:45.763 [INFO][4957] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" host="localhost" Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:45.764 [INFO][4957] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 02:37:45.898750 containerd[1560]: 2026-03-07 02:37:45.764 [INFO][4957] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" HandleID="k8s-pod-network.3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" Workload="localhost-k8s-calico--kube--controllers--745c6c977f--s7hmh-eth0" Mar 7 02:37:45.904276 containerd[1560]: 2026-03-07 02:37:45.780 [INFO][4889] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" Namespace="calico-system" Pod="calico-kube-controllers-745c6c977f-s7hmh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--745c6c977f--s7hmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--745c6c977f--s7hmh-eth0", GenerateName:"calico-kube-controllers-745c6c977f-", Namespace:"calico-system", SelfLink:"", UID:"0acb54c3-06f2-4b58-b339-52bad27423b3", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 36, 29, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"745c6c977f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-745c6c977f-s7hmh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali743f7b8064a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:37:45.904276 containerd[1560]: 2026-03-07 02:37:45.781 [INFO][4889] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" Namespace="calico-system" Pod="calico-kube-controllers-745c6c977f-s7hmh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--745c6c977f--s7hmh-eth0" Mar 7 02:37:45.904276 containerd[1560]: 2026-03-07 02:37:45.781 [INFO][4889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali743f7b8064a ContainerID="3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" Namespace="calico-system" Pod="calico-kube-controllers-745c6c977f-s7hmh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--745c6c977f--s7hmh-eth0" Mar 7 02:37:45.904276 containerd[1560]: 2026-03-07 02:37:45.818 [INFO][4889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" Namespace="calico-system" Pod="calico-kube-controllers-745c6c977f-s7hmh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--745c6c977f--s7hmh-eth0" Mar 7 02:37:45.904276 containerd[1560]: 2026-03-07 02:37:45.819 [INFO][4889] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" Namespace="calico-system" Pod="calico-kube-controllers-745c6c977f-s7hmh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--745c6c977f--s7hmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--745c6c977f--s7hmh-eth0", GenerateName:"calico-kube-controllers-745c6c977f-", Namespace:"calico-system", SelfLink:"", UID:"0acb54c3-06f2-4b58-b339-52bad27423b3", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 2, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"745c6c977f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e", Pod:"calico-kube-controllers-745c6c977f-s7hmh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali743f7b8064a", MAC:"26:73:cd:1b:af:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 02:37:45.904276 containerd[1560]: 2026-03-07 02:37:45.874 [INFO][4889] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" Namespace="calico-system" Pod="calico-kube-controllers-745c6c977f-s7hmh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--745c6c977f--s7hmh-eth0" Mar 7 02:37:45.915672 containerd[1560]: time="2026-03-07T02:37:45.915498884Z" level=info msg="StartContainer for \"d210d46f61a2ac895df3acd9c8c2b2b0d35b174a0df11140949feb567b1918c1\" returns successfully" Mar 7 02:37:46.231737 containerd[1560]: time="2026-03-07T02:37:46.229056165Z" level=info msg="connecting to shim 3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e" address="unix:///run/containerd/s/3240009c1b75068d133c52f17a25321328af84f902caf88fe9e506401cac2e03" namespace=k8s.io protocol=ttrpc version=3 Mar 7 02:37:46.232224 kubelet[2804]: E0307 02:37:46.231851 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:37:46.266933 update_engine[1544]: I20260307 02:37:46.264206 1544 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 02:37:46.266933 update_engine[1544]: I20260307 02:37:46.264919 1544 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 02:37:46.266933 update_engine[1544]: I20260307 02:37:46.266296 1544 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 7 02:37:46.303189 update_engine[1544]: E20260307 02:37:46.298723 1544 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 02:37:46.303189 update_engine[1544]: I20260307 02:37:46.298938 1544 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 7 02:37:46.347590 kubelet[2804]: E0307 02:37:46.346944 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:37:46.374408 systemd[1]: Started cri-containerd-3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e.scope - libcontainer container 3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e. Mar 7 02:37:46.386998 kubelet[2804]: I0307 02:37:46.386148 2804 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-v7whr" podStartSLOduration=116.386130547 podStartE2EDuration="1m56.386130547s" podCreationTimestamp="2026-03-07 02:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:37:46.381491809 +0000 UTC m=+121.154271151" watchObservedRunningTime="2026-03-07 02:37:46.386130547 +0000 UTC m=+121.158909878" Mar 7 02:37:46.509292 kubelet[2804]: I0307 02:37:46.501801 2804 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-nspbn" podStartSLOduration=116.501783565 podStartE2EDuration="1m56.501783565s" podCreationTimestamp="2026-03-07 02:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:37:46.495657158 +0000 UTC m=+121.268436489" watchObservedRunningTime="2026-03-07 02:37:46.501783565 +0000 UTC m=+121.274562926" Mar 7 02:37:46.576630 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, 
ignoring: No such device or address Mar 7 02:37:46.766686 containerd[1560]: time="2026-03-07T02:37:46.761747165Z" level=info msg="StartContainer for \"da6a73fb0951ef5b525e558370341bfffd009f57e99f762e139cb541ffaecb9d\" returns successfully" Mar 7 02:37:47.023667 containerd[1560]: time="2026-03-07T02:37:47.020400536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745c6c977f-s7hmh,Uid:0acb54c3-06f2-4b58-b339-52bad27423b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e\"" Mar 7 02:37:47.415448 kubelet[2804]: E0307 02:37:47.414658 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:37:47.419851 kubelet[2804]: E0307 02:37:47.419826 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:37:47.564008 systemd-networkd[1458]: cali743f7b8064a: Gained IPv6LL Mar 7 02:37:48.418100 kubelet[2804]: E0307 02:37:48.417537 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:37:48.422752 kubelet[2804]: E0307 02:37:48.419615 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:37:54.228709 containerd[1560]: time="2026-03-07T02:37:54.228465878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:54.235115 containerd[1560]: time="2026-03-07T02:37:54.234998366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active 
requests=0, bytes read=48415780" Mar 7 02:37:54.244102 containerd[1560]: time="2026-03-07T02:37:54.243689290Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:54.269173 containerd[1560]: time="2026-03-07T02:37:54.262735878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:37:54.271670 containerd[1560]: time="2026-03-07T02:37:54.270402211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 9.031270153s" Mar 7 02:37:54.271670 containerd[1560]: time="2026-03-07T02:37:54.270489765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 02:37:54.279681 containerd[1560]: time="2026-03-07T02:37:54.279010051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 02:37:54.301402 containerd[1560]: time="2026-03-07T02:37:54.300627427Z" level=info msg="CreateContainer within sandbox \"f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 02:37:54.356074 containerd[1560]: time="2026-03-07T02:37:54.353721510Z" level=info msg="Container 2959c3e27bc0fe4ac81c0c16acb5461e810e45a90afd6bb83013a7cb8bfcf2ba: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:37:54.397716 containerd[1560]: time="2026-03-07T02:37:54.397469117Z" level=info 
msg="CreateContainer within sandbox \"f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2959c3e27bc0fe4ac81c0c16acb5461e810e45a90afd6bb83013a7cb8bfcf2ba\"" Mar 7 02:37:54.398508 containerd[1560]: time="2026-03-07T02:37:54.398477585Z" level=info msg="StartContainer for \"2959c3e27bc0fe4ac81c0c16acb5461e810e45a90afd6bb83013a7cb8bfcf2ba\"" Mar 7 02:37:54.400576 containerd[1560]: time="2026-03-07T02:37:54.400497235Z" level=info msg="connecting to shim 2959c3e27bc0fe4ac81c0c16acb5461e810e45a90afd6bb83013a7cb8bfcf2ba" address="unix:///run/containerd/s/c4016a3673f9f864788e249ab44730d3aca8c1f24cff1c755869b83da4a45850" protocol=ttrpc version=3 Mar 7 02:37:54.514749 systemd[1]: Started cri-containerd-2959c3e27bc0fe4ac81c0c16acb5461e810e45a90afd6bb83013a7cb8bfcf2ba.scope - libcontainer container 2959c3e27bc0fe4ac81c0c16acb5461e810e45a90afd6bb83013a7cb8bfcf2ba. Mar 7 02:37:54.616855 kubelet[2804]: E0307 02:37:54.616712 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:37:54.740290 containerd[1560]: time="2026-03-07T02:37:54.739468717Z" level=info msg="StartContainer for \"2959c3e27bc0fe4ac81c0c16acb5461e810e45a90afd6bb83013a7cb8bfcf2ba\" returns successfully" Mar 7 02:37:55.602850 kubelet[2804]: I0307 02:37:55.602595 2804 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6c7f9f5b5-g9dhh" podStartSLOduration=77.723718416 podStartE2EDuration="1m28.602579917s" podCreationTimestamp="2026-03-07 02:36:27 +0000 UTC" firstStartedPulling="2026-03-07 02:37:43.396651942 +0000 UTC m=+118.169431283" lastFinishedPulling="2026-03-07 02:37:54.275513453 +0000 UTC m=+129.048292784" observedRunningTime="2026-03-07 02:37:55.602062936 +0000 UTC m=+130.374842267" watchObservedRunningTime="2026-03-07 
02:37:55.602579917 +0000 UTC m=+130.375359267" Mar 7 02:37:56.265429 update_engine[1544]: I20260307 02:37:56.265158 1544 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 02:37:56.265429 update_engine[1544]: I20260307 02:37:56.265244 1544 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 02:37:56.267102 update_engine[1544]: I20260307 02:37:56.266477 1544 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 02:37:56.290469 update_engine[1544]: E20260307 02:37:56.288296 1544 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 02:37:56.290469 update_engine[1544]: I20260307 02:37:56.288435 1544 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 7 02:37:56.290469 update_engine[1544]: I20260307 02:37:56.288446 1544 omaha_request_action.cc:617] Omaha request response: Mar 7 02:37:56.290469 update_engine[1544]: E20260307 02:37:56.288533 1544 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 7 02:37:56.314659 update_engine[1544]: I20260307 02:37:56.314129 1544 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 7 02:37:56.314659 update_engine[1544]: I20260307 02:37:56.314162 1544 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 02:37:56.314659 update_engine[1544]: I20260307 02:37:56.314172 1544 update_attempter.cc:306] Processing Done. Mar 7 02:37:56.314659 update_engine[1544]: E20260307 02:37:56.314191 1544 update_attempter.cc:619] Update failed. 
Mar 7 02:37:56.314659 update_engine[1544]: I20260307 02:37:56.314239 1544 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 7 02:37:56.314659 update_engine[1544]: I20260307 02:37:56.314249 1544 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 7 02:37:56.314659 update_engine[1544]: I20260307 02:37:56.314258 1544 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 7 02:37:56.316625 update_engine[1544]: I20260307 02:37:56.316547 1544 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 02:37:56.316830 update_engine[1544]: I20260307 02:37:56.316787 1544 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 02:37:56.316830 update_engine[1544]: I20260307 02:37:56.316803 1544 omaha_request_action.cc:272] Request: Mar 7 02:37:56.316830 update_engine[1544]: Mar 7 02:37:56.316830 update_engine[1544]: Mar 7 02:37:56.316830 update_engine[1544]: Mar 7 02:37:56.316830 update_engine[1544]: Mar 7 02:37:56.316830 update_engine[1544]: Mar 7 02:37:56.316830 update_engine[1544]: Mar 7 02:37:56.316830 update_engine[1544]: I20260307 02:37:56.316813 1544 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 02:37:56.317252 update_engine[1544]: I20260307 02:37:56.316844 1544 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 02:37:56.317286 locksmithd[1594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 7 02:37:56.318641 update_engine[1544]: I20260307 02:37:56.317672 1544 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 7 02:37:56.337563 update_engine[1544]: E20260307 02:37:56.337466 1544 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 02:37:56.337618 update_engine[1544]: I20260307 02:37:56.337559 1544 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 7 02:37:56.337618 update_engine[1544]: I20260307 02:37:56.337572 1544 omaha_request_action.cc:617] Omaha request response: Mar 7 02:37:56.337692 update_engine[1544]: I20260307 02:37:56.337581 1544 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 02:37:56.337692 update_engine[1544]: I20260307 02:37:56.337657 1544 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 02:37:56.337692 update_engine[1544]: I20260307 02:37:56.337666 1544 update_attempter.cc:306] Processing Done. Mar 7 02:37:56.337692 update_engine[1544]: I20260307 02:37:56.337674 1544 update_attempter.cc:310] Error event sent. Mar 7 02:37:56.337805 update_engine[1544]: I20260307 02:37:56.337687 1544 update_check_scheduler.cc:74] Next update check in 44m52s Mar 7 02:37:56.340687 locksmithd[1594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 7 02:37:57.577995 kubelet[2804]: I0307 02:37:57.577963 2804 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 02:37:58.614118 kubelet[2804]: E0307 02:37:58.613798 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:38:01.086651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1778100269.mount: Deactivated successfully. 
Mar 7 02:38:04.566394 containerd[1560]: time="2026-03-07T02:38:04.566009157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:38:04.576658 containerd[1560]: time="2026-03-07T02:38:04.576493701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 7 02:38:04.590040 containerd[1560]: time="2026-03-07T02:38:04.589150078Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:38:04.607234 containerd[1560]: time="2026-03-07T02:38:04.603530005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:38:04.607234 containerd[1560]: time="2026-03-07T02:38:04.604640140Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 10.325593981s" Mar 7 02:38:04.607234 containerd[1560]: time="2026-03-07T02:38:04.604674484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 7 02:38:04.620634 containerd[1560]: time="2026-03-07T02:38:04.615978445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 02:38:04.669932 containerd[1560]: time="2026-03-07T02:38:04.667198277Z" level=info msg="CreateContainer within sandbox \"bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 02:38:04.769276 containerd[1560]: time="2026-03-07T02:38:04.769154375Z" level=info msg="Container 5f0502672d99b63a88240e750a153edd873c25b26c4f315c215f7230254cfb3e: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:38:04.776190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1261760323.mount: Deactivated successfully. Mar 7 02:38:04.897578 containerd[1560]: time="2026-03-07T02:38:04.880469349Z" level=info msg="CreateContainer within sandbox \"bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"5f0502672d99b63a88240e750a153edd873c25b26c4f315c215f7230254cfb3e\"" Mar 7 02:38:04.897578 containerd[1560]: time="2026-03-07T02:38:04.889851626Z" level=info msg="StartContainer for \"5f0502672d99b63a88240e750a153edd873c25b26c4f315c215f7230254cfb3e\"" Mar 7 02:38:04.904099 containerd[1560]: time="2026-03-07T02:38:04.902626533Z" level=info msg="connecting to shim 5f0502672d99b63a88240e750a153edd873c25b26c4f315c215f7230254cfb3e" address="unix:///run/containerd/s/ead2b23a642a5dfd7605ef4beaa270cf06f955189396ebe592807d40699a2077" protocol=ttrpc version=3 Mar 7 02:38:04.969551 containerd[1560]: time="2026-03-07T02:38:04.967651962Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:38:04.975457 containerd[1560]: time="2026-03-07T02:38:04.975081724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 02:38:04.984406 containerd[1560]: time="2026-03-07T02:38:04.983633128Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 367.617184ms" Mar 7 02:38:04.984406 containerd[1560]: time="2026-03-07T02:38:04.983675608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 02:38:04.990790 containerd[1560]: time="2026-03-07T02:38:04.990563298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 02:38:05.006013 containerd[1560]: time="2026-03-07T02:38:05.005977235Z" level=info msg="CreateContainer within sandbox \"2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 02:38:05.079754 systemd[1]: Started cri-containerd-5f0502672d99b63a88240e750a153edd873c25b26c4f315c215f7230254cfb3e.scope - libcontainer container 5f0502672d99b63a88240e750a153edd873c25b26c4f315c215f7230254cfb3e. Mar 7 02:38:05.140781 containerd[1560]: time="2026-03-07T02:38:05.139840295Z" level=info msg="Container 673774c1213ee1937c41b008c811a531fc665db48b7ef4e13d7f00d44bf321a4: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:38:05.173153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994497518.mount: Deactivated successfully. 
Mar 7 02:38:05.253024 containerd[1560]: time="2026-03-07T02:38:05.251800921Z" level=info msg="CreateContainer within sandbox \"2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"673774c1213ee1937c41b008c811a531fc665db48b7ef4e13d7f00d44bf321a4\"" Mar 7 02:38:05.258122 containerd[1560]: time="2026-03-07T02:38:05.257229045Z" level=info msg="StartContainer for \"673774c1213ee1937c41b008c811a531fc665db48b7ef4e13d7f00d44bf321a4\"" Mar 7 02:38:05.266096 containerd[1560]: time="2026-03-07T02:38:05.265299822Z" level=info msg="connecting to shim 673774c1213ee1937c41b008c811a531fc665db48b7ef4e13d7f00d44bf321a4" address="unix:///run/containerd/s/813df4097ff33f2c6efc0338531ea8a400beee18cb68593838ed3ca1922eee54" protocol=ttrpc version=3 Mar 7 02:38:05.441140 systemd[1]: Started cri-containerd-673774c1213ee1937c41b008c811a531fc665db48b7ef4e13d7f00d44bf321a4.scope - libcontainer container 673774c1213ee1937c41b008c811a531fc665db48b7ef4e13d7f00d44bf321a4. 
Mar 7 02:38:05.563979 containerd[1560]: time="2026-03-07T02:38:05.563663699Z" level=info msg="StartContainer for \"5f0502672d99b63a88240e750a153edd873c25b26c4f315c215f7230254cfb3e\" returns successfully" Mar 7 02:38:05.992822 kubelet[2804]: I0307 02:38:05.990240 2804 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-lwklk" podStartSLOduration=79.194013992 podStartE2EDuration="1m38.99022074s" podCreationTimestamp="2026-03-07 02:36:27 +0000 UTC" firstStartedPulling="2026-03-07 02:37:44.8187931 +0000 UTC m=+119.591572441" lastFinishedPulling="2026-03-07 02:38:04.614999848 +0000 UTC m=+139.387779189" observedRunningTime="2026-03-07 02:38:05.947651164 +0000 UTC m=+140.720430505" watchObservedRunningTime="2026-03-07 02:38:05.99022074 +0000 UTC m=+140.763000071" Mar 7 02:38:06.098986 containerd[1560]: time="2026-03-07T02:38:06.098818169Z" level=info msg="StartContainer for \"673774c1213ee1937c41b008c811a531fc665db48b7ef4e13d7f00d44bf321a4\" returns successfully" Mar 7 02:38:06.805472 kubelet[2804]: I0307 02:38:06.801717 2804 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6c7f9f5b5-g9jfr" podStartSLOduration=81.667070925 podStartE2EDuration="1m40.801574414s" podCreationTimestamp="2026-03-07 02:36:26 +0000 UTC" firstStartedPulling="2026-03-07 02:37:45.853497253 +0000 UTC m=+120.626276595" lastFinishedPulling="2026-03-07 02:38:04.988000753 +0000 UTC m=+139.760780084" observedRunningTime="2026-03-07 02:38:06.800652365 +0000 UTC m=+141.573431717" watchObservedRunningTime="2026-03-07 02:38:06.801574414 +0000 UTC m=+141.574353766" Mar 7 02:38:08.018277 containerd[1560]: time="2026-03-07T02:38:08.016207758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:38:08.022194 containerd[1560]: time="2026-03-07T02:38:08.022148041Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 7 02:38:08.027755 containerd[1560]: time="2026-03-07T02:38:08.027676323Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:38:08.047241 containerd[1560]: time="2026-03-07T02:38:08.043290985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:38:08.047671 containerd[1560]: time="2026-03-07T02:38:08.047537863Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 3.056931035s" Mar 7 02:38:08.047928 containerd[1560]: time="2026-03-07T02:38:08.047746413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 7 02:38:08.058498 containerd[1560]: time="2026-03-07T02:38:08.058470515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 02:38:08.071160 containerd[1560]: time="2026-03-07T02:38:08.069864965Z" level=info msg="CreateContainer within sandbox \"0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 02:38:08.144063 containerd[1560]: time="2026-03-07T02:38:08.137763866Z" level=info msg="Container 
6280461cc584b32642e581c8dad9c0a2b17f7ad24eb0cd4d4372037091870580: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:38:08.274394 containerd[1560]: time="2026-03-07T02:38:08.273857974Z" level=info msg="CreateContainer within sandbox \"0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6280461cc584b32642e581c8dad9c0a2b17f7ad24eb0cd4d4372037091870580\"" Mar 7 02:38:08.276102 containerd[1560]: time="2026-03-07T02:38:08.275223754Z" level=info msg="StartContainer for \"6280461cc584b32642e581c8dad9c0a2b17f7ad24eb0cd4d4372037091870580\"" Mar 7 02:38:08.277570 containerd[1560]: time="2026-03-07T02:38:08.277436005Z" level=info msg="connecting to shim 6280461cc584b32642e581c8dad9c0a2b17f7ad24eb0cd4d4372037091870580" address="unix:///run/containerd/s/797b5f1886f78f08bd75221066e8ea0fe685db1ded9c13f20bed663039b3ed87" protocol=ttrpc version=3 Mar 7 02:38:08.539735 systemd[1]: Started cri-containerd-6280461cc584b32642e581c8dad9c0a2b17f7ad24eb0cd4d4372037091870580.scope - libcontainer container 6280461cc584b32642e581c8dad9c0a2b17f7ad24eb0cd4d4372037091870580. 
Mar 7 02:38:09.198556 containerd[1560]: time="2026-03-07T02:38:09.198235122Z" level=info msg="StartContainer for \"6280461cc584b32642e581c8dad9c0a2b17f7ad24eb0cd4d4372037091870580\" returns successfully" Mar 7 02:38:10.008412 kubelet[2804]: I0307 02:38:10.007622 2804 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-wphnt" podStartSLOduration=75.023097878 podStartE2EDuration="1m41.007605041s" podCreationTimestamp="2026-03-07 02:36:29 +0000 UTC" firstStartedPulling="2026-03-07 02:37:42.071135439 +0000 UTC m=+116.843914770" lastFinishedPulling="2026-03-07 02:38:08.055642601 +0000 UTC m=+142.828421933" observedRunningTime="2026-03-07 02:38:09.996541076 +0000 UTC m=+144.769320427" watchObservedRunningTime="2026-03-07 02:38:10.007605041 +0000 UTC m=+144.780384373" Mar 7 02:38:10.243046 kubelet[2804]: I0307 02:38:10.242770 2804 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 02:38:10.247080 kubelet[2804]: I0307 02:38:10.246153 2804 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 02:38:18.613264 kubelet[2804]: E0307 02:38:18.612277 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:38:19.071076 containerd[1560]: time="2026-03-07T02:38:19.070858987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:38:19.075034 containerd[1560]: time="2026-03-07T02:38:19.074705478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 7 02:38:19.084151 containerd[1560]: 
time="2026-03-07T02:38:19.081507077Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:38:19.099253 containerd[1560]: time="2026-03-07T02:38:19.097133110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:38:19.099253 containerd[1560]: time="2026-03-07T02:38:19.098150620Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 11.039534284s" Mar 7 02:38:19.099253 containerd[1560]: time="2026-03-07T02:38:19.098180286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 7 02:38:19.224494 containerd[1560]: time="2026-03-07T02:38:19.222031019Z" level=info msg="CreateContainer within sandbox \"3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 02:38:19.270613 containerd[1560]: time="2026-03-07T02:38:19.269778602Z" level=info msg="Container c0f27aa621c9017bda734f3cd17d9d2f692ccd965f6e415cae003e4d13e61099: CDI devices from CRI Config.CDIDevices: []" Mar 7 02:38:19.277049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3260465680.mount: Deactivated successfully. 
Mar 7 02:38:19.320520 containerd[1560]: time="2026-03-07T02:38:19.320264183Z" level=info msg="CreateContainer within sandbox \"3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c0f27aa621c9017bda734f3cd17d9d2f692ccd965f6e415cae003e4d13e61099\"" Mar 7 02:38:19.330288 containerd[1560]: time="2026-03-07T02:38:19.324823967Z" level=info msg="StartContainer for \"c0f27aa621c9017bda734f3cd17d9d2f692ccd965f6e415cae003e4d13e61099\"" Mar 7 02:38:19.339698 containerd[1560]: time="2026-03-07T02:38:19.338111664Z" level=info msg="connecting to shim c0f27aa621c9017bda734f3cd17d9d2f692ccd965f6e415cae003e4d13e61099" address="unix:///run/containerd/s/3240009c1b75068d133c52f17a25321328af84f902caf88fe9e506401cac2e03" protocol=ttrpc version=3 Mar 7 02:38:19.478635 systemd[1]: Started cri-containerd-c0f27aa621c9017bda734f3cd17d9d2f692ccd965f6e415cae003e4d13e61099.scope - libcontainer container c0f27aa621c9017bda734f3cd17d9d2f692ccd965f6e415cae003e4d13e61099. 
Mar 7 02:38:19.834512 containerd[1560]: time="2026-03-07T02:38:19.830155856Z" level=info msg="StartContainer for \"c0f27aa621c9017bda734f3cd17d9d2f692ccd965f6e415cae003e4d13e61099\" returns successfully" Mar 7 02:38:20.829810 kubelet[2804]: I0307 02:38:20.818575 2804 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-745c6c977f-s7hmh" podStartSLOduration=79.769830586 podStartE2EDuration="1m51.818552959s" podCreationTimestamp="2026-03-07 02:36:29 +0000 UTC" firstStartedPulling="2026-03-07 02:37:47.05049013 +0000 UTC m=+121.823269461" lastFinishedPulling="2026-03-07 02:38:19.099212502 +0000 UTC m=+153.871991834" observedRunningTime="2026-03-07 02:38:20.313586761 +0000 UTC m=+155.086366092" watchObservedRunningTime="2026-03-07 02:38:20.818552959 +0000 UTC m=+155.591332289" Mar 7 02:38:32.624535 kubelet[2804]: E0307 02:38:32.621298 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:38:33.617103 kubelet[2804]: E0307 02:38:33.615841 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:38:51.150579 kubelet[2804]: E0307 02:38:51.148276 2804 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.488s" Mar 7 02:38:58.613402 kubelet[2804]: E0307 02:38:58.613063 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:39:15.631188 kubelet[2804]: E0307 02:39:15.629928 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:39:17.651038 
kubelet[2804]: E0307 02:39:17.650392 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:39:22.612780 kubelet[2804]: E0307 02:39:22.612644 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:39:27.472592 systemd[1]: Started sshd@9-10.0.0.118:22-10.0.0.1:40064.service - OpenSSH per-connection server daemon (10.0.0.1:40064). Mar 7 02:39:27.863485 sshd[5833]: Accepted publickey for core from 10.0.0.1 port 40064 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ Mar 7 02:39:27.872100 sshd-session[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:39:27.904675 systemd-logind[1539]: New session 10 of user core. Mar 7 02:39:27.924638 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 02:39:29.417379 sshd[5837]: Connection closed by 10.0.0.1 port 40064 Mar 7 02:39:29.417564 sshd-session[5833]: pam_unix(sshd:session): session closed for user core Mar 7 02:39:29.434439 systemd[1]: sshd@9-10.0.0.118:22-10.0.0.1:40064.service: Deactivated successfully. Mar 7 02:39:29.445848 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 02:39:29.469293 systemd-logind[1539]: Session 10 logged out. Waiting for processes to exit. Mar 7 02:39:29.479283 systemd-logind[1539]: Removed session 10. Mar 7 02:39:29.622577 kubelet[2804]: E0307 02:39:29.616854 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:39:34.476160 systemd[1]: Started sshd@10-10.0.0.118:22-10.0.0.1:37956.service - OpenSSH per-connection server daemon (10.0.0.1:37956). 
Mar 7 02:39:34.710846 sshd[5945]: Accepted publickey for core from 10.0.0.1 port 37956 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ Mar 7 02:39:34.717032 sshd-session[5945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:39:34.746744 systemd-logind[1539]: New session 11 of user core. Mar 7 02:39:34.759771 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 7 02:39:35.239571 sshd[5948]: Connection closed by 10.0.0.1 port 37956 Mar 7 02:39:35.240390 sshd-session[5945]: pam_unix(sshd:session): session closed for user core Mar 7 02:39:35.266731 systemd[1]: sshd@10-10.0.0.118:22-10.0.0.1:37956.service: Deactivated successfully. Mar 7 02:39:35.280081 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 02:39:35.288919 systemd-logind[1539]: Session 11 logged out. Waiting for processes to exit. Mar 7 02:39:35.303594 systemd-logind[1539]: Removed session 11. Mar 7 02:39:40.302815 systemd[1]: Started sshd@11-10.0.0.118:22-10.0.0.1:37966.service - OpenSSH per-connection server daemon (10.0.0.1:37966). Mar 7 02:39:40.458930 sshd[5986]: Accepted publickey for core from 10.0.0.1 port 37966 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ Mar 7 02:39:40.464243 sshd-session[5986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:39:40.487867 systemd-logind[1539]: New session 12 of user core. Mar 7 02:39:40.504044 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 02:39:40.908707 sshd[5989]: Connection closed by 10.0.0.1 port 37966 Mar 7 02:39:40.907097 sshd-session[5986]: pam_unix(sshd:session): session closed for user core Mar 7 02:39:40.924484 systemd[1]: sshd@11-10.0.0.118:22-10.0.0.1:37966.service: Deactivated successfully. Mar 7 02:39:40.929624 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 02:39:40.931541 systemd-logind[1539]: Session 12 logged out. Waiting for processes to exit. 
Mar 7 02:39:40.948236 systemd-logind[1539]: Removed session 12. Mar 7 02:39:42.612477 kubelet[2804]: E0307 02:39:42.612282 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:39:45.926157 systemd[1]: Started sshd@12-10.0.0.118:22-10.0.0.1:40998.service - OpenSSH per-connection server daemon (10.0.0.1:40998). Mar 7 02:39:46.099200 sshd[6008]: Accepted publickey for core from 10.0.0.1 port 40998 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ Mar 7 02:39:46.103435 sshd-session[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:39:46.134393 systemd-logind[1539]: New session 13 of user core. Mar 7 02:39:46.138734 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 7 02:39:46.491449 sshd[6011]: Connection closed by 10.0.0.1 port 40998 Mar 7 02:39:46.496645 sshd-session[6008]: pam_unix(sshd:session): session closed for user core Mar 7 02:39:46.511797 systemd[1]: sshd@12-10.0.0.118:22-10.0.0.1:40998.service: Deactivated successfully. Mar 7 02:39:46.520484 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 02:39:46.524611 systemd-logind[1539]: Session 13 logged out. Waiting for processes to exit. Mar 7 02:39:46.530139 systemd-logind[1539]: Removed session 13. Mar 7 02:39:49.614086 kubelet[2804]: E0307 02:39:49.613702 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:39:51.522214 systemd[1]: Started sshd@13-10.0.0.118:22-10.0.0.1:41010.service - OpenSSH per-connection server daemon (10.0.0.1:41010). 
Mar 7 02:39:51.685113 sshd[6048]: Accepted publickey for core from 10.0.0.1 port 41010 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ Mar 7 02:39:51.690781 sshd-session[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:39:51.735448 systemd-logind[1539]: New session 14 of user core. Mar 7 02:39:51.774846 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 7 02:39:52.232561 sshd[6051]: Connection closed by 10.0.0.1 port 41010 Mar 7 02:39:52.237094 sshd-session[6048]: pam_unix(sshd:session): session closed for user core Mar 7 02:39:52.280921 systemd[1]: sshd@13-10.0.0.118:22-10.0.0.1:41010.service: Deactivated successfully. Mar 7 02:39:52.286724 systemd[1]: session-14.scope: Deactivated successfully. Mar 7 02:39:52.294045 systemd-logind[1539]: Session 14 logged out. Waiting for processes to exit. Mar 7 02:39:52.298990 systemd-logind[1539]: Removed session 14. Mar 7 02:39:57.274683 systemd[1]: Started sshd@14-10.0.0.118:22-10.0.0.1:43936.service - OpenSSH per-connection server daemon (10.0.0.1:43936). Mar 7 02:39:57.579119 sshd[6071]: Accepted publickey for core from 10.0.0.1 port 43936 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ Mar 7 02:39:57.582730 sshd-session[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:39:57.626691 systemd-logind[1539]: New session 15 of user core. Mar 7 02:39:57.635577 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 7 02:39:58.075385 sshd[6074]: Connection closed by 10.0.0.1 port 43936 Mar 7 02:39:58.076412 sshd-session[6071]: pam_unix(sshd:session): session closed for user core Mar 7 02:39:58.089811 systemd[1]: sshd@14-10.0.0.118:22-10.0.0.1:43936.service: Deactivated successfully. Mar 7 02:39:58.095494 systemd[1]: session-15.scope: Deactivated successfully. Mar 7 02:39:58.098411 systemd-logind[1539]: Session 15 logged out. Waiting for processes to exit. 
Mar 7 02:39:58.103445 systemd-logind[1539]: Removed session 15. Mar 7 02:40:03.095493 systemd[1]: Started sshd@15-10.0.0.118:22-10.0.0.1:48554.service - OpenSSH per-connection server daemon (10.0.0.1:48554). Mar 7 02:40:03.466191 sshd[6114]: Accepted publickey for core from 10.0.0.1 port 48554 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ Mar 7 02:40:03.472832 sshd-session[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:40:03.499216 systemd-logind[1539]: New session 16 of user core. Mar 7 02:40:03.513074 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 7 02:40:03.903133 sshd[6117]: Connection closed by 10.0.0.1 port 48554 Mar 7 02:40:03.909496 sshd-session[6114]: pam_unix(sshd:session): session closed for user core Mar 7 02:40:03.923591 systemd[1]: sshd@15-10.0.0.118:22-10.0.0.1:48554.service: Deactivated successfully. Mar 7 02:40:03.930673 systemd[1]: session-16.scope: Deactivated successfully. Mar 7 02:40:03.945436 systemd-logind[1539]: Session 16 logged out. Waiting for processes to exit. Mar 7 02:40:03.960785 systemd-logind[1539]: Removed session 16. Mar 7 02:40:08.940533 systemd[1]: Started sshd@16-10.0.0.118:22-10.0.0.1:48564.service - OpenSSH per-connection server daemon (10.0.0.1:48564). Mar 7 02:40:09.139509 sshd[6158]: Accepted publickey for core from 10.0.0.1 port 48564 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ Mar 7 02:40:09.143839 sshd-session[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:40:09.185503 systemd-logind[1539]: New session 17 of user core. Mar 7 02:40:09.190703 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 7 02:40:09.641086 sshd[6161]: Connection closed by 10.0.0.1 port 48564 Mar 7 02:40:09.646624 sshd-session[6158]: pam_unix(sshd:session): session closed for user core Mar 7 02:40:09.666232 systemd[1]: sshd@16-10.0.0.118:22-10.0.0.1:48564.service: Deactivated successfully. 
Mar 7 02:40:09.673611 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 02:40:09.679946 systemd-logind[1539]: Session 17 logged out. Waiting for processes to exit.
Mar 7 02:40:09.691886 systemd-logind[1539]: Removed session 17.
Mar 7 02:40:14.699555 systemd[1]: Started sshd@17-10.0.0.118:22-10.0.0.1:44506.service - OpenSSH per-connection server daemon (10.0.0.1:44506).
Mar 7 02:40:14.828782 sshd[6175]: Accepted publickey for core from 10.0.0.1 port 44506 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:40:14.838382 sshd-session[6175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:40:14.874171 systemd-logind[1539]: New session 18 of user core.
Mar 7 02:40:14.886234 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 02:40:15.207706 sshd[6178]: Connection closed by 10.0.0.1 port 44506
Mar 7 02:40:15.208178 sshd-session[6175]: pam_unix(sshd:session): session closed for user core
Mar 7 02:40:15.214546 systemd[1]: sshd@17-10.0.0.118:22-10.0.0.1:44506.service: Deactivated successfully.
Mar 7 02:40:15.218178 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 02:40:15.223820 systemd-logind[1539]: Session 18 logged out. Waiting for processes to exit.
Mar 7 02:40:15.233697 systemd-logind[1539]: Removed session 18.
Mar 7 02:40:20.245639 systemd[1]: Started sshd@18-10.0.0.118:22-10.0.0.1:44518.service - OpenSSH per-connection server daemon (10.0.0.1:44518).
Mar 7 02:40:20.395494 sshd[6227]: Accepted publickey for core from 10.0.0.1 port 44518 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:40:20.398258 sshd-session[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:40:20.410130 systemd-logind[1539]: New session 19 of user core.
Mar 7 02:40:20.420638 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 02:40:20.738114 sshd[6234]: Connection closed by 10.0.0.1 port 44518
Mar 7 02:40:20.741033 sshd-session[6227]: pam_unix(sshd:session): session closed for user core
Mar 7 02:40:20.756610 systemd[1]: sshd@18-10.0.0.118:22-10.0.0.1:44518.service: Deactivated successfully.
Mar 7 02:40:20.756793 systemd-logind[1539]: Session 19 logged out. Waiting for processes to exit.
Mar 7 02:40:20.766884 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 02:40:20.785170 systemd-logind[1539]: Removed session 19.
Mar 7 02:40:25.614140 kubelet[2804]: E0307 02:40:25.612477 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:40:25.787499 systemd[1]: Started sshd@19-10.0.0.118:22-10.0.0.1:36156.service - OpenSSH per-connection server daemon (10.0.0.1:36156).
Mar 7 02:40:25.957046 sshd[6250]: Accepted publickey for core from 10.0.0.1 port 36156 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:40:25.959077 sshd-session[6250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:40:25.977454 systemd-logind[1539]: New session 20 of user core.
Mar 7 02:40:25.990577 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 02:40:26.392005 sshd[6253]: Connection closed by 10.0.0.1 port 36156
Mar 7 02:40:26.390401 sshd-session[6250]: pam_unix(sshd:session): session closed for user core
Mar 7 02:40:26.413049 systemd[1]: sshd@19-10.0.0.118:22-10.0.0.1:36156.service: Deactivated successfully.
Mar 7 02:40:26.416489 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 02:40:26.420870 systemd-logind[1539]: Session 20 logged out. Waiting for processes to exit.
Mar 7 02:40:26.426088 systemd-logind[1539]: Removed session 20.
Mar 7 02:40:31.407761 systemd[1]: Started sshd@20-10.0.0.118:22-10.0.0.1:36162.service - OpenSSH per-connection server daemon (10.0.0.1:36162).
Mar 7 02:40:31.536163 sshd[6345]: Accepted publickey for core from 10.0.0.1 port 36162 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:40:31.539960 sshd-session[6345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:40:31.591742 systemd-logind[1539]: New session 21 of user core.
Mar 7 02:40:31.600144 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 02:40:31.859224 sshd[6348]: Connection closed by 10.0.0.1 port 36162
Mar 7 02:40:31.860213 sshd-session[6345]: pam_unix(sshd:session): session closed for user core
Mar 7 02:40:31.868794 systemd[1]: sshd@20-10.0.0.118:22-10.0.0.1:36162.service: Deactivated successfully.
Mar 7 02:40:31.871669 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 02:40:31.886945 systemd-logind[1539]: Session 21 logged out. Waiting for processes to exit.
Mar 7 02:40:31.891407 systemd-logind[1539]: Removed session 21.
Mar 7 02:40:34.678660 containerd[1560]: time="2026-03-07T02:40:34.623929844Z" level=warning msg="container event discarded" container=fd26256077d7e2290a466fa157a9c9ed4ef0a1e522a48e404a8a09a44c6fb11c type=CONTAINER_CREATED_EVENT
Mar 7 02:40:34.678660 containerd[1560]: time="2026-03-07T02:40:34.678611993Z" level=warning msg="container event discarded" container=fd26256077d7e2290a466fa157a9c9ed4ef0a1e522a48e404a8a09a44c6fb11c type=CONTAINER_STARTED_EVENT
Mar 7 02:40:34.759860 containerd[1560]: time="2026-03-07T02:40:34.759447785Z" level=warning msg="container event discarded" container=3ec245df886760647656a35fa60dc093be19798abb93454f2f39b9834cb180d8 type=CONTAINER_CREATED_EVENT
Mar 7 02:40:34.759860 containerd[1560]: time="2026-03-07T02:40:34.759580884Z" level=warning msg="container event discarded" container=3ec245df886760647656a35fa60dc093be19798abb93454f2f39b9834cb180d8 type=CONTAINER_STARTED_EVENT
Mar 7 02:40:34.759860 containerd[1560]: time="2026-03-07T02:40:34.759596613Z" level=warning msg="container event discarded" container=a5cd09a232f05f6c3781b30badb2a3769608085f54ee5ca0e00996102e92ff3a type=CONTAINER_CREATED_EVENT
Mar 7 02:40:34.759860 containerd[1560]: time="2026-03-07T02:40:34.759606061Z" level=warning msg="container event discarded" container=a5cd09a232f05f6c3781b30badb2a3769608085f54ee5ca0e00996102e92ff3a type=CONTAINER_STARTED_EVENT
Mar 7 02:40:34.805771 containerd[1560]: time="2026-03-07T02:40:34.805508033Z" level=warning msg="container event discarded" container=f8e73e6e706688cb7a7e928c9a7fc36c382337fdbb56d67c43c48b19779e6d30 type=CONTAINER_CREATED_EVENT
Mar 7 02:40:34.854267 containerd[1560]: time="2026-03-07T02:40:34.853877255Z" level=warning msg="container event discarded" container=eee172c9098025bb9ca394b1974984e963c95a9e2a02807af8c56cbdc064e924 type=CONTAINER_CREATED_EVENT
Mar 7 02:40:34.854267 containerd[1560]: time="2026-03-07T02:40:34.854050278Z" level=warning msg="container event discarded" container=d9a191c3f9437fbb791acb8392d460187b627b83de2b58420e723c677be2c02b type=CONTAINER_CREATED_EVENT
Mar 7 02:40:35.076674 containerd[1560]: time="2026-03-07T02:40:35.073803783Z" level=warning msg="container event discarded" container=f8e73e6e706688cb7a7e928c9a7fc36c382337fdbb56d67c43c48b19779e6d30 type=CONTAINER_STARTED_EVENT
Mar 7 02:40:35.112261 containerd[1560]: time="2026-03-07T02:40:35.111457920Z" level=warning msg="container event discarded" container=d9a191c3f9437fbb791acb8392d460187b627b83de2b58420e723c677be2c02b type=CONTAINER_STARTED_EVENT
Mar 7 02:40:35.148026 containerd[1560]: time="2026-03-07T02:40:35.143026348Z" level=warning msg="container event discarded" container=eee172c9098025bb9ca394b1974984e963c95a9e2a02807af8c56cbdc064e924 type=CONTAINER_STARTED_EVENT
Mar 7 02:40:36.900698 systemd[1]: Started sshd@21-10.0.0.118:22-10.0.0.1:54908.service - OpenSSH per-connection server daemon (10.0.0.1:54908).
Mar 7 02:40:37.045114 sshd[6384]: Accepted publickey for core from 10.0.0.1 port 54908 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:40:37.047874 sshd-session[6384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:40:37.087111 systemd-logind[1539]: New session 22 of user core.
Mar 7 02:40:37.104545 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 02:40:37.605413 sshd[6392]: Connection closed by 10.0.0.1 port 54908
Mar 7 02:40:37.607449 sshd-session[6384]: pam_unix(sshd:session): session closed for user core
Mar 7 02:40:37.623166 systemd[1]: sshd@21-10.0.0.118:22-10.0.0.1:54908.service: Deactivated successfully.
Mar 7 02:40:37.628616 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 02:40:37.631572 systemd-logind[1539]: Session 22 logged out. Waiting for processes to exit.
Mar 7 02:40:37.636485 systemd-logind[1539]: Removed session 22.
Mar 7 02:40:40.616082 kubelet[2804]: E0307 02:40:40.613458 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:40:41.622841 kubelet[2804]: E0307 02:40:41.615891 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:40:42.612804 kubelet[2804]: E0307 02:40:42.612692 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:40:42.627701 systemd[1]: Started sshd@22-10.0.0.118:22-10.0.0.1:34254.service - OpenSSH per-connection server daemon (10.0.0.1:34254).
Mar 7 02:40:42.775090 sshd[6425]: Accepted publickey for core from 10.0.0.1 port 34254 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:40:42.784632 sshd-session[6425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:40:42.803378 systemd-logind[1539]: New session 23 of user core.
Mar 7 02:40:42.815503 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 02:40:43.155391 sshd[6428]: Connection closed by 10.0.0.1 port 34254
Mar 7 02:40:43.164384 sshd-session[6425]: pam_unix(sshd:session): session closed for user core
Mar 7 02:40:43.181583 systemd-logind[1539]: Session 23 logged out. Waiting for processes to exit.
Mar 7 02:40:43.185178 systemd[1]: sshd@22-10.0.0.118:22-10.0.0.1:34254.service: Deactivated successfully.
Mar 7 02:40:43.191289 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 02:40:43.195719 systemd-logind[1539]: Removed session 23.
Mar 7 02:40:47.620957 kubelet[2804]: E0307 02:40:47.617064 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:40:48.179599 systemd[1]: Started sshd@23-10.0.0.118:22-10.0.0.1:34258.service - OpenSSH per-connection server daemon (10.0.0.1:34258).
Mar 7 02:40:48.351870 sshd[6460]: Accepted publickey for core from 10.0.0.1 port 34258 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:40:48.356677 sshd-session[6460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:40:48.383626 systemd-logind[1539]: New session 24 of user core.
Mar 7 02:40:48.420251 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 7 02:40:48.695614 sshd[6463]: Connection closed by 10.0.0.1 port 34258
Mar 7 02:40:48.696144 sshd-session[6460]: pam_unix(sshd:session): session closed for user core
Mar 7 02:40:48.714727 systemd[1]: sshd@23-10.0.0.118:22-10.0.0.1:34258.service: Deactivated successfully.
Mar 7 02:40:48.722643 systemd[1]: session-24.scope: Deactivated successfully.
Mar 7 02:40:48.736423 systemd-logind[1539]: Session 24 logged out. Waiting for processes to exit.
Mar 7 02:40:48.747902 systemd-logind[1539]: Removed session 24.
Mar 7 02:40:50.996953 containerd[1560]: time="2026-03-07T02:40:50.994219958Z" level=warning msg="container event discarded" container=67715bfe4611937eee10fbb759c82cf277d7e6b9c42cb941a9e8b6a2d223dc68 type=CONTAINER_CREATED_EVENT
Mar 7 02:40:50.996953 containerd[1560]: time="2026-03-07T02:40:50.994423307Z" level=warning msg="container event discarded" container=67715bfe4611937eee10fbb759c82cf277d7e6b9c42cb941a9e8b6a2d223dc68 type=CONTAINER_STARTED_EVENT
Mar 7 02:40:51.078040 containerd[1560]: time="2026-03-07T02:40:51.077836595Z" level=warning msg="container event discarded" container=fa99962e7dd7e0bbd1613ddf1e65450742ad6d1bf31f06680122ad039f5743a5 type=CONTAINER_CREATED_EVENT
Mar 7 02:40:51.437942 containerd[1560]: time="2026-03-07T02:40:51.435023815Z" level=warning msg="container event discarded" container=fa99962e7dd7e0bbd1613ddf1e65450742ad6d1bf31f06680122ad039f5743a5 type=CONTAINER_STARTED_EVENT
Mar 7 02:40:51.556411 containerd[1560]: time="2026-03-07T02:40:51.556243442Z" level=warning msg="container event discarded" container=2c09ccc574fb2070d9d96ac1ee0149c5c159c4623ceb63ecbf1efd5b5738ef52 type=CONTAINER_CREATED_EVENT
Mar 7 02:40:51.556637 containerd[1560]: time="2026-03-07T02:40:51.556595739Z" level=warning msg="container event discarded" container=2c09ccc574fb2070d9d96ac1ee0149c5c159c4623ceb63ecbf1efd5b5738ef52 type=CONTAINER_STARTED_EVENT
Mar 7 02:40:53.748853 systemd[1]: Started sshd@24-10.0.0.118:22-10.0.0.1:36024.service - OpenSSH per-connection server daemon (10.0.0.1:36024).
Mar 7 02:40:54.000616 sshd[6500]: Accepted publickey for core from 10.0.0.1 port 36024 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:40:54.010810 sshd-session[6500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:40:54.069818 systemd-logind[1539]: New session 25 of user core.
Mar 7 02:40:54.089882 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 7 02:40:54.437042 sshd[6503]: Connection closed by 10.0.0.1 port 36024
Mar 7 02:40:54.437120 sshd-session[6500]: pam_unix(sshd:session): session closed for user core
Mar 7 02:40:54.445603 systemd[1]: sshd@24-10.0.0.118:22-10.0.0.1:36024.service: Deactivated successfully.
Mar 7 02:40:54.448114 systemd[1]: session-25.scope: Deactivated successfully.
Mar 7 02:40:54.460789 systemd-logind[1539]: Session 25 logged out. Waiting for processes to exit.
Mar 7 02:40:54.463577 systemd-logind[1539]: Removed session 25.
Mar 7 02:40:54.614800 kubelet[2804]: E0307 02:40:54.614656 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:40:59.469655 systemd[1]: Started sshd@25-10.0.0.118:22-10.0.0.1:36034.service - OpenSSH per-connection server daemon (10.0.0.1:36034).
Mar 7 02:40:59.616487 sshd[6517]: Accepted publickey for core from 10.0.0.1 port 36034 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:40:59.632628 sshd-session[6517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:40:59.682815 systemd-logind[1539]: New session 26 of user core.
Mar 7 02:40:59.701819 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 7 02:41:00.043032 sshd[6520]: Connection closed by 10.0.0.1 port 36034
Mar 7 02:41:00.043582 sshd-session[6517]: pam_unix(sshd:session): session closed for user core
Mar 7 02:41:00.075922 systemd[1]: sshd@25-10.0.0.118:22-10.0.0.1:36034.service: Deactivated successfully.
Mar 7 02:41:00.083458 systemd[1]: session-26.scope: Deactivated successfully.
Mar 7 02:41:00.086781 systemd-logind[1539]: Session 26 logged out. Waiting for processes to exit.
Mar 7 02:41:00.098942 systemd-logind[1539]: Removed session 26.
Mar 7 02:41:04.026913 containerd[1560]: time="2026-03-07T02:41:04.026592096Z" level=warning msg="container event discarded" container=7aabd3398edbf2b335b9787154da8f646243eceb3dcd018d599640adc2dc2cab type=CONTAINER_CREATED_EVENT
Mar 7 02:41:05.078153 systemd[1]: Started sshd@26-10.0.0.118:22-10.0.0.1:50284.service - OpenSSH per-connection server daemon (10.0.0.1:50284).
Mar 7 02:41:05.282469 sshd[6560]: Accepted publickey for core from 10.0.0.1 port 50284 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:41:05.288608 sshd-session[6560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:41:05.306902 systemd-logind[1539]: New session 27 of user core.
Mar 7 02:41:05.334916 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 7 02:41:05.873115 sshd[6563]: Connection closed by 10.0.0.1 port 50284
Mar 7 02:41:05.874182 sshd-session[6560]: pam_unix(sshd:session): session closed for user core
Mar 7 02:41:05.891932 systemd[1]: sshd@26-10.0.0.118:22-10.0.0.1:50284.service: Deactivated successfully.
Mar 7 02:41:05.896571 systemd[1]: session-27.scope: Deactivated successfully.
Mar 7 02:41:05.900059 systemd-logind[1539]: Session 27 logged out. Waiting for processes to exit.
Mar 7 02:41:05.902279 systemd-logind[1539]: Removed session 27.
Mar 7 02:41:06.614100 kubelet[2804]: E0307 02:41:06.611937 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:41:09.148475 containerd[1560]: time="2026-03-07T02:41:09.148286341Z" level=warning msg="container event discarded" container=7aabd3398edbf2b335b9787154da8f646243eceb3dcd018d599640adc2dc2cab type=CONTAINER_STARTED_EVENT
Mar 7 02:41:10.941616 systemd[1]: Started sshd@27-10.0.0.118:22-10.0.0.1:50296.service - OpenSSH per-connection server daemon (10.0.0.1:50296).
Mar 7 02:41:11.086158 sshd[6602]: Accepted publickey for core from 10.0.0.1 port 50296 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:41:11.088529 sshd-session[6602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:41:11.116578 systemd-logind[1539]: New session 28 of user core.
Mar 7 02:41:11.124701 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 7 02:41:11.452159 sshd[6605]: Connection closed by 10.0.0.1 port 50296
Mar 7 02:41:11.451556 sshd-session[6602]: pam_unix(sshd:session): session closed for user core
Mar 7 02:41:11.470209 systemd[1]: sshd@27-10.0.0.118:22-10.0.0.1:50296.service: Deactivated successfully.
Mar 7 02:41:11.479810 systemd[1]: session-28.scope: Deactivated successfully.
Mar 7 02:41:11.494230 systemd-logind[1539]: Session 28 logged out. Waiting for processes to exit.
Mar 7 02:41:11.508085 systemd-logind[1539]: Removed session 28.
Mar 7 02:41:14.939035 containerd[1560]: time="2026-03-07T02:41:14.938634641Z" level=warning msg="container event discarded" container=7aabd3398edbf2b335b9787154da8f646243eceb3dcd018d599640adc2dc2cab type=CONTAINER_STOPPED_EVENT
Mar 7 02:41:15.909159 containerd[1560]: time="2026-03-07T02:41:15.894470250Z" level=warning msg="container event discarded" container=c68ccca666324c281d4587c92477654fc544ddd3f18fafc8670d8a58e54e8081 type=CONTAINER_CREATED_EVENT
Mar 7 02:41:16.134425 containerd[1560]: time="2026-03-07T02:41:16.133919201Z" level=warning msg="container event discarded" container=c68ccca666324c281d4587c92477654fc544ddd3f18fafc8670d8a58e54e8081 type=CONTAINER_STARTED_EVENT
Mar 7 02:41:16.484166 systemd[1]: Started sshd@28-10.0.0.118:22-10.0.0.1:43204.service - OpenSSH per-connection server daemon (10.0.0.1:43204).
Mar 7 02:41:16.648417 sshd[6621]: Accepted publickey for core from 10.0.0.1 port 43204 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:41:16.655160 sshd-session[6621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:41:16.673540 systemd-logind[1539]: New session 29 of user core.
Mar 7 02:41:16.684756 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 7 02:41:16.986087 sshd[6624]: Connection closed by 10.0.0.1 port 43204
Mar 7 02:41:16.984100 sshd-session[6621]: pam_unix(sshd:session): session closed for user core
Mar 7 02:41:16.998779 systemd[1]: sshd@28-10.0.0.118:22-10.0.0.1:43204.service: Deactivated successfully.
Mar 7 02:41:17.003886 systemd[1]: session-29.scope: Deactivated successfully.
Mar 7 02:41:17.007782 systemd-logind[1539]: Session 29 logged out. Waiting for processes to exit.
Mar 7 02:41:17.014040 systemd-logind[1539]: Removed session 29.
Mar 7 02:41:22.043298 systemd[1]: Started sshd@29-10.0.0.118:22-10.0.0.1:43218.service - OpenSSH per-connection server daemon (10.0.0.1:43218).
Mar 7 02:41:22.151229 sshd[6663]: Accepted publickey for core from 10.0.0.1 port 43218 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:41:22.154706 sshd-session[6663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:41:22.172479 systemd-logind[1539]: New session 30 of user core.
Mar 7 02:41:22.193825 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 7 02:41:22.385736 sshd[6666]: Connection closed by 10.0.0.1 port 43218
Mar 7 02:41:22.386049 sshd-session[6663]: pam_unix(sshd:session): session closed for user core
Mar 7 02:41:22.394818 systemd[1]: sshd@29-10.0.0.118:22-10.0.0.1:43218.service: Deactivated successfully.
Mar 7 02:41:22.401797 systemd[1]: session-30.scope: Deactivated successfully.
Mar 7 02:41:22.408302 systemd-logind[1539]: Session 30 logged out. Waiting for processes to exit.
Mar 7 02:41:22.418399 systemd-logind[1539]: Removed session 30.
Mar 7 02:41:27.421048 systemd[1]: Started sshd@30-10.0.0.118:22-10.0.0.1:56714.service - OpenSSH per-connection server daemon (10.0.0.1:56714).
Mar 7 02:41:27.586846 sshd[6681]: Accepted publickey for core from 10.0.0.1 port 56714 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:41:27.589918 sshd-session[6681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:41:27.609708 systemd-logind[1539]: New session 31 of user core.
Mar 7 02:41:27.624690 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 7 02:41:27.881809 sshd[6684]: Connection closed by 10.0.0.1 port 56714
Mar 7 02:41:27.882566 sshd-session[6681]: pam_unix(sshd:session): session closed for user core
Mar 7 02:41:27.890599 systemd[1]: sshd@30-10.0.0.118:22-10.0.0.1:56714.service: Deactivated successfully.
Mar 7 02:41:27.896506 systemd[1]: session-31.scope: Deactivated successfully.
Mar 7 02:41:27.907420 systemd-logind[1539]: Session 31 logged out. Waiting for processes to exit.
Mar 7 02:41:27.913515 systemd-logind[1539]: Removed session 31.
Mar 7 02:41:29.574374 containerd[1560]: time="2026-03-07T02:41:29.574162592Z" level=warning msg="container event discarded" container=96c9bb200907a0b422ca5bd1870ac6e3cea2e019155c8c58da3180b44c99eea5 type=CONTAINER_CREATED_EVENT
Mar 7 02:41:29.574374 containerd[1560]: time="2026-03-07T02:41:29.574293375Z" level=warning msg="container event discarded" container=96c9bb200907a0b422ca5bd1870ac6e3cea2e019155c8c58da3180b44c99eea5 type=CONTAINER_STARTED_EVENT
Mar 7 02:41:29.929548 containerd[1560]: time="2026-03-07T02:41:29.929207667Z" level=warning msg="container event discarded" container=032829de2a16d0ba9181a631a5fd5178bdfa463a4ac98172267884c6a13b442f type=CONTAINER_CREATED_EVENT
Mar 7 02:41:29.929548 containerd[1560]: time="2026-03-07T02:41:29.929301001Z" level=warning msg="container event discarded" container=032829de2a16d0ba9181a631a5fd5178bdfa463a4ac98172267884c6a13b442f type=CONTAINER_STARTED_EVENT
Mar 7 02:41:32.912625 systemd[1]: Started sshd@31-10.0.0.118:22-10.0.0.1:46190.service - OpenSSH per-connection server daemon (10.0.0.1:46190).
Mar 7 02:41:33.152267 sshd[6775]: Accepted publickey for core from 10.0.0.1 port 46190 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:41:33.164802 sshd-session[6775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:41:33.187791 systemd-logind[1539]: New session 32 of user core.
Mar 7 02:41:33.207683 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 7 02:41:33.686232 sshd[6778]: Connection closed by 10.0.0.1 port 46190
Mar 7 02:41:33.687593 sshd-session[6775]: pam_unix(sshd:session): session closed for user core
Mar 7 02:41:33.695782 systemd[1]: sshd@31-10.0.0.118:22-10.0.0.1:46190.service: Deactivated successfully.
Mar 7 02:41:33.701750 systemd[1]: session-32.scope: Deactivated successfully.
Mar 7 02:41:33.713494 systemd-logind[1539]: Session 32 logged out. Waiting for processes to exit.
Mar 7 02:41:33.720146 systemd-logind[1539]: Removed session 32.
Mar 7 02:41:37.251455 containerd[1560]: time="2026-03-07T02:41:37.248874455Z" level=warning msg="container event discarded" container=1957b2791784cd267262e7b432a790200cf22ed89c4c2b80c090a1e25c3fbb4e type=CONTAINER_CREATED_EVENT
Mar 7 02:41:37.664199 containerd[1560]: time="2026-03-07T02:41:37.662521422Z" level=warning msg="container event discarded" container=1957b2791784cd267262e7b432a790200cf22ed89c4c2b80c090a1e25c3fbb4e type=CONTAINER_STARTED_EVENT
Mar 7 02:41:38.586432 containerd[1560]: time="2026-03-07T02:41:38.586179639Z" level=warning msg="container event discarded" container=ecbbe3525f6f506693a09e72057abd97d6459a5722c54b265cfabfcf060a171f type=CONTAINER_CREATED_EVENT
Mar 7 02:41:38.614081 kubelet[2804]: E0307 02:41:38.613209 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:41:38.714696 systemd[1]: Started sshd@32-10.0.0.118:22-10.0.0.1:46202.service - OpenSSH per-connection server daemon (10.0.0.1:46202).
Mar 7 02:41:38.841225 sshd[6836]: Accepted publickey for core from 10.0.0.1 port 46202 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:41:38.850257 sshd-session[6836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:41:38.874503 systemd-logind[1539]: New session 33 of user core.
Mar 7 02:41:38.895837 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 7 02:41:39.106758 containerd[1560]: time="2026-03-07T02:41:39.106187510Z" level=warning msg="container event discarded" container=ecbbe3525f6f506693a09e72057abd97d6459a5722c54b265cfabfcf060a171f type=CONTAINER_STARTED_EVENT
Mar 7 02:41:39.237359 sshd[6839]: Connection closed by 10.0.0.1 port 46202
Mar 7 02:41:39.239885 sshd-session[6836]: pam_unix(sshd:session): session closed for user core
Mar 7 02:41:39.249270 systemd[1]: sshd@32-10.0.0.118:22-10.0.0.1:46202.service: Deactivated successfully.
Mar 7 02:41:39.258154 systemd[1]: session-33.scope: Deactivated successfully.
Mar 7 02:41:39.266805 systemd-logind[1539]: Session 33 logged out. Waiting for processes to exit.
Mar 7 02:41:39.274673 systemd-logind[1539]: Removed session 33.
Mar 7 02:41:39.542508 containerd[1560]: time="2026-03-07T02:41:39.541403048Z" level=warning msg="container event discarded" container=ecbbe3525f6f506693a09e72057abd97d6459a5722c54b265cfabfcf060a171f type=CONTAINER_STOPPED_EVENT
Mar 7 02:41:44.273682 systemd[1]: Started sshd@33-10.0.0.118:22-10.0.0.1:51146.service - OpenSSH per-connection server daemon (10.0.0.1:51146).
Mar 7 02:41:44.393092 sshd[6853]: Accepted publickey for core from 10.0.0.1 port 51146 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:41:44.396116 sshd-session[6853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:41:44.421387 systemd-logind[1539]: New session 34 of user core.
Mar 7 02:41:44.462780 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 7 02:41:44.807189 sshd[6856]: Connection closed by 10.0.0.1 port 51146
Mar 7 02:41:44.811429 sshd-session[6853]: pam_unix(sshd:session): session closed for user core
Mar 7 02:41:44.832916 systemd[1]: sshd@33-10.0.0.118:22-10.0.0.1:51146.service: Deactivated successfully.
Mar 7 02:41:44.842796 systemd[1]: session-34.scope: Deactivated successfully.
Mar 7 02:41:44.852825 systemd-logind[1539]: Session 34 logged out. Waiting for processes to exit.
Mar 7 02:41:44.859807 systemd[1]: Started sshd@34-10.0.0.118:22-10.0.0.1:51150.service - OpenSSH per-connection server daemon (10.0.0.1:51150).
Mar 7 02:41:44.865979 systemd-logind[1539]: Removed session 34.
Mar 7 02:41:45.050510 sshd[6870]: Accepted publickey for core from 10.0.0.1 port 51150 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:41:45.054113 sshd-session[6870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:41:45.079914 systemd-logind[1539]: New session 35 of user core.
Mar 7 02:41:45.099296 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 7 02:41:45.642108 sshd[6873]: Connection closed by 10.0.0.1 port 51150
Mar 7 02:41:45.643477 sshd-session[6870]: pam_unix(sshd:session): session closed for user core
Mar 7 02:41:45.669954 systemd[1]: sshd@34-10.0.0.118:22-10.0.0.1:51150.service: Deactivated successfully.
Mar 7 02:41:45.679561 systemd[1]: session-35.scope: Deactivated successfully.
Mar 7 02:41:45.681875 systemd-logind[1539]: Session 35 logged out. Waiting for processes to exit.
Mar 7 02:41:45.690640 systemd[1]: Started sshd@35-10.0.0.118:22-10.0.0.1:51160.service - OpenSSH per-connection server daemon (10.0.0.1:51160).
Mar 7 02:41:45.704152 systemd-logind[1539]: Removed session 35.
Mar 7 02:41:45.851571 sshd[6887]: Accepted publickey for core from 10.0.0.1 port 51160 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:41:45.865724 sshd-session[6887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:41:45.880773 systemd-logind[1539]: New session 36 of user core.
Mar 7 02:41:45.892815 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 7 02:41:46.143155 sshd[6890]: Connection closed by 10.0.0.1 port 51160
Mar 7 02:41:46.140652 sshd-session[6887]: pam_unix(sshd:session): session closed for user core
Mar 7 02:41:46.150855 systemd[1]: sshd@35-10.0.0.118:22-10.0.0.1:51160.service: Deactivated successfully.
Mar 7 02:41:46.162212 systemd[1]: session-36.scope: Deactivated successfully.
Mar 7 02:41:46.165654 systemd-logind[1539]: Session 36 logged out. Waiting for processes to exit.
Mar 7 02:41:46.182708 systemd-logind[1539]: Removed session 36.
Mar 7 02:41:48.612199 kubelet[2804]: E0307 02:41:48.611631 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:41:51.173420 systemd[1]: Started sshd@36-10.0.0.118:22-10.0.0.1:51164.service - OpenSSH per-connection server daemon (10.0.0.1:51164).
Mar 7 02:41:51.337034 sshd[6924]: Accepted publickey for core from 10.0.0.1 port 51164 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:41:51.340616 sshd-session[6924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:41:51.367721 systemd-logind[1539]: New session 37 of user core.
Mar 7 02:41:51.428648 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 7 02:41:51.618068 kubelet[2804]: E0307 02:41:51.617911 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:41:51.855368 sshd[6927]: Connection closed by 10.0.0.1 port 51164
Mar 7 02:41:51.859535 sshd-session[6924]: pam_unix(sshd:session): session closed for user core
Mar 7 02:41:51.872606 systemd[1]: sshd@36-10.0.0.118:22-10.0.0.1:51164.service: Deactivated successfully.
Mar 7 02:41:51.885293 systemd[1]: session-37.scope: Deactivated successfully.
Mar 7 02:41:51.903168 systemd-logind[1539]: Session 37 logged out. Waiting for processes to exit.
Mar 7 02:41:51.913786 systemd-logind[1539]: Removed session 37.
Mar 7 02:41:56.898538 systemd[1]: Started sshd@37-10.0.0.118:22-10.0.0.1:43804.service - OpenSSH per-connection server daemon (10.0.0.1:43804).
Mar 7 02:41:57.124987 sshd[6943]: Accepted publickey for core from 10.0.0.1 port 43804 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:41:57.127741 sshd-session[6943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:41:57.163785 systemd-logind[1539]: New session 38 of user core.
Mar 7 02:41:57.193821 systemd[1]: Started session-38.scope - Session 38 of User core.
Mar 7 02:41:57.582179 sshd[6946]: Connection closed by 10.0.0.1 port 43804
Mar 7 02:41:57.580140 sshd-session[6943]: pam_unix(sshd:session): session closed for user core
Mar 7 02:41:57.600901 systemd[1]: sshd@37-10.0.0.118:22-10.0.0.1:43804.service: Deactivated successfully.
Mar 7 02:41:57.606622 systemd[1]: session-38.scope: Deactivated successfully.
Mar 7 02:41:57.632141 systemd-logind[1539]: Session 38 logged out. Waiting for processes to exit.
Mar 7 02:41:57.634176 systemd-logind[1539]: Removed session 38.
Mar 7 02:41:58.614519 kubelet[2804]: E0307 02:41:58.613722 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:42:00.627610 kubelet[2804]: E0307 02:42:00.626733 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:42:02.638995 systemd[1]: Started sshd@38-10.0.0.118:22-10.0.0.1:33910.service - OpenSSH per-connection server daemon (10.0.0.1:33910).
Mar 7 02:42:02.831891 sshd[6984]: Accepted publickey for core from 10.0.0.1 port 33910 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:42:02.829276 sshd-session[6984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:42:02.890433 systemd-logind[1539]: New session 39 of user core.
Mar 7 02:42:02.901505 systemd[1]: Started session-39.scope - Session 39 of User core.
Mar 7 02:42:03.370681 sshd[6987]: Connection closed by 10.0.0.1 port 33910
Mar 7 02:42:03.372390 sshd-session[6984]: pam_unix(sshd:session): session closed for user core
Mar 7 02:42:03.387182 systemd[1]: sshd@38-10.0.0.118:22-10.0.0.1:33910.service: Deactivated successfully.
Mar 7 02:42:03.389774 systemd-logind[1539]: Session 39 logged out. Waiting for processes to exit.
Mar 7 02:42:03.402574 systemd[1]: session-39.scope: Deactivated successfully.
Mar 7 02:42:03.408770 systemd-logind[1539]: Removed session 39.
Mar 7 02:42:08.455300 systemd[1]: Started sshd@39-10.0.0.118:22-10.0.0.1:33914.service - OpenSSH per-connection server daemon (10.0.0.1:33914).
Mar 7 02:42:08.613488 kubelet[2804]: E0307 02:42:08.613292 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:42:08.678607 sshd[7027]: Accepted publickey for core from 10.0.0.1 port 33914 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:42:08.679970 sshd-session[7027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:42:08.702949 systemd-logind[1539]: New session 40 of user core.
Mar 7 02:42:08.720662 systemd[1]: Started session-40.scope - Session 40 of User core.
Mar 7 02:42:09.185655 sshd[7030]: Connection closed by 10.0.0.1 port 33914
Mar 7 02:42:09.189696 sshd-session[7027]: pam_unix(sshd:session): session closed for user core
Mar 7 02:42:09.202943 systemd[1]: sshd@39-10.0.0.118:22-10.0.0.1:33914.service: Deactivated successfully.
Mar 7 02:42:09.205824 systemd[1]: session-40.scope: Deactivated successfully.
Mar 7 02:42:09.229666 systemd-logind[1539]: Session 40 logged out. Waiting for processes to exit.
Mar 7 02:42:09.261531 systemd-logind[1539]: Removed session 40.
Mar 7 02:42:09.529754 containerd[1560]: time="2026-03-07T02:42:09.529448206Z" level=warning msg="container event discarded" container=86154f3da3da82db80ded1423f8c23a5d8fc077aaf75a87d75abc9863e08afe9 type=CONTAINER_CREATED_EVENT
Mar 7 02:42:09.912182 containerd[1560]: time="2026-03-07T02:42:09.910234946Z" level=warning msg="container event discarded" container=86154f3da3da82db80ded1423f8c23a5d8fc077aaf75a87d75abc9863e08afe9 type=CONTAINER_STARTED_EVENT
Mar 7 02:42:10.792138 containerd[1560]: time="2026-03-07T02:42:10.784139485Z" level=warning msg="container event discarded" container=86154f3da3da82db80ded1423f8c23a5d8fc077aaf75a87d75abc9863e08afe9 type=CONTAINER_STOPPED_EVENT
Mar 7 02:42:14.232163 systemd[1]: Started sshd@40-10.0.0.118:22-10.0.0.1:56066.service - OpenSSH per-connection server daemon (10.0.0.1:56066).
Mar 7 02:42:14.463627 sshd[7044]: Accepted publickey for core from 10.0.0.1 port 56066 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:42:14.471434 sshd-session[7044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:42:14.493647 systemd-logind[1539]: New session 41 of user core.
Mar 7 02:42:14.511523 systemd[1]: Started session-41.scope - Session 41 of User core.
Mar 7 02:42:15.170211 sshd[7047]: Connection closed by 10.0.0.1 port 56066
Mar 7 02:42:15.175224 sshd-session[7044]: pam_unix(sshd:session): session closed for user core
Mar 7 02:42:15.204388 systemd[1]: sshd@40-10.0.0.118:22-10.0.0.1:56066.service: Deactivated successfully.
Mar 7 02:42:15.232768 systemd[1]: session-41.scope: Deactivated successfully.
Mar 7 02:42:15.238734 systemd-logind[1539]: Session 41 logged out. Waiting for processes to exit.
Mar 7 02:42:15.256866 systemd[1]: Started sshd@41-10.0.0.118:22-10.0.0.1:56070.service - OpenSSH per-connection server daemon (10.0.0.1:56070).
Mar 7 02:42:15.263710 systemd-logind[1539]: Removed session 41.
Mar 7 02:42:15.455703 sshd[7062]: Accepted publickey for core from 10.0.0.1 port 56070 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:42:15.464884 sshd-session[7062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:42:15.497552 systemd-logind[1539]: New session 42 of user core.
Mar 7 02:42:15.525488 systemd[1]: Started session-42.scope - Session 42 of User core.
Mar 7 02:42:16.937277 sshd[7080]: Connection closed by 10.0.0.1 port 56070
Mar 7 02:42:16.946743 sshd-session[7062]: pam_unix(sshd:session): session closed for user core
Mar 7 02:42:16.982618 systemd[1]: Started sshd@42-10.0.0.118:22-10.0.0.1:56072.service - OpenSSH per-connection server daemon (10.0.0.1:56072).
Mar 7 02:42:16.983612 systemd[1]: sshd@41-10.0.0.118:22-10.0.0.1:56070.service: Deactivated successfully.
Mar 7 02:42:17.009457 systemd[1]: session-42.scope: Deactivated successfully.
Mar 7 02:42:17.023603 systemd-logind[1539]: Session 42 logged out. Waiting for processes to exit.
Mar 7 02:42:17.033725 systemd-logind[1539]: Removed session 42.
Mar 7 02:42:17.329827 sshd[7092]: Accepted publickey for core from 10.0.0.1 port 56072 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:42:17.340817 sshd-session[7092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:42:17.379183 systemd-logind[1539]: New session 43 of user core.
Mar 7 02:42:17.410207 systemd[1]: Started session-43.scope - Session 43 of User core.
Mar 7 02:42:20.064931 sshd[7098]: Connection closed by 10.0.0.1 port 56072
Mar 7 02:42:20.065416 sshd-session[7092]: pam_unix(sshd:session): session closed for user core
Mar 7 02:42:20.096978 systemd[1]: sshd@42-10.0.0.118:22-10.0.0.1:56072.service: Deactivated successfully.
Mar 7 02:42:20.106238 systemd[1]: session-43.scope: Deactivated successfully.
Mar 7 02:42:20.110418 systemd-logind[1539]: Session 43 logged out. Waiting for processes to exit.
Mar 7 02:42:20.137545 systemd[1]: Started sshd@43-10.0.0.118:22-10.0.0.1:56080.service - OpenSSH per-connection server daemon (10.0.0.1:56080).
Mar 7 02:42:20.143100 systemd-logind[1539]: Removed session 43.
Mar 7 02:42:20.356211 sshd[7124]: Accepted publickey for core from 10.0.0.1 port 56080 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:42:20.364713 sshd-session[7124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:42:20.400451 systemd-logind[1539]: New session 44 of user core.
Mar 7 02:42:20.425928 systemd[1]: Started session-44.scope - Session 44 of User core.
Mar 7 02:42:22.045001 sshd[7145]: Connection closed by 10.0.0.1 port 56080
Mar 7 02:42:22.048998 sshd-session[7124]: pam_unix(sshd:session): session closed for user core
Mar 7 02:42:22.127794 systemd[1]: sshd@43-10.0.0.118:22-10.0.0.1:56080.service: Deactivated successfully.
Mar 7 02:42:22.132452 systemd[1]: session-44.scope: Deactivated successfully.
Mar 7 02:42:22.148456 systemd-logind[1539]: Session 44 logged out. Waiting for processes to exit.
Mar 7 02:42:22.175236 systemd[1]: Started sshd@44-10.0.0.118:22-10.0.0.1:36828.service - OpenSSH per-connection server daemon (10.0.0.1:36828).
Mar 7 02:42:22.183272 systemd-logind[1539]: Removed session 44.
Mar 7 02:42:22.381694 sshd[7180]: Accepted publickey for core from 10.0.0.1 port 36828 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:42:22.386143 sshd-session[7180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:42:22.431441 systemd-logind[1539]: New session 45 of user core.
Mar 7 02:42:22.443676 systemd[1]: Started session-45.scope - Session 45 of User core.
Mar 7 02:42:22.915651 sshd[7183]: Connection closed by 10.0.0.1 port 36828
Mar 7 02:42:22.916583 sshd-session[7180]: pam_unix(sshd:session): session closed for user core
Mar 7 02:42:22.928384 systemd[1]: sshd@44-10.0.0.118:22-10.0.0.1:36828.service: Deactivated successfully.
Mar 7 02:42:22.934657 systemd[1]: session-45.scope: Deactivated successfully.
Mar 7 02:42:22.942467 systemd-logind[1539]: Session 45 logged out. Waiting for processes to exit.
Mar 7 02:42:22.956529 systemd-logind[1539]: Removed session 45.
Mar 7 02:42:25.185895 containerd[1560]: time="2026-03-07T02:42:25.185620871Z" level=warning msg="container event discarded" container=f4cac8a2e0b6b2ce1f9975f7cee866f49f0d82e1d0c9acba6819bc7eed956db0 type=CONTAINER_CREATED_EVENT
Mar 7 02:42:25.551250 containerd[1560]: time="2026-03-07T02:42:25.550770193Z" level=warning msg="container event discarded" container=f4cac8a2e0b6b2ce1f9975f7cee866f49f0d82e1d0c9acba6819bc7eed956db0 type=CONTAINER_STARTED_EVENT
Mar 7 02:42:27.655509 kubelet[2804]: E0307 02:42:27.647642 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:42:27.910878 containerd[1560]: time="2026-03-07T02:42:27.910680719Z" level=warning msg="container event discarded" container=f4cac8a2e0b6b2ce1f9975f7cee866f49f0d82e1d0c9acba6819bc7eed956db0 type=CONTAINER_STOPPED_EVENT
Mar 7 02:42:27.974612 systemd[1]: Started sshd@45-10.0.0.118:22-10.0.0.1:36836.service - OpenSSH per-connection server daemon (10.0.0.1:36836).
Mar 7 02:42:28.153189 sshd[7197]: Accepted publickey for core from 10.0.0.1 port 36836 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:42:28.162290 sshd-session[7197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:42:28.192502 systemd-logind[1539]: New session 46 of user core.
Mar 7 02:42:28.219812 systemd[1]: Started session-46.scope - Session 46 of User core.
Mar 7 02:42:28.743381 sshd[7200]: Connection closed by 10.0.0.1 port 36836
Mar 7 02:42:28.742566 sshd-session[7197]: pam_unix(sshd:session): session closed for user core
Mar 7 02:42:28.769549 systemd[1]: sshd@45-10.0.0.118:22-10.0.0.1:36836.service: Deactivated successfully.
Mar 7 02:42:28.794577 systemd[1]: session-46.scope: Deactivated successfully.
Mar 7 02:42:28.805546 systemd-logind[1539]: Session 46 logged out. Waiting for processes to exit.
Mar 7 02:42:28.838595 systemd-logind[1539]: Removed session 46.
Mar 7 02:42:28.890231 containerd[1560]: time="2026-03-07T02:42:28.890174180Z" level=warning msg="container event discarded" container=80a2104ebb7ad068b38688d62c5309ccee19835342590e0c644fb2e5dee2ffb9 type=CONTAINER_CREATED_EVENT
Mar 7 02:42:29.321101 containerd[1560]: time="2026-03-07T02:42:29.320775712Z" level=warning msg="container event discarded" container=80a2104ebb7ad068b38688d62c5309ccee19835342590e0c644fb2e5dee2ffb9 type=CONTAINER_STARTED_EVENT
Mar 7 02:42:32.751440 containerd[1560]: time="2026-03-07T02:42:32.750960837Z" level=warning msg="container event discarded" container=308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962 type=CONTAINER_CREATED_EVENT
Mar 7 02:42:32.751440 containerd[1560]: time="2026-03-07T02:42:32.751085680Z" level=warning msg="container event discarded" container=308ef2f7b7c2da67fcbc9feb6c5bb28154e1c5c01f29016de514aa5f0971e962 type=CONTAINER_STARTED_EVENT
Mar 7 02:42:33.774531 systemd[1]: Started sshd@46-10.0.0.118:22-10.0.0.1:38096.service - OpenSSH per-connection server daemon (10.0.0.1:38096).
Mar 7 02:42:34.021961 containerd[1560]: time="2026-03-07T02:42:34.021896507Z" level=warning msg="container event discarded" container=7916705e07d83c943e5209b8f466520ee4533b275204eba94b2999b13186dbfd type=CONTAINER_CREATED_EVENT
Mar 7 02:42:34.043452 sshd[7287]: Accepted publickey for core from 10.0.0.1 port 38096 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:42:34.053537 sshd-session[7287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:42:34.086981 systemd-logind[1539]: New session 47 of user core.
Mar 7 02:42:34.108705 systemd[1]: Started session-47.scope - Session 47 of User core.
Mar 7 02:42:34.323182 containerd[1560]: time="2026-03-07T02:42:34.319118499Z" level=warning msg="container event discarded" container=7916705e07d83c943e5209b8f466520ee4533b275204eba94b2999b13186dbfd type=CONTAINER_STARTED_EVENT
Mar 7 02:42:34.633134 sshd[7290]: Connection closed by 10.0.0.1 port 38096
Mar 7 02:42:34.642474 sshd-session[7287]: pam_unix(sshd:session): session closed for user core
Mar 7 02:42:34.659609 systemd-logind[1539]: Session 47 logged out. Waiting for processes to exit.
Mar 7 02:42:34.665875 systemd[1]: sshd@46-10.0.0.118:22-10.0.0.1:38096.service: Deactivated successfully.
Mar 7 02:42:34.676797 systemd[1]: session-47.scope: Deactivated successfully.
Mar 7 02:42:34.689669 systemd-logind[1539]: Removed session 47.
Mar 7 02:42:37.343070 containerd[1560]: time="2026-03-07T02:42:37.342949136Z" level=warning msg="container event discarded" container=1e8c4b3d77cf1ef88f73f8e9741116c902d2b75e69abc04a278ec36a8e16d928 type=CONTAINER_CREATED_EVENT
Mar 7 02:42:37.540995 containerd[1560]: time="2026-03-07T02:42:37.540906613Z" level=warning msg="container event discarded" container=1e8c4b3d77cf1ef88f73f8e9741116c902d2b75e69abc04a278ec36a8e16d928 type=CONTAINER_STARTED_EVENT
Mar 7 02:42:39.685850 systemd[1]: Started sshd@47-10.0.0.118:22-10.0.0.1:38112.service - OpenSSH per-connection server daemon (10.0.0.1:38112).
Mar 7 02:42:39.872991 sshd[7329]: Accepted publickey for core from 10.0.0.1 port 38112 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:42:39.877939 sshd-session[7329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:42:39.904395 systemd-logind[1539]: New session 48 of user core.
Mar 7 02:42:39.925925 systemd[1]: Started session-48.scope - Session 48 of User core.
Mar 7 02:42:40.481528 sshd[7332]: Connection closed by 10.0.0.1 port 38112
Mar 7 02:42:40.484125 sshd-session[7329]: pam_unix(sshd:session): session closed for user core
Mar 7 02:42:40.511009 systemd[1]: sshd@47-10.0.0.118:22-10.0.0.1:38112.service: Deactivated successfully.
Mar 7 02:42:40.520485 systemd[1]: session-48.scope: Deactivated successfully.
Mar 7 02:42:40.543491 systemd-logind[1539]: Session 48 logged out. Waiting for processes to exit.
Mar 7 02:42:40.565000 systemd-logind[1539]: Removed session 48.
Mar 7 02:42:42.074440 containerd[1560]: time="2026-03-07T02:42:42.074195999Z" level=warning msg="container event discarded" container=0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06 type=CONTAINER_CREATED_EVENT
Mar 7 02:42:42.074440 containerd[1560]: time="2026-03-07T02:42:42.074403586Z" level=warning msg="container event discarded" container=0947e803909cb562a8df5815555d6ce10d643f0d236dd8367c6bb0b86a2ffa06 type=CONTAINER_STARTED_EVENT
Mar 7 02:42:43.404467 containerd[1560]: time="2026-03-07T02:42:43.404294118Z" level=warning msg="container event discarded" container=f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0 type=CONTAINER_CREATED_EVENT
Mar 7 02:42:43.405001 containerd[1560]: time="2026-03-07T02:42:43.404972004Z" level=warning msg="container event discarded" container=f262cbf2cb997f9af4680b2e6ab54cca57e37434115953ecad44682d94aab5b0 type=CONTAINER_STARTED_EVENT
Mar 7 02:42:44.543597 containerd[1560]: time="2026-03-07T02:42:44.543196611Z" level=warning msg="container event discarded" container=c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe type=CONTAINER_CREATED_EVENT
Mar 7 02:42:44.544245 containerd[1560]: time="2026-03-07T02:42:44.543381427Z" level=warning msg="container event discarded" container=c9c87297fba7b7fc5acf4374546c55f52ccecaf1625a3bcd732f3aec65bc90fe type=CONTAINER_STARTED_EVENT
Mar 7 02:42:44.777860 containerd[1560]: time="2026-03-07T02:42:44.777420107Z" level=warning msg="container event discarded" container=b9eac1e36193786b0d289ef673643058047222e073e273704dabdaa0955c5c81 type=CONTAINER_CREATED_EVENT
Mar 7 02:42:44.812919 containerd[1560]: time="2026-03-07T02:42:44.812657572Z" level=warning msg="container event discarded" container=bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c type=CONTAINER_CREATED_EVENT
Mar 7 02:42:44.812919 containerd[1560]: time="2026-03-07T02:42:44.812745847Z" level=warning msg="container event discarded" container=bb3dada6fc3bfc1c1f3fd677131df17847214e5d2295da5aab1756961c137d9c type=CONTAINER_STARTED_EVENT
Mar 7 02:42:45.390396 containerd[1560]: time="2026-03-07T02:42:45.390171433Z" level=warning msg="container event discarded" container=a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37 type=CONTAINER_CREATED_EVENT
Mar 7 02:42:45.390396 containerd[1560]: time="2026-03-07T02:42:45.390284244Z" level=warning msg="container event discarded" container=a295e1132625d967a7e5d6f0311759bdfbd3428eb6f7aad6903b86ab64e6bc37 type=CONTAINER_STARTED_EVENT
Mar 7 02:42:45.405265 containerd[1560]: time="2026-03-07T02:42:45.405119451Z" level=warning msg="container event discarded" container=da6a73fb0951ef5b525e558370341bfffd009f57e99f762e139cb541ffaecb9d type=CONTAINER_CREATED_EVENT
Mar 7 02:42:45.438826 containerd[1560]: time="2026-03-07T02:42:45.438750931Z" level=warning msg="container event discarded" container=b9eac1e36193786b0d289ef673643058047222e073e273704dabdaa0955c5c81 type=CONTAINER_STARTED_EVENT
Mar 7 02:42:45.535283 systemd[1]: Started sshd@48-10.0.0.118:22-10.0.0.1:35972.service - OpenSSH per-connection server daemon (10.0.0.1:35972).
Mar 7 02:42:45.556875 containerd[1560]: time="2026-03-07T02:42:45.556822528Z" level=warning msg="container event discarded" container=d210d46f61a2ac895df3acd9c8c2b2b0d35b174a0df11140949feb567b1918c1 type=CONTAINER_CREATED_EVENT
Mar 7 02:42:45.723537 sshd[7346]: Accepted publickey for core from 10.0.0.1 port 35972 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:42:45.729933 sshd-session[7346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:42:45.754479 systemd-logind[1539]: New session 49 of user core.
Mar 7 02:42:45.777670 systemd[1]: Started session-49.scope - Session 49 of User core.
Mar 7 02:42:45.851145 containerd[1560]: time="2026-03-07T02:42:45.847591856Z" level=warning msg="container event discarded" container=2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0 type=CONTAINER_CREATED_EVENT
Mar 7 02:42:45.851145 containerd[1560]: time="2026-03-07T02:42:45.847734011Z" level=warning msg="container event discarded" container=2719a19b13e863e20c8492ce3d3faa1ce52850912b8cb236bb2628f99e47cff0 type=CONTAINER_STARTED_EVENT
Mar 7 02:42:45.923379 containerd[1560]: time="2026-03-07T02:42:45.923209983Z" level=warning msg="container event discarded" container=d210d46f61a2ac895df3acd9c8c2b2b0d35b174a0df11140949feb567b1918c1 type=CONTAINER_STARTED_EVENT
Mar 7 02:42:46.252836 sshd[7351]: Connection closed by 10.0.0.1 port 35972
Mar 7 02:42:46.251909 sshd-session[7346]: pam_unix(sshd:session): session closed for user core
Mar 7 02:42:46.292609 systemd[1]: sshd@48-10.0.0.118:22-10.0.0.1:35972.service: Deactivated successfully.
Mar 7 02:42:46.307563 systemd[1]: session-49.scope: Deactivated successfully.
Mar 7 02:42:46.311760 systemd-logind[1539]: Session 49 logged out. Waiting for processes to exit.
Mar 7 02:42:46.323443 systemd-logind[1539]: Removed session 49.
Mar 7 02:42:46.742770 containerd[1560]: time="2026-03-07T02:42:46.742681054Z" level=warning msg="container event discarded" container=da6a73fb0951ef5b525e558370341bfffd009f57e99f762e139cb541ffaecb9d type=CONTAINER_STARTED_EVENT
Mar 7 02:42:47.039740 containerd[1560]: time="2026-03-07T02:42:47.034534713Z" level=warning msg="container event discarded" container=3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e type=CONTAINER_CREATED_EVENT
Mar 7 02:42:47.039740 containerd[1560]: time="2026-03-07T02:42:47.034586790Z" level=warning msg="container event discarded" container=3897d62b534d79bc9eb7eeb39e0f2b147aab3b9c722737fa1bfa5a6b792a8b5e type=CONTAINER_STARTED_EVENT
Mar 7 02:42:51.283567 systemd[1]: Started sshd@49-10.0.0.118:22-10.0.0.1:35976.service - OpenSSH per-connection server daemon (10.0.0.1:35976).
Mar 7 02:42:51.440977 sshd[7386]: Accepted publickey for core from 10.0.0.1 port 35976 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:42:51.449809 sshd-session[7386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:42:51.498663 systemd-logind[1539]: New session 50 of user core.
Mar 7 02:42:51.527764 systemd[1]: Started session-50.scope - Session 50 of User core.
Mar 7 02:42:51.979265 sshd[7389]: Connection closed by 10.0.0.1 port 35976
Mar 7 02:42:51.977633 sshd-session[7386]: pam_unix(sshd:session): session closed for user core
Mar 7 02:42:51.992455 systemd[1]: sshd@49-10.0.0.118:22-10.0.0.1:35976.service: Deactivated successfully.
Mar 7 02:42:52.015637 systemd[1]: session-50.scope: Deactivated successfully.
Mar 7 02:42:52.028918 systemd-logind[1539]: Session 50 logged out. Waiting for processes to exit.
Mar 7 02:42:52.046104 systemd-logind[1539]: Removed session 50.
Mar 7 02:42:54.405409 containerd[1560]: time="2026-03-07T02:42:54.405217117Z" level=warning msg="container event discarded" container=2959c3e27bc0fe4ac81c0c16acb5461e810e45a90afd6bb83013a7cb8bfcf2ba type=CONTAINER_CREATED_EVENT
Mar 7 02:42:54.745793 containerd[1560]: time="2026-03-07T02:42:54.744859010Z" level=warning msg="container event discarded" container=2959c3e27bc0fe4ac81c0c16acb5461e810e45a90afd6bb83013a7cb8bfcf2ba type=CONTAINER_STARTED_EVENT
Mar 7 02:42:57.021141 systemd[1]: Started sshd@50-10.0.0.118:22-10.0.0.1:37104.service - OpenSSH per-connection server daemon (10.0.0.1:37104).
Mar 7 02:42:57.117997 sshd[7418]: Accepted publickey for core from 10.0.0.1 port 37104 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:42:57.120835 sshd-session[7418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:42:57.136486 systemd-logind[1539]: New session 51 of user core.
Mar 7 02:42:57.143772 systemd[1]: Started session-51.scope - Session 51 of User core.
Mar 7 02:42:57.493761 sshd[7421]: Connection closed by 10.0.0.1 port 37104
Mar 7 02:42:57.490506 sshd-session[7418]: pam_unix(sshd:session): session closed for user core
Mar 7 02:42:57.506695 systemd[1]: sshd@50-10.0.0.118:22-10.0.0.1:37104.service: Deactivated successfully.
Mar 7 02:42:57.516872 systemd[1]: session-51.scope: Deactivated successfully.
Mar 7 02:42:57.526494 systemd-logind[1539]: Session 51 logged out. Waiting for processes to exit.
Mar 7 02:42:57.538211 systemd-logind[1539]: Removed session 51.
Mar 7 02:42:59.616289 kubelet[2804]: E0307 02:42:59.613431 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:43:02.523864 systemd[1]: Started sshd@51-10.0.0.118:22-10.0.0.1:46994.service - OpenSSH per-connection server daemon (10.0.0.1:46994).
Mar 7 02:43:02.692268 sshd[7461]: Accepted publickey for core from 10.0.0.1 port 46994 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:43:02.694473 sshd-session[7461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:43:02.723287 systemd-logind[1539]: New session 52 of user core.
Mar 7 02:43:02.736800 systemd[1]: Started session-52.scope - Session 52 of User core.
Mar 7 02:43:02.973734 sshd[7464]: Connection closed by 10.0.0.1 port 46994
Mar 7 02:43:02.973234 sshd-session[7461]: pam_unix(sshd:session): session closed for user core
Mar 7 02:43:02.984871 systemd[1]: sshd@51-10.0.0.118:22-10.0.0.1:46994.service: Deactivated successfully.
Mar 7 02:43:02.989234 systemd[1]: session-52.scope: Deactivated successfully.
Mar 7 02:43:02.999496 systemd-logind[1539]: Session 52 logged out. Waiting for processes to exit.
Mar 7 02:43:03.010673 systemd-logind[1539]: Removed session 52.
Mar 7 02:43:04.617379 kubelet[2804]: E0307 02:43:04.617246 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:43:04.886692 containerd[1560]: time="2026-03-07T02:43:04.886480237Z" level=warning msg="container event discarded" container=5f0502672d99b63a88240e750a153edd873c25b26c4f315c215f7230254cfb3e type=CONTAINER_CREATED_EVENT
Mar 7 02:43:05.272800 containerd[1560]: time="2026-03-07T02:43:05.272583398Z" level=warning msg="container event discarded" container=673774c1213ee1937c41b008c811a531fc665db48b7ef4e13d7f00d44bf321a4 type=CONTAINER_CREATED_EVENT
Mar 7 02:43:05.575424 containerd[1560]: time="2026-03-07T02:43:05.575199454Z" level=warning msg="container event discarded" container=5f0502672d99b63a88240e750a153edd873c25b26c4f315c215f7230254cfb3e type=CONTAINER_STARTED_EVENT
Mar 7 02:43:06.111416 containerd[1560]: time="2026-03-07T02:43:06.108291248Z" level=warning msg="container event discarded" container=673774c1213ee1937c41b008c811a531fc665db48b7ef4e13d7f00d44bf321a4 type=CONTAINER_STARTED_EVENT
Mar 7 02:43:08.025719 systemd[1]: Started sshd@52-10.0.0.118:22-10.0.0.1:47004.service - OpenSSH per-connection server daemon (10.0.0.1:47004).
Mar 7 02:43:08.247949 containerd[1560]: time="2026-03-07T02:43:08.246395048Z" level=warning msg="container event discarded" container=6280461cc584b32642e581c8dad9c0a2b17f7ad24eb0cd4d4372037091870580 type=CONTAINER_CREATED_EVENT
Mar 7 02:43:08.300731 sshd[7509]: Accepted publickey for core from 10.0.0.1 port 47004 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:43:08.311247 sshd-session[7509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:43:08.343696 systemd-logind[1539]: New session 53 of user core.
Mar 7 02:43:08.378788 systemd[1]: Started session-53.scope - Session 53 of User core.
Mar 7 02:43:08.857469 sshd[7512]: Connection closed by 10.0.0.1 port 47004
Mar 7 02:43:08.856732 sshd-session[7509]: pam_unix(sshd:session): session closed for user core
Mar 7 02:43:08.886138 systemd[1]: sshd@52-10.0.0.118:22-10.0.0.1:47004.service: Deactivated successfully.
Mar 7 02:43:08.897008 systemd[1]: session-53.scope: Deactivated successfully.
Mar 7 02:43:08.907631 systemd-logind[1539]: Session 53 logged out. Waiting for processes to exit.
Mar 7 02:43:08.911302 systemd-logind[1539]: Removed session 53.
Mar 7 02:43:09.202270 containerd[1560]: time="2026-03-07T02:43:09.199800581Z" level=warning msg="container event discarded" container=6280461cc584b32642e581c8dad9c0a2b17f7ad24eb0cd4d4372037091870580 type=CONTAINER_STARTED_EVENT
Mar 7 02:43:13.905494 systemd[1]: Started sshd@53-10.0.0.118:22-10.0.0.1:40996.service - OpenSSH per-connection server daemon (10.0.0.1:40996).
Mar 7 02:43:14.183688 sshd[7526]: Accepted publickey for core from 10.0.0.1 port 40996 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ
Mar 7 02:43:14.186900 sshd-session[7526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:43:14.222884 systemd-logind[1539]: New session 54 of user core.
Mar 7 02:43:14.282965 systemd[1]: Started session-54.scope - Session 54 of User core.
Mar 7 02:43:14.630230 kubelet[2804]: E0307 02:43:14.614391 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:43:14.630230 kubelet[2804]: E0307 02:43:14.620794 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:43:14.988756 sshd[7529]: Connection closed by 10.0.0.1 port 40996
Mar 7 02:43:14.993177 sshd-session[7526]: pam_unix(sshd:session): session closed for user core
Mar 7 02:43:15.008998 systemd-logind[1539]: Session 54 logged out. Waiting for processes to exit.
Mar 7 02:43:15.017104 systemd[1]: sshd@53-10.0.0.118:22-10.0.0.1:40996.service: Deactivated successfully.
Mar 7 02:43:15.030497 systemd[1]: session-54.scope: Deactivated successfully.
Mar 7 02:43:15.045238 systemd-logind[1539]: Removed session 54.
Mar 7 02:43:19.323081 containerd[1560]: time="2026-03-07T02:43:19.322747888Z" level=warning msg="container event discarded" container=c0f27aa621c9017bda734f3cd17d9d2f692ccd965f6e415cae003e4d13e61099 type=CONTAINER_CREATED_EVENT Mar 7 02:43:19.833822 containerd[1560]: time="2026-03-07T02:43:19.833735339Z" level=warning msg="container event discarded" container=c0f27aa621c9017bda734f3cd17d9d2f692ccd965f6e415cae003e4d13e61099 type=CONTAINER_STARTED_EVENT Mar 7 02:43:20.046735 systemd[1]: Started sshd@54-10.0.0.118:22-10.0.0.1:41006.service - OpenSSH per-connection server daemon (10.0.0.1:41006). Mar 7 02:43:20.292987 sshd[7543]: Accepted publickey for core from 10.0.0.1 port 41006 ssh2: RSA SHA256:JKJ1/iVlHzuldEXlsScuYN1RiOZaGownH1IMBuP6bAQ Mar 7 02:43:20.296618 sshd-session[7543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:43:20.321843 systemd-logind[1539]: New session 55 of user core. Mar 7 02:43:20.367303 systemd[1]: Started session-55.scope - Session 55 of User core. Mar 7 02:43:20.744485 sshd[7565]: Connection closed by 10.0.0.1 port 41006 Mar 7 02:43:20.746820 sshd-session[7543]: pam_unix(sshd:session): session closed for user core Mar 7 02:43:20.770775 systemd[1]: sshd@54-10.0.0.118:22-10.0.0.1:41006.service: Deactivated successfully. Mar 7 02:43:20.775513 systemd[1]: session-55.scope: Deactivated successfully. Mar 7 02:43:20.784569 systemd-logind[1539]: Session 55 logged out. Waiting for processes to exit. Mar 7 02:43:20.789237 systemd-logind[1539]: Removed session 55. Mar 7 02:43:23.658296 kubelet[2804]: E0307 02:43:23.644205 2804 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"