Mar 13 00:43:47.153753 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026
Mar 13 00:43:47.154049 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:43:47.154062 kernel: BIOS-provided physical RAM map:
Mar 13 00:43:47.154076 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 13 00:43:47.154086 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 13 00:43:47.154095 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 13 00:43:47.154105 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 13 00:43:47.154114 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 13 00:43:47.154123 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 13 00:43:47.154133 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 13 00:43:47.154142 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Mar 13 00:43:47.154151 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 13 00:43:47.154163 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 13 00:43:47.154173 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 13 00:43:47.154185 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 13 00:43:47.154195 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 13 00:43:47.154205 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 13 00:43:47.154218 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 13 00:43:47.154227 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 13 00:43:47.154237 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 13 00:43:47.154248 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 13 00:43:47.154257 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 13 00:43:47.154719 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 13 00:43:47.154730 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 13 00:43:47.154740 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 13 00:43:47.154749 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 13 00:43:47.155023 kernel: NX (Execute Disable) protection: active
Mar 13 00:43:47.155036 kernel: APIC: Static calls initialized
Mar 13 00:43:47.155101 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Mar 13 00:43:47.155112 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Mar 13 00:43:47.155122 kernel: extended physical RAM map:
Mar 13 00:43:47.155132 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 13 00:43:47.155142 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 13 00:43:47.155151 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 13 00:43:47.155161 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 13 00:43:47.155171 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 13 00:43:47.155181 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 13 00:43:47.155191 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 13 00:43:47.155200 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Mar 13 00:43:47.155219 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Mar 13 00:43:47.155234 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Mar 13 00:43:47.155245 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Mar 13 00:43:47.155255 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Mar 13 00:43:47.155266 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 13 00:43:47.155281 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 13 00:43:47.155290 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 13 00:43:47.155299 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 13 00:43:47.155310 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 13 00:43:47.155320 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 13 00:43:47.155331 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 13 00:43:47.155341 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 13 00:43:47.155352 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 13 00:43:47.155363 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 13 00:43:47.155373 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 13 00:43:47.155383 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 13 00:43:47.155397 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 13 00:43:47.155408 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 13 00:43:47.155418 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 13 00:43:47.155428 kernel: efi: EFI v2.7 by EDK II
Mar 13 00:43:47.155439 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Mar 13 00:43:47.155448 kernel: random: crng init done
Mar 13 00:43:47.155458 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 13 00:43:47.155468 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 13 00:43:47.155478 kernel: secureboot: Secure boot disabled
Mar 13 00:43:47.155489 kernel: SMBIOS 2.8 present.
Mar 13 00:43:47.155500 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 13 00:43:47.155516 kernel: DMI: Memory slots populated: 1/1
Mar 13 00:43:47.155525 kernel: Hypervisor detected: KVM
Mar 13 00:43:47.155535 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 13 00:43:47.155546 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 13 00:43:47.155557 kernel: kvm-clock: using sched offset of 10371898755 cycles
Mar 13 00:43:47.155568 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 13 00:43:47.155578 kernel: tsc: Detected 2445.426 MHz processor
Mar 13 00:43:47.155589 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 13 00:43:47.155600 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 13 00:43:47.155611 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 13 00:43:47.155621 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 13 00:43:47.155635 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 13 00:43:47.155646 kernel: Using GB pages for direct mapping
Mar 13 00:43:47.155658 kernel: ACPI: Early table checksum verification disabled
Mar 13 00:43:47.155668 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 13 00:43:47.155679 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 13 00:43:47.155690 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:43:47.155700 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:43:47.155710 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 13 00:43:47.155724 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:43:47.155735 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:43:47.155746 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:43:47.156008 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:43:47.156023 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 13 00:43:47.156034 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 13 00:43:47.156044 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 13 00:43:47.156054 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 13 00:43:47.156065 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 13 00:43:47.156081 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 13 00:43:47.156091 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 13 00:43:47.156102 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 13 00:43:47.156112 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 13 00:43:47.156123 kernel: No NUMA configuration found
Mar 13 00:43:47.156134 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Mar 13 00:43:47.156145 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Mar 13 00:43:47.156156 kernel: Zone ranges:
Mar 13 00:43:47.156166 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 13 00:43:47.156180 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Mar 13 00:43:47.156191 kernel: Normal empty
Mar 13 00:43:47.156202 kernel: Device empty
Mar 13 00:43:47.156213 kernel: Movable zone start for each node
Mar 13 00:43:47.156223 kernel: Early memory node ranges
Mar 13 00:43:47.156233 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 13 00:43:47.156243 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 13 00:43:47.156254 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 13 00:43:47.156264 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 13 00:43:47.156274 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Mar 13 00:43:47.156288 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Mar 13 00:43:47.156299 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Mar 13 00:43:47.156309 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Mar 13 00:43:47.156320 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Mar 13 00:43:47.156330 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 00:43:47.156352 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 13 00:43:47.156366 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 13 00:43:47.156377 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 00:43:47.156388 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 13 00:43:47.156399 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 13 00:43:47.156410 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 13 00:43:47.156424 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 13 00:43:47.156435 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Mar 13 00:43:47.156446 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 13 00:43:47.156457 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 13 00:43:47.156468 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 13 00:43:47.156482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 13 00:43:47.156493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 13 00:43:47.156504 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 13 00:43:47.156515 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 13 00:43:47.156527 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 13 00:43:47.156537 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 13 00:43:47.156548 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 13 00:43:47.156559 kernel: TSC deadline timer available
Mar 13 00:43:47.156570 kernel: CPU topo: Max. logical packages: 1
Mar 13 00:43:47.156584 kernel: CPU topo: Max. logical dies: 1
Mar 13 00:43:47.156595 kernel: CPU topo: Max. dies per package: 1
Mar 13 00:43:47.156605 kernel: CPU topo: Max. threads per core: 1
Mar 13 00:43:47.156616 kernel: CPU topo: Num. cores per package: 4
Mar 13 00:43:47.156627 kernel: CPU topo: Num. threads per package: 4
Mar 13 00:43:47.156638 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 13 00:43:47.156648 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 13 00:43:47.156659 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 13 00:43:47.156670 kernel: kvm-guest: setup PV sched yield
Mar 13 00:43:47.156681 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 13 00:43:47.156695 kernel: Booting paravirtualized kernel on KVM
Mar 13 00:43:47.156707 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 13 00:43:47.156718 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 13 00:43:47.156729 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 13 00:43:47.156740 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 13 00:43:47.156751 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 13 00:43:47.157018 kernel: kvm-guest: PV spinlocks enabled
Mar 13 00:43:47.157118 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 13 00:43:47.157136 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:43:47.157148 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 13 00:43:47.157159 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 13 00:43:47.157170 kernel: Fallback order for Node 0: 0
Mar 13 00:43:47.157181 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Mar 13 00:43:47.157191 kernel: Policy zone: DMA32
Mar 13 00:43:47.157202 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 00:43:47.157213 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 13 00:43:47.157224 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 13 00:43:47.157240 kernel: ftrace: allocated 157 pages with 5 groups
Mar 13 00:43:47.157251 kernel: Dynamic Preempt: voluntary
Mar 13 00:43:47.157262 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 00:43:47.157274 kernel: rcu: RCU event tracing is enabled.
Mar 13 00:43:47.158312 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 13 00:43:47.158351 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 00:43:47.158363 kernel: Rude variant of Tasks RCU enabled.
Mar 13 00:43:47.158450 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 00:43:47.158465 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 00:43:47.158484 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 13 00:43:47.158496 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 13 00:43:47.158507 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 13 00:43:47.158518 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 13 00:43:47.158529 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 13 00:43:47.158540 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 00:43:47.158551 kernel: Console: colour dummy device 80x25
Mar 13 00:43:47.158562 kernel: printk: legacy console [ttyS0] enabled
Mar 13 00:43:47.158573 kernel: ACPI: Core revision 20240827
Mar 13 00:43:47.158587 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 13 00:43:47.158598 kernel: APIC: Switch to symmetric I/O mode setup
Mar 13 00:43:47.158609 kernel: x2apic enabled
Mar 13 00:43:47.158620 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 13 00:43:47.158632 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 13 00:43:47.158643 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 13 00:43:47.158653 kernel: kvm-guest: setup PV IPIs
Mar 13 00:43:47.158664 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 13 00:43:47.158675 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 13 00:43:47.158695 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 13 00:43:47.158706 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 13 00:43:47.158718 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 13 00:43:47.158728 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 13 00:43:47.158740 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 13 00:43:47.158751 kernel: Spectre V2 : Mitigation: Retpolines
Mar 13 00:43:47.159151 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 13 00:43:47.159164 kernel: Speculative Store Bypass: Vulnerable
Mar 13 00:43:47.159176 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 13 00:43:47.159276 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 13 00:43:47.159288 kernel: active return thunk: srso_alias_return_thunk
Mar 13 00:43:47.159299 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 13 00:43:47.159311 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 13 00:43:47.159322 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 13 00:43:47.159334 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 13 00:43:47.159344 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 13 00:43:47.159355 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 13 00:43:47.159381 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 13 00:43:47.159393 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 13 00:43:47.159404 kernel: Freeing SMP alternatives memory: 32K
Mar 13 00:43:47.159415 kernel: pid_max: default: 32768 minimum: 301
Mar 13 00:43:47.159426 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 13 00:43:47.159438 kernel: landlock: Up and running.
Mar 13 00:43:47.159449 kernel: SELinux: Initializing.
Mar 13 00:43:47.159461 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:43:47.159551 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:43:47.159568 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 13 00:43:47.159579 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 13 00:43:47.159590 kernel: signal: max sigframe size: 1776
Mar 13 00:43:47.159602 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 00:43:47.159613 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 00:43:47.159625 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 13 00:43:47.159636 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 13 00:43:47.159647 kernel: smp: Bringing up secondary CPUs ...
Mar 13 00:43:47.159658 kernel: smpboot: x86: Booting SMP configuration:
Mar 13 00:43:47.159672 kernel: .... node #0, CPUs: #1 #2 #3
Mar 13 00:43:47.159683 kernel: smp: Brought up 1 node, 4 CPUs
Mar 13 00:43:47.159694 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 13 00:43:47.159705 kernel: Memory: 2414476K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145388K reserved, 0K cma-reserved)
Mar 13 00:43:47.160434 kernel: devtmpfs: initialized
Mar 13 00:43:47.160448 kernel: x86/mm: Memory block size: 128MB
Mar 13 00:43:47.160460 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 13 00:43:47.160471 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 13 00:43:47.160482 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 13 00:43:47.160532 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 13 00:43:47.160544 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Mar 13 00:43:47.160555 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 13 00:43:47.160566 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 00:43:47.160578 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 13 00:43:47.160589 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 00:43:47.160600 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 00:43:47.160611 kernel: audit: initializing netlink subsys (disabled)
Mar 13 00:43:47.160622 kernel: audit: type=2000 audit(1773362610.806:1): state=initialized audit_enabled=0 res=1
Mar 13 00:43:47.160636 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 00:43:47.160647 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 13 00:43:47.160658 kernel: cpuidle: using governor menu
Mar 13 00:43:47.160669 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 00:43:47.160680 kernel: dca service started, version 1.12.1
Mar 13 00:43:47.160691 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Mar 13 00:43:47.160702 kernel: PCI: Using configuration type 1 for base access
Mar 13 00:43:47.160714 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 13 00:43:47.160726 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 00:43:47.160740 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 00:43:47.160751 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 00:43:47.161014 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 00:43:47.161028 kernel: ACPI: Added _OSI(Module Device)
Mar 13 00:43:47.161040 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 00:43:47.161051 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 00:43:47.161063 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 13 00:43:47.161074 kernel: ACPI: Interpreter enabled
Mar 13 00:43:47.161085 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 13 00:43:47.161102 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 13 00:43:47.161113 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 13 00:43:47.161125 kernel: PCI: Using E820 reservations for host bridge windows
Mar 13 00:43:47.161136 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 13 00:43:47.161147 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 13 00:43:47.162404 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 13 00:43:47.162702 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 13 00:43:47.163170 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 13 00:43:47.163189 kernel: PCI host bridge to bus 0000:00
Mar 13 00:43:47.164349 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 13 00:43:47.164514 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 13 00:43:47.164668 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 13 00:43:47.165114 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 13 00:43:47.165282 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 13 00:43:47.165441 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 13 00:43:47.165594 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 13 00:43:47.166152 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 13 00:43:47.166500 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 13 00:43:47.166671 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Mar 13 00:43:47.167180 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Mar 13 00:43:47.167377 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 13 00:43:47.167561 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 13 00:43:47.167736 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 11718 usecs
Mar 13 00:43:47.168177 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 13 00:43:47.168359 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Mar 13 00:43:47.168518 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Mar 13 00:43:47.168687 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 13 00:43:47.169263 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 13 00:43:47.170122 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Mar 13 00:43:47.170600 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Mar 13 00:43:47.171049 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 13 00:43:47.171662 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 13 00:43:47.172101 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Mar 13 00:43:47.172284 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Mar 13 00:43:47.172471 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 13 00:43:47.172739 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Mar 13 00:43:47.173261 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 13 00:43:47.173432 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 13 00:43:47.173598 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 11718 usecs
Mar 13 00:43:47.174054 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 13 00:43:47.174246 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Mar 13 00:43:47.174443 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Mar 13 00:43:47.174648 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 13 00:43:47.175112 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Mar 13 00:43:47.175134 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 13 00:43:47.175146 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 13 00:43:47.177024 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 13 00:43:47.177042 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 13 00:43:47.177088 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 13 00:43:47.177099 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 13 00:43:47.177109 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 13 00:43:47.177120 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 13 00:43:47.177131 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 13 00:43:47.177141 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 13 00:43:47.177151 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 13 00:43:47.177161 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 13 00:43:47.177171 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 13 00:43:47.177185 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 13 00:43:47.177197 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 13 00:43:47.177207 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 13 00:43:47.177216 kernel: iommu: Default domain type: Translated
Mar 13 00:43:47.177225 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 13 00:43:47.177235 kernel: efivars: Registered efivars operations
Mar 13 00:43:47.177244 kernel: PCI: Using ACPI for IRQ routing
Mar 13 00:43:47.177253 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 13 00:43:47.177263 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 13 00:43:47.177276 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Mar 13 00:43:47.177285 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Mar 13 00:43:47.177295 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Mar 13 00:43:47.177305 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Mar 13 00:43:47.177314 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Mar 13 00:43:47.177324 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Mar 13 00:43:47.177334 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Mar 13 00:43:47.178111 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 13 00:43:47.178302 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 13 00:43:47.178752 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 13 00:43:47.179023 kernel: vgaarb: loaded
Mar 13 00:43:47.179035 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 13 00:43:47.179045 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 13 00:43:47.179055 kernel: clocksource: Switched to clocksource kvm-clock
Mar 13 00:43:47.179065 kernel: VFS: Disk quotas dquot_6.6.0
Mar 13 00:43:47.179075 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 13 00:43:47.179086 kernel: pnp: PnP ACPI init
Mar 13 00:43:47.179470 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 13 00:43:47.179496 kernel: pnp: PnP ACPI: found 6 devices
Mar 13 00:43:47.179508 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 13 00:43:47.179518 kernel: NET: Registered PF_INET protocol family
Mar 13 00:43:47.179529 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 13 00:43:47.179540 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 13 00:43:47.179574 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 13 00:43:47.179588 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 13 00:43:47.179599 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 13 00:43:47.179613 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 13 00:43:47.179624 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 00:43:47.179636 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 00:43:47.179648 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 13 00:43:47.179660 kernel: NET: Registered PF_XDP protocol family
Mar 13 00:43:47.180090 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Mar 13 00:43:47.180280 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Mar 13 00:43:47.180446 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 13 00:43:47.180597 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 13 00:43:47.181480 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 13 00:43:47.182031 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 13 00:43:47.182209 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 13 00:43:47.182362 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 13 00:43:47.182379 kernel: PCI: CLS 0 bytes, default 64
Mar 13 00:43:47.182391 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 13 00:43:47.182403 kernel: Initialise system trusted keyrings
Mar 13 00:43:47.182420 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 13 00:43:47.182432 kernel: Key type asymmetric registered
Mar 13 00:43:47.182443 kernel: Asymmetric key parser 'x509' registered
Mar 13 00:43:47.182455 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 13 00:43:47.182467 kernel: io scheduler mq-deadline registered
Mar 13 00:43:47.182478 kernel: io scheduler kyber registered
Mar 13 00:43:47.182490 kernel: io scheduler bfq registered
Mar 13 00:43:47.182501 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 13 00:43:47.182514 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 13 00:43:47.182532 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 13 00:43:47.182542 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 13 00:43:47.182554 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 13 00:43:47.182568 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 13 00:43:47.182579 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 13 00:43:47.182592 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 13 00:43:47.182610 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 13 00:43:47.183307 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 13 00:43:47.183328 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 13 00:43:47.183494 kernel: rtc_cmos 00:04: registered as rtc0
Mar 13 00:43:47.183653 kernel: rtc_cmos 00:04: setting system clock to 2026-03-13T00:43:44 UTC (1773362624)
Mar 13 00:43:47.184523 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 13 00:43:47.184539 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 13 00:43:47.184547 kernel: efifb: probing for efifb
Mar 13 00:43:47.184559 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 13 00:43:47.184566 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 13 00:43:47.184573 kernel: efifb: scrolling: redraw
Mar 13 00:43:47.184580 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 13 00:43:47.184588 kernel: Console: switching to colour frame buffer device 160x50
Mar 13 00:43:47.184595 kernel: fb0: EFI VGA frame buffer device
Mar 13 00:43:47.184602 kernel: pstore: Using crash dump compression: deflate
Mar 13 00:43:47.184609 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 13 00:43:47.184616 kernel: NET: Registered PF_INET6 protocol family
Mar 13 00:43:47.184626 kernel: Segment Routing with IPv6
Mar 13 00:43:47.184633 kernel: In-situ OAM (IOAM) with IPv6
Mar 13 00:43:47.184640 kernel: NET: Registered PF_PACKET protocol family
Mar 13 00:43:47.184648 kernel: Key type dns_resolver registered
Mar 13 00:43:47.184655 kernel: IPI shorthand broadcast: enabled
Mar 13 00:43:47.184662 kernel: sched_clock: Marking stable (13414117536, 2093534873)->(16423323659, -915671250)
Mar 13 00:43:47.184669 kernel: registered taskstats version 1
Mar 13 00:43:47.184676 kernel: Loading compiled-in X.509 certificates
Mar 13 00:43:47.184683 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8'
Mar 13 00:43:47.184692 kernel: Demotion targets for Node 0: null
Mar 13 00:43:47.184699 kernel: Key type .fscrypt registered
Mar 13 00:43:47.184706 kernel: Key type fscrypt-provisioning registered
Mar 13 00:43:47.184713 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 13 00:43:47.184720 kernel: ima: Allocated hash algorithm: sha1
Mar 13 00:43:47.184729 kernel: ima: No architecture policies found
Mar 13 00:43:47.184737 kernel: clk: Disabling unused clocks
Mar 13 00:43:47.184744 kernel: Warning: unable to open an initial console.
Mar 13 00:43:47.184751 kernel: Freeing unused kernel image (initmem) memory: 46200K
Mar 13 00:43:47.185000 kernel: Write protecting the kernel read-only data: 40960k
Mar 13 00:43:47.185010 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 13 00:43:47.185017 kernel: Run /init as init process
Mar 13 00:43:47.185024 kernel: with arguments:
Mar 13 00:43:47.185032 kernel: /init
Mar 13 00:43:47.185039 kernel: with environment:
Mar 13 00:43:47.185046 kernel: HOME=/
Mar 13 00:43:47.185053 kernel: TERM=linux
Mar 13 00:43:47.185133 systemd[1]: Successfully made /usr/ read-only.
Mar 13 00:43:47.185151 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:43:47.185159 systemd[1]: Detected virtualization kvm.
Mar 13 00:43:47.185168 systemd[1]: Detected architecture x86-64.
Mar 13 00:43:47.185176 systemd[1]: Running in initrd.
Mar 13 00:43:47.185183 systemd[1]: No hostname configured, using default hostname.
Mar 13 00:43:47.185191 systemd[1]: Hostname set to .
Mar 13 00:43:47.185199 systemd[1]: Initializing machine ID from VM UUID.
Mar 13 00:43:47.185208 systemd[1]: Queued start job for default target initrd.target.
Mar 13 00:43:47.185216 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:43:47.185224 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:43:47.185232 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 13 00:43:47.185240 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:43:47.185248 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 13 00:43:47.185256 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 13 00:43:47.185269 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 13 00:43:47.185283 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 13 00:43:47.185296 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:43:47.185307 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:43:47.185319 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:43:47.185332 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:43:47.185344 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:43:47.185358 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:43:47.185374 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:43:47.185388 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:43:47.185400 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 13 00:43:47.185414 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 13 00:43:47.185426 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:43:47.185439 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:43:47.185453 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:43:47.185466 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:43:47.185482 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 13 00:43:47.185496 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:43:47.185509 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 13 00:43:47.185523 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 13 00:43:47.185535 systemd[1]: Starting systemd-fsck-usr.service...
Mar 13 00:43:47.185547 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:43:47.185560 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:43:47.185574 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:43:47.185586 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 13 00:43:47.185604 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:43:47.185743 systemd-journald[204]: Collecting audit messages is disabled.
Mar 13 00:43:47.186438 systemd[1]: Finished systemd-fsck-usr.service.
Mar 13 00:43:47.186448 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 00:43:47.186458 systemd-journald[204]: Journal started
Mar 13 00:43:47.186553 systemd-journald[204]: Runtime Journal (/run/log/journal/f8dfc79f4fcb460c9414786a18ef97bd) is 6M, max 48.1M, 42.1M free.
Mar 13 00:43:47.169360 systemd-modules-load[205]: Inserted module 'overlay'
Mar 13 00:43:47.244728 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:43:47.248310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:43:47.266394 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:43:47.291668 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 00:43:47.358657 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:43:47.441633 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 13 00:43:47.441672 kernel: Bridge firewalling registered
Mar 13 00:43:47.391233 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:43:47.441418 systemd-modules-load[205]: Inserted module 'br_netfilter'
Mar 13 00:43:47.459670 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:43:47.470685 systemd-tmpfiles[225]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 13 00:43:47.470994 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:43:47.482283 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:43:47.493725 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:43:47.509726 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 00:43:47.533105 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:43:47.625618 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:43:47.705534 dracut-cmdline[238]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:43:47.648628 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:43:47.868686 systemd-resolved[255]: Positive Trust Anchors:
Mar 13 00:43:47.869272 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:43:47.869310 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:43:47.886982 systemd-resolved[255]: Defaulting to hostname 'linux'.
Mar 13 00:43:47.895186 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:43:47.976508 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:43:48.299134 kernel: SCSI subsystem initialized
Mar 13 00:43:48.342468 kernel: Loading iSCSI transport class v2.0-870.
Mar 13 00:43:48.392656 kernel: iscsi: registered transport (tcp)
Mar 13 00:43:48.464749 kernel: iscsi: registered transport (qla4xxx)
Mar 13 00:43:48.465157 kernel: QLogic iSCSI HBA Driver
Mar 13 00:43:48.563618 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:43:48.627085 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:43:48.641628 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:43:48.786731 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:43:48.791309 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 13 00:43:48.955508 kernel: raid6: avx2x4 gen() 26028 MB/s
Mar 13 00:43:48.977596 kernel: raid6: avx2x2 gen() 25494 MB/s
Mar 13 00:43:49.004699 kernel: raid6: avx2x1 gen() 17481 MB/s
Mar 13 00:43:49.005197 kernel: raid6: using algorithm avx2x4 gen() 26028 MB/s
Mar 13 00:43:49.033334 kernel: raid6: .... xor() 3648 MB/s, rmw enabled
Mar 13 00:43:49.033577 kernel: raid6: using avx2x2 recovery algorithm
Mar 13 00:43:49.081356 kernel: xor: automatically using best checksumming function avx
Mar 13 00:43:49.523278 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 13 00:43:49.548353 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:43:49.565463 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:43:49.662635 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Mar 13 00:43:49.688697 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:43:49.706472 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 13 00:43:49.780107 dracut-pre-trigger[456]: rd.md=0: removing MD RAID activation
Mar 13 00:43:49.886559 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:43:49.902540 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:43:50.104151 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:43:50.129284 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 13 00:43:50.216271 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 13 00:43:50.257546 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 13 00:43:50.284227 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 13 00:43:50.284657 kernel: GPT:9289727 != 19775487
Mar 13 00:43:50.284679 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 13 00:43:50.291718 kernel: GPT:9289727 != 19775487
Mar 13 00:43:50.296139 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 13 00:43:50.309040 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:43:50.317216 kernel: cryptd: max_cpu_qlen set to 1000
Mar 13 00:43:50.361411 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:43:50.362293 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:43:50.755458 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:43:50.787260 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:43:50.831388 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Mar 13 00:43:50.835114 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:43:50.864525 kernel: AES CTR mode by8 optimization enabled
Mar 13 00:43:50.883205 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:43:50.883338 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:43:50.932099 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:43:51.024614 kernel: libata version 3.00 loaded.
Mar 13 00:43:51.031749 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 13 00:43:51.084079 kernel: ahci 0000:00:1f.2: version 3.0
Mar 13 00:43:51.087089 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 13 00:43:51.122353 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Mar 13 00:43:51.122748 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Mar 13 00:43:51.126353 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 13 00:43:51.163060 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 13 00:43:51.173145 kernel: scsi host0: ahci
Mar 13 00:43:51.180069 kernel: scsi host1: ahci
Mar 13 00:43:51.181614 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 13 00:43:51.206347 kernel: scsi host2: ahci
Mar 13 00:43:51.194510 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 13 00:43:51.221639 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 13 00:43:51.229094 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 13 00:43:51.284041 kernel: scsi host3: ahci
Mar 13 00:43:51.290275 kernel: scsi host4: ahci
Mar 13 00:43:51.291122 kernel: scsi host5: ahci
Mar 13 00:43:51.299126 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:43:51.388398 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Mar 13 00:43:51.388438 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Mar 13 00:43:51.388455 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Mar 13 00:43:51.388469 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:43:51.388482 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Mar 13 00:43:51.388495 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Mar 13 00:43:51.388509 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Mar 13 00:43:51.388559 disk-uuid[618]: Primary Header is updated.
Mar 13 00:43:51.388559 disk-uuid[618]: Secondary Entries is updated.
Mar 13 00:43:51.388559 disk-uuid[618]: Secondary Header is updated.
Mar 13 00:43:51.726485 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:43:51.756973 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 13 00:43:51.767112 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 13 00:43:51.774416 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 13 00:43:51.785121 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 13 00:43:51.796432 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 13 00:43:51.796469 kernel: ata3.00: LPM support broken, forcing max_power
Mar 13 00:43:51.812644 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 13 00:43:51.812713 kernel: ata3.00: applying bridge limits
Mar 13 00:43:51.827227 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 13 00:43:51.841731 kernel: ata3.00: LPM support broken, forcing max_power
Mar 13 00:43:51.842280 kernel: ata3.00: configured for UDMA/100
Mar 13 00:43:51.861754 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 13 00:43:51.994335 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 13 00:43:51.995108 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 13 00:43:52.036017 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 13 00:43:52.396000 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:43:52.400282 disk-uuid[619]: The operation has completed successfully.
Mar 13 00:43:52.518418 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 13 00:43:52.518715 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 13 00:43:52.531429 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:43:52.565500 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:43:52.578733 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:43:52.590730 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:43:52.606323 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 13 00:43:52.621115 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 13 00:43:52.686982 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:43:52.722470 sh[653]: Success
Mar 13 00:43:52.781229 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 13 00:43:52.781445 kernel: device-mapper: uevent: version 1.0.3
Mar 13 00:43:52.794087 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 13 00:43:52.849161 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Mar 13 00:43:52.950019 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 13 00:43:52.961574 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 13 00:43:53.019577 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 13 00:43:53.071249 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (667)
Mar 13 00:43:53.071314 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3
Mar 13 00:43:53.071325 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:43:53.144213 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 13 00:43:53.144460 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 13 00:43:53.150493 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 13 00:43:53.151621 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:43:53.188644 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 13 00:43:53.191745 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 13 00:43:53.241760 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 13 00:43:53.319066 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (690)
Mar 13 00:43:53.339129 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:43:53.339197 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:43:53.369099 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 00:43:53.369161 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 00:43:53.397042 kernel: BTRFS info (device vda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:43:53.407359 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 13 00:43:53.412538 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 13 00:43:54.378032 kernel: hrtimer: interrupt took 4281723 ns
Mar 13 00:43:54.644334 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:43:54.671191 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:43:54.918116 systemd-networkd[841]: lo: Link UP Mar 13 00:43:54.918186 systemd-networkd[841]: lo: Gained carrier Mar 13 00:43:54.931600 systemd-networkd[841]: Enumeration completed Mar 13 00:43:54.932396 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 13 00:43:54.956720 systemd[1]: Reached target network.target - Network. Mar 13 00:43:54.975107 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:43:54.975180 systemd-networkd[841]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 00:43:55.021659 systemd-networkd[841]: eth0: Link UP Mar 13 00:43:55.030597 systemd-networkd[841]: eth0: Gained carrier Mar 13 00:43:55.030713 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:43:55.396168 systemd-networkd[841]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 13 00:43:55.452438 ignition[744]: Ignition 2.22.0 Mar 13 00:43:55.452598 ignition[744]: Stage: fetch-offline Mar 13 00:43:55.453433 ignition[744]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:43:55.453449 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 13 00:43:55.454248 ignition[744]: parsed url from cmdline: "" Mar 13 00:43:55.454254 ignition[744]: no config URL provided Mar 13 00:43:55.454337 ignition[744]: reading system config file "/usr/lib/ignition/user.ign" Mar 13 00:43:55.454350 ignition[744]: no config at "/usr/lib/ignition/user.ign" Mar 13 00:43:55.454386 ignition[744]: op(1): [started] loading QEMU firmware config module Mar 13 00:43:55.454393 ignition[744]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 13 00:43:55.549638 ignition[744]: op(1): [finished] loading QEMU firmware config module Mar 13 00:43:55.550033 ignition[744]: QEMU firmware config was not found. Ignoring... Mar 13 00:43:56.525708 ignition[744]: parsing config with SHA512: 32d86e20c050e93b742f6c7c68b905cb671b5af34a66bdb05ae37890329cf3fdcf52f93776db482a57b490a6571b98a5b445b87a95ecb2dfbefdbf7e095ef81e Mar 13 00:43:56.593102 unknown[744]: fetched base config from "system" Mar 13 00:43:56.593174 unknown[744]: fetched user config from "qemu" Mar 13 00:43:56.607541 ignition[744]: fetch-offline: fetch-offline passed Mar 13 00:43:56.615111 ignition[744]: Ignition finished successfully Mar 13 00:43:56.622407 systemd-networkd[841]: eth0: Gained IPv6LL Mar 13 00:43:56.633620 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 13 00:43:56.644555 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 13 00:43:56.646274 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 13 00:43:57.038205 ignition[849]: Ignition 2.22.0 Mar 13 00:43:57.038289 ignition[849]: Stage: kargs Mar 13 00:43:57.038573 ignition[849]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:43:57.038587 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 13 00:43:57.095153 ignition[849]: kargs: kargs passed Mar 13 00:43:57.095330 ignition[849]: Ignition finished successfully Mar 13 00:43:57.114563 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 13 00:43:57.129192 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
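The "parsing config with SHA512: ..." entry above is Ignition fingerprinting the config it is about to apply, presumably over the raw config bytes, which makes a boot log easy to match against a known config file. The digest itself is reproducible with nothing but the standard library (the local filename here is hypothetical):

    import hashlib

    # Reproduce the fingerprint Ignition logs before parsing a config.
    with open("user.ign", "rb") as f:   # hypothetical local copy of the config
        print(hashlib.sha512(f.read()).hexdigest())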
Mar 13 00:43:57.284303 ignition[857]: Ignition 2.22.0 Mar 13 00:43:57.284396 ignition[857]: Stage: disks Mar 13 00:43:57.284663 ignition[857]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:43:57.284678 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 13 00:43:57.303492 ignition[857]: disks: disks passed Mar 13 00:43:57.311576 ignition[857]: Ignition finished successfully Mar 13 00:43:57.323510 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 13 00:43:57.348135 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 13 00:43:57.360241 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 13 00:43:57.376653 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 13 00:43:57.397443 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 00:43:57.415998 systemd[1]: Reached target basic.target - Basic System. Mar 13 00:43:57.447628 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 13 00:43:57.537464 systemd-fsck[867]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 13 00:43:57.548637 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 13 00:43:57.579023 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 13 00:43:58.029200 kernel: EXT4-fs (vda9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none. Mar 13 00:43:58.030321 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 13 00:43:58.044448 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 13 00:43:58.066622 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 00:43:58.076201 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 13 00:43:58.083106 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 13 00:43:58.083167 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 13 00:43:58.083203 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 00:43:58.166175 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (876) Mar 13 00:43:58.157775 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 13 00:43:58.186524 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:43:58.186592 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 13 00:43:58.203560 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 13 00:43:58.245288 kernel: BTRFS info (device vda6): turning on async discard Mar 13 00:43:58.246078 kernel: BTRFS info (device vda6): enabling free space tree Mar 13 00:43:58.219161 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 13 00:43:58.347519 initrd-setup-root[901]: cut: /sysroot/etc/passwd: No such file or directory Mar 13 00:43:58.363469 initrd-setup-root[908]: cut: /sysroot/etc/group: No such file or directory Mar 13 00:43:58.378708 initrd-setup-root[915]: cut: /sysroot/etc/shadow: No such file or directory Mar 13 00:43:58.391215 initrd-setup-root[922]: cut: /sysroot/etc/gshadow: No such file or directory Mar 13 00:43:58.768330 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Mar 13 00:43:58.770778 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 13 00:43:58.821283 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 13 00:43:58.841554 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 13 00:43:58.857012 kernel: BTRFS info (device vda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:43:58.926653 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 13 00:43:59.042108 ignition[991]: INFO : Ignition 2.22.0 Mar 13 00:43:59.042108 ignition[991]: INFO : Stage: mount Mar 13 00:43:59.042108 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:43:59.042108 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 13 00:43:59.101731 ignition[991]: INFO : mount: mount passed Mar 13 00:43:59.101731 ignition[991]: INFO : Ignition finished successfully Mar 13 00:43:59.106353 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 13 00:43:59.114201 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 13 00:43:59.231414 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 00:43:59.850426 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1004) Mar 13 00:43:59.871185 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:43:59.871249 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 13 00:43:59.900663 kernel: BTRFS info (device vda6): turning on async discard Mar 13 00:43:59.900743 kernel: BTRFS info (device vda6): enabling free space tree Mar 13 00:43:59.904534 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 13 00:44:00.238253 ignition[1021]: INFO : Ignition 2.22.0 Mar 13 00:44:00.238253 ignition[1021]: INFO : Stage: files Mar 13 00:44:00.258559 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:44:00.258559 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 13 00:44:00.258559 ignition[1021]: DEBUG : files: compiled without relabeling support, skipping Mar 13 00:44:00.258559 ignition[1021]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 13 00:44:00.258559 ignition[1021]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 13 00:44:00.369171 ignition[1021]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 13 00:44:00.389017 ignition[1021]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 13 00:44:00.418942 unknown[1021]: wrote ssh authorized keys file for user: core Mar 13 00:44:00.429967 ignition[1021]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 13 00:44:00.447037 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 13 00:44:00.468110 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 13 00:44:00.569715 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 13 00:44:00.975573 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 13 00:44:00.975573 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing 
file "/sysroot/home/core/install.sh" Mar 13 00:44:01.039694 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 13 00:44:01.039694 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 13 00:44:01.081687 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 13 00:44:01.081687 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 00:44:01.081687 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 00:44:01.081687 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 00:44:01.081687 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 00:44:01.081687 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 00:44:01.081687 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 00:44:01.081687 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 13 00:44:01.081687 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 13 00:44:01.081687 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 13 00:44:01.081687 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Mar 13 00:44:01.400623 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 13 00:44:07.769416 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 13 00:44:07.793775 ignition[1021]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 13 00:44:07.812203 ignition[1021]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 00:44:07.812203 ignition[1021]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 00:44:07.812203 ignition[1021]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 13 00:44:07.812203 ignition[1021]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 13 00:44:07.812203 ignition[1021]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 13 00:44:07.812203 ignition[1021]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Mar 13 00:44:07.812203 ignition[1021]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 13 00:44:07.812203 ignition[1021]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 13 00:44:07.964242 ignition[1021]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 13 00:44:07.964242 ignition[1021]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 13 00:44:07.964242 ignition[1021]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 13 00:44:07.964242 ignition[1021]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 13 00:44:07.964242 ignition[1021]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 13 00:44:07.964242 ignition[1021]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 13 00:44:07.964242 ignition[1021]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 13 00:44:07.964242 ignition[1021]: INFO : files: files passed Mar 13 00:44:07.964242 ignition[1021]: INFO : Ignition finished successfully Mar 13 00:44:07.965791 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 13 00:44:08.016788 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 13 00:44:08.160640 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 13 00:44:08.170472 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 13 00:44:08.172329 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 13 00:44:08.268597 initrd-setup-root-after-ignition[1050]: grep: /sysroot/oem/oem-release: No such file or directory Mar 13 00:44:08.279196 initrd-setup-root-after-ignition[1052]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:44:08.291035 initrd-setup-root-after-ignition[1052]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf Mar 13 00:44:08.291035 initrd-setup-root-after-ignition[1056]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:44:08.306361 initrd-setup-root-after-ignition[1052]: : No such file or directory Mar 13 00:44:08.302140 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 00:44:08.313812 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 13 00:44:08.329760 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 13 00:44:08.446554 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 13 00:44:08.447080 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 13 00:44:08.457786 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 13 00:44:08.484261 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 13 00:44:08.512207 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 13 00:44:08.513437 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 13 00:44:08.627740 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Mar 13 00:44:08.646601 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 13 00:44:08.759038 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 13 00:44:08.771788 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 00:44:08.789605 systemd[1]: Stopped target timers.target - Timer Units. Mar 13 00:44:08.820071 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 13 00:44:08.820650 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 00:44:08.877166 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 13 00:44:08.889131 systemd[1]: Stopped target basic.target - Basic System. Mar 13 00:44:08.922398 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 13 00:44:08.937703 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 00:44:08.966237 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 13 00:44:09.002490 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Mar 13 00:44:09.031795 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 13 00:44:09.050715 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 00:44:09.072418 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 13 00:44:09.095772 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 13 00:44:09.129488 systemd[1]: Stopped target swap.target - Swaps. Mar 13 00:44:09.148452 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 13 00:44:09.149387 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 13 00:44:09.180322 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:44:09.193258 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 00:44:09.222120 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 13 00:44:09.224772 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 00:44:09.252390 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 13 00:44:09.269107 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 13 00:44:09.332062 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 13 00:44:09.348482 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 13 00:44:09.362340 systemd[1]: Stopped target paths.target - Path Units. Mar 13 00:44:09.385224 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 13 00:44:09.386684 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:44:09.392548 systemd[1]: Stopped target slices.target - Slice Units. Mar 13 00:44:09.437462 systemd[1]: Stopped target sockets.target - Socket Units. Mar 13 00:44:09.464652 systemd[1]: iscsid.socket: Deactivated successfully. Mar 13 00:44:09.465208 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 00:44:09.482288 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 13 00:44:09.482561 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 00:44:09.495629 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Mar 13 00:44:09.496215 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 00:44:09.507323 systemd[1]: ignition-files.service: Deactivated successfully. Mar 13 00:44:09.507500 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 13 00:44:09.534412 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 13 00:44:09.559105 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 13 00:44:09.559323 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:44:09.630173 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 13 00:44:09.650253 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 13 00:44:09.650502 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 00:44:09.690168 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 13 00:44:09.690475 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 13 00:44:09.739747 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 13 00:44:09.768516 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 13 00:44:09.769079 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 13 00:44:09.805080 ignition[1076]: INFO : Ignition 2.22.0 Mar 13 00:44:09.805080 ignition[1076]: INFO : Stage: umount Mar 13 00:44:09.805080 ignition[1076]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:44:09.805080 ignition[1076]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 13 00:44:09.841221 ignition[1076]: INFO : umount: umount passed Mar 13 00:44:09.841221 ignition[1076]: INFO : Ignition finished successfully Mar 13 00:44:09.856596 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 13 00:44:09.857160 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 13 00:44:09.864572 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 13 00:44:09.864697 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 13 00:44:09.905327 systemd[1]: Stopped target network.target - Network. Mar 13 00:44:09.905676 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 13 00:44:09.905773 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 13 00:44:09.921209 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 13 00:44:09.921276 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 13 00:44:09.937716 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 13 00:44:09.937792 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 13 00:44:09.954272 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 13 00:44:09.954328 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 13 00:44:09.970127 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 13 00:44:09.970188 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 13 00:44:09.991590 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 13 00:44:10.014192 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 13 00:44:10.071815 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 13 00:44:10.072417 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 13 00:44:10.110069 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
Mar 13 00:44:10.127256 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 13 00:44:10.127513 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 13 00:44:10.162798 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 13 00:44:10.164654 systemd[1]: Stopped target network-pre.target - Preparation for Network. Mar 13 00:44:10.197425 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 13 00:44:10.197559 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:44:10.268671 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 13 00:44:10.269203 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 13 00:44:10.269367 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 13 00:44:10.287604 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 13 00:44:10.287682 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:44:10.354750 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 13 00:44:10.355223 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 13 00:44:10.371236 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 13 00:44:10.371347 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:44:10.408249 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 00:44:10.435785 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 13 00:44:10.436186 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 13 00:44:10.501773 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 13 00:44:10.502510 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 00:44:10.542594 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 13 00:44:10.543059 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 13 00:44:10.562787 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 13 00:44:10.563119 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 13 00:44:10.580277 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 13 00:44:10.580345 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 00:44:10.598379 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 13 00:44:10.598472 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 13 00:44:10.628128 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 13 00:44:10.628306 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 13 00:44:10.658625 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 13 00:44:10.658768 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 13 00:44:10.683367 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 13 00:44:10.720315 systemd[1]: systemd-network-generator.service: Deactivated successfully. Mar 13 00:44:10.720571 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 00:44:10.756451 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Mar 13 00:44:10.756541 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 00:44:10.797537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:44:10.797676 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:44:10.871167 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Mar 13 00:44:10.871249 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 13 00:44:10.871313 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 13 00:44:10.872718 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 13 00:44:10.873225 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 13 00:44:10.878575 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 13 00:44:10.922408 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 13 00:44:10.953771 systemd[1]: Switching root. Mar 13 00:44:11.055702 systemd-journald[204]: Journal stopped Mar 13 00:44:14.885516 systemd-journald[204]: Received SIGTERM from PID 1 (systemd). Mar 13 00:44:14.885607 kernel: SELinux: policy capability network_peer_controls=1 Mar 13 00:44:14.885722 kernel: SELinux: policy capability open_perms=1 Mar 13 00:44:14.885739 kernel: SELinux: policy capability extended_socket_class=1 Mar 13 00:44:14.885755 kernel: SELinux: policy capability always_check_network=0 Mar 13 00:44:14.885769 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 13 00:44:14.885784 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 13 00:44:14.885798 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 13 00:44:14.885813 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 13 00:44:14.886133 kernel: SELinux: policy capability userspace_initial_context=0 Mar 13 00:44:14.886152 kernel: audit: type=1403 audit(1773362651.525:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 13 00:44:14.886170 systemd[1]: Successfully loaded SELinux policy in 226.569ms. Mar 13 00:44:14.886204 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.926ms. Mar 13 00:44:14.886222 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 13 00:44:14.886239 systemd[1]: Detected virtualization kvm. Mar 13 00:44:14.886255 systemd[1]: Detected architecture x86-64. Mar 13 00:44:14.886271 systemd[1]: Detected first boot. Mar 13 00:44:14.886292 systemd[1]: Initializing machine ID from VM UUID. Mar 13 00:44:14.886309 zram_generator::config[1120]: No configuration found. Mar 13 00:44:14.886326 kernel: Guest personality initialized and is inactive Mar 13 00:44:14.886342 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Mar 13 00:44:14.886358 kernel: Initialized host personality Mar 13 00:44:14.886380 kernel: NET: Registered PF_VSOCK protocol family Mar 13 00:44:14.886490 systemd[1]: Populated /etc with preset unit settings. Mar 13 00:44:14.886509 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Mar 13 00:44:14.886526 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 13 00:44:14.886553 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 13 00:44:14.886570 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 13 00:44:14.886588 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 13 00:44:14.886605 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 13 00:44:14.886621 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 13 00:44:14.886638 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 13 00:44:14.886654 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 13 00:44:14.886671 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 13 00:44:14.886691 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 13 00:44:14.886708 systemd[1]: Created slice user.slice - User and Session Slice. Mar 13 00:44:14.886731 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 00:44:14.886748 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:44:14.886765 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 13 00:44:14.886784 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 13 00:44:14.886801 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 13 00:44:14.887130 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 13 00:44:14.887160 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 13 00:44:14.887177 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 00:44:14.887284 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:44:14.887302 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 13 00:44:14.887319 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 13 00:44:14.887336 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 13 00:44:14.887353 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 13 00:44:14.887370 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 00:44:14.887387 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 13 00:44:14.887409 systemd[1]: Reached target slices.target - Slice Units. Mar 13 00:44:14.887426 systemd[1]: Reached target swap.target - Swaps. Mar 13 00:44:14.887443 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 13 00:44:14.887459 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 13 00:44:14.887484 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 13 00:44:14.887501 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:44:14.887518 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 13 00:44:14.887534 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 13 00:44:14.887551 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 13 00:44:14.887571 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 13 00:44:14.887587 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 13 00:44:14.887604 systemd[1]: Mounting media.mount - External Media Directory... Mar 13 00:44:14.887620 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:44:14.887635 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 13 00:44:14.887750 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 13 00:44:14.887770 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 13 00:44:14.887786 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 13 00:44:14.887801 systemd[1]: Reached target machines.target - Containers. Mar 13 00:44:14.890428 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 13 00:44:14.890492 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:44:14.890511 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 13 00:44:14.890530 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 13 00:44:14.890546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 00:44:14.890563 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 13 00:44:14.890581 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 00:44:14.890604 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 13 00:44:14.890625 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 13 00:44:14.890642 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 13 00:44:14.890659 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 13 00:44:14.890675 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 13 00:44:14.890692 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 13 00:44:14.890708 systemd[1]: Stopped systemd-fsck-usr.service. Mar 13 00:44:14.890726 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 13 00:44:14.890742 kernel: ACPI: bus type drm_connector registered Mar 13 00:44:14.890762 kernel: fuse: init (API version 7.41) Mar 13 00:44:14.891058 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 13 00:44:14.891077 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 13 00:44:14.891093 kernel: loop: module loaded Mar 13 00:44:14.891110 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 13 00:44:14.891126 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Mar 13 00:44:14.891144 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 13 00:44:14.891196 systemd-journald[1205]: Collecting audit messages is disabled. Mar 13 00:44:14.891322 systemd-journald[1205]: Journal started Mar 13 00:44:14.891355 systemd-journald[1205]: Runtime Journal (/run/log/journal/f8dfc79f4fcb460c9414786a18ef97bd) is 6M, max 48.1M, 42.1M free. Mar 13 00:44:13.170784 systemd[1]: Queued start job for default target multi-user.target. Mar 13 00:44:13.201374 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 13 00:44:13.203252 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 13 00:44:13.203768 systemd[1]: systemd-journald.service: Consumed 3.282s CPU time. Mar 13 00:44:14.911143 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 13 00:44:14.938771 systemd[1]: verity-setup.service: Deactivated successfully. Mar 13 00:44:14.939200 systemd[1]: Stopped verity-setup.service. Mar 13 00:44:14.956495 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:44:15.004200 systemd[1]: Started systemd-journald.service - Journal Service. Mar 13 00:44:15.017758 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 13 00:44:15.029802 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 13 00:44:15.044618 systemd[1]: Mounted media.mount - External Media Directory. Mar 13 00:44:15.060413 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 13 00:44:15.073345 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 13 00:44:15.084616 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 13 00:44:15.101102 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 13 00:44:15.121140 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:44:15.133335 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 13 00:44:15.134215 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 13 00:44:15.146522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 00:44:15.147569 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 00:44:15.161408 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 13 00:44:15.162708 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 13 00:44:15.175690 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 13 00:44:15.176521 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 00:44:15.192202 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 13 00:44:15.192649 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 13 00:44:15.214474 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 00:44:15.214796 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 00:44:15.229355 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 13 00:44:15.243110 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 00:44:15.257587 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Mar 13 00:44:15.273130 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 13 00:44:15.284792 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 00:44:15.339802 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 13 00:44:15.354180 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 13 00:44:15.386239 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 13 00:44:15.405364 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 13 00:44:15.405717 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 13 00:44:15.431338 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 13 00:44:15.452740 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 13 00:44:15.467492 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:44:15.479809 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 13 00:44:15.499143 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 13 00:44:15.518529 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 13 00:44:15.543237 systemd-journald[1205]: Time spent on flushing to /var/log/journal/f8dfc79f4fcb460c9414786a18ef97bd is 87.246ms for 1064 entries. Mar 13 00:44:15.543237 systemd-journald[1205]: System Journal (/var/log/journal/f8dfc79f4fcb460c9414786a18ef97bd) is 8M, max 195.6M, 187.6M free. Mar 13 00:44:15.669749 systemd-journald[1205]: Received client request to flush runtime journal. Mar 13 00:44:15.524349 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 13 00:44:15.538660 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 13 00:44:15.551340 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:44:15.573401 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 13 00:44:15.595640 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 13 00:44:15.619170 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 13 00:44:15.637728 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 13 00:44:15.649789 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 13 00:44:15.663273 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 13 00:44:15.689603 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 13 00:44:15.703236 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 13 00:44:15.725433 kernel: loop0: detected capacity change from 0 to 128560 Mar 13 00:44:15.844625 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 13 00:44:15.892748 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 13 00:44:15.955469 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 13 00:44:15.987377 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 13 00:44:16.038307 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 13 00:44:16.067548 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 13 00:44:16.106573 kernel: loop1: detected capacity change from 0 to 110984 Mar 13 00:44:16.264402 kernel: loop2: detected capacity change from 0 to 217752 Mar 13 00:44:16.427340 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Mar 13 00:44:16.433676 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Mar 13 00:44:16.460205 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 00:44:16.503320 kernel: loop3: detected capacity change from 0 to 128560 Mar 13 00:44:16.584297 kernel: loop4: detected capacity change from 0 to 110984 Mar 13 00:44:16.725214 kernel: loop5: detected capacity change from 0 to 217752 Mar 13 00:44:16.777708 (sd-merge)[1263]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 13 00:44:16.791751 (sd-merge)[1263]: Merged extensions into '/usr'. Mar 13 00:44:16.813261 systemd[1]: Reload requested from client PID 1240 ('systemd-sysext') (unit systemd-sysext.service)... Mar 13 00:44:16.813431 systemd[1]: Reloading... Mar 13 00:44:17.052183 zram_generator::config[1289]: No configuration found. Mar 13 00:44:18.834203 systemd[1]: Reloading finished in 2019 ms. Mar 13 00:44:18.871185 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 13 00:44:18.888461 ldconfig[1235]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 13 00:44:19.060024 systemd[1]: Starting ensure-sysext.service... Mar 13 00:44:19.090658 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 13 00:44:19.139211 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 13 00:44:19.158492 systemd[1]: Reload requested from client PID 1325 ('systemctl') (unit ensure-sysext.service)... Mar 13 00:44:19.158549 systemd[1]: Reloading... Mar 13 00:44:19.159083 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Mar 13 00:44:19.159166 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 13 00:44:19.159800 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 13 00:44:19.160615 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 13 00:44:19.162086 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 13 00:44:19.162376 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Mar 13 00:44:19.162499 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Mar 13 00:44:19.168123 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. Mar 13 00:44:19.168178 systemd-tmpfiles[1326]: Skipping /boot Mar 13 00:44:19.181348 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. Mar 13 00:44:19.181362 systemd-tmpfiles[1326]: Skipping /boot Mar 13 00:44:19.522034 zram_generator::config[1357]: No configuration found. 
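The (sd-merge) lines above are systemd-sysext locating three extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes') and overlaying them onto /usr; the loop0 through loop5 capacity changes just above appear to be those images being attached. Discovery amounts to scanning a fixed set of extension directories, /etc/extensions among them, which is where the kubernetes.raw link written during the files stage lives. A simplified sketch of the scan (the real logic also checks /usr/lib/extensions and validates each image's extension-release metadata):

    from pathlib import Path

    # Directories systemd-sysext scans for *.raw images or directory trees.
    SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions() -> list[str]:
        found: list[str] = []
        for d in SYSEXT_DIRS:
            p = Path(d)
            if p.is_dir():
                found.extend(entry.name for entry in p.iterdir())
        return sorted(found)

    print(discover_extensions())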
Mar 13 00:44:19.935515 systemd[1]: Reloading finished in 776 ms. Mar 13 00:44:19.952729 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 13 00:44:19.962673 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:44:19.975932 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:44:19.985416 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 13 00:44:20.003426 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 13 00:44:20.047640 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 13 00:44:20.072482 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 00:44:20.087729 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 13 00:44:20.143276 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 13 00:44:20.164765 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 13 00:44:20.207756 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 13 00:44:20.257718 systemd-udevd[1401]: Using default interface naming scheme 'v255'. Mar 13 00:44:20.258232 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:44:20.260532 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:44:20.264554 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 00:44:20.278484 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 00:44:20.301131 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 13 00:44:20.312216 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:44:20.317357 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 13 00:44:20.333593 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 13 00:44:20.334749 augenrules[1423]: No rules Mar 13 00:44:20.344205 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:44:20.350362 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:44:20.351269 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:44:20.365805 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:44:20.366309 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:44:20.366597 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:44:20.366722 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Mar 13 00:44:20.366795 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:44:20.370668 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:44:20.373346 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:44:20.377802 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:44:20.380575 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 13 00:44:20.386203 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:44:20.386398 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 13 00:44:20.386570 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:44:20.389413 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 00:44:20.389629 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 00:44:20.396103 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 13 00:44:20.397592 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 00:44:20.404267 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 13 00:44:20.422709 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 00:44:20.432251 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 00:44:20.443584 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 00:44:20.451154 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 13 00:44:20.459298 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 13 00:44:20.460233 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 13 00:44:20.485771 systemd[1]: Finished ensure-sysext.service. Mar 13 00:44:20.491626 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 13 00:44:20.494641 augenrules[1431]: /sbin/augenrules: No change Mar 13 00:44:20.514077 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 13 00:44:20.519157 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 13 00:44:20.519333 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 13 00:44:20.524251 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 13 00:44:20.530504 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 13 00:44:20.555196 systemd-resolved[1395]: Positive Trust Anchors: Mar 13 00:44:20.555259 systemd-resolved[1395]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 13 00:44:20.555286 systemd-resolved[1395]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 13 00:44:20.560220 augenrules[1479]: No rules Mar 13 00:44:20.563329 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:44:20.564545 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:44:20.614471 systemd-resolved[1395]: Defaulting to hostname 'linux'. Mar 13 00:44:20.639155 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 13 00:44:20.645087 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 13 00:44:20.760376 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 13 00:44:20.886170 systemd-networkd[1471]: lo: Link UP Mar 13 00:44:20.886186 systemd-networkd[1471]: lo: Gained carrier Mar 13 00:44:20.899058 systemd-networkd[1471]: Enumeration completed Mar 13 00:44:20.908496 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 13 00:44:20.921398 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 13 00:44:20.935798 systemd[1]: Reached target network.target - Network. Mar 13 00:44:20.943116 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 00:44:20.953638 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 13 00:44:20.966635 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 13 00:44:20.978770 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 13 00:44:20.989570 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 13 00:44:21.011584 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 13 00:44:21.011709 systemd[1]: Reached target paths.target - Path Units. Mar 13 00:44:21.024214 systemd[1]: Reached target time-set.target - System Time Set. Mar 13 00:44:21.031479 systemd-networkd[1471]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:44:21.031552 systemd-networkd[1471]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 00:44:21.051445 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 13 00:44:21.051643 kernel: mousedev: PS/2 mouse device common for all mice Mar 13 00:44:21.035657 systemd-networkd[1471]: eth0: Link UP Mar 13 00:44:21.036232 systemd-networkd[1471]: eth0: Gained carrier Mar 13 00:44:21.036261 systemd-networkd[1471]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:44:21.036431 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Mar 13 00:44:21.068699 kernel: ACPI: button: Power Button [PWRF] Mar 13 00:44:21.070610 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 13 00:44:21.111505 systemd[1]: Reached target timers.target - Timer Units. Mar 13 00:44:21.146493 systemd-networkd[1471]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 13 00:44:21.160430 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 13 00:44:21.165104 systemd-timesyncd[1473]: Network configuration changed, trying to establish connection. Mar 13 00:44:23.025992 systemd-resolved[1395]: Clock change detected. Flushing caches. Mar 13 00:44:23.026169 systemd-timesyncd[1473]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 13 00:44:23.026235 systemd-timesyncd[1473]: Initial clock synchronization to Fri 2026-03-13 00:44:23.025945 UTC. Mar 13 00:44:23.060628 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 13 00:44:23.103136 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 13 00:44:23.114324 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 13 00:44:23.130456 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 13 00:44:23.145957 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 13 00:44:23.146388 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 13 00:44:23.149879 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 13 00:44:23.162121 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 13 00:44:23.198855 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 13 00:44:23.236085 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 13 00:44:23.262247 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 13 00:44:23.306539 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 13 00:44:23.404318 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 13 00:44:23.420230 systemd[1]: Reached target sockets.target - Socket Units. Mar 13 00:44:23.428976 systemd[1]: Reached target basic.target - Basic System. Mar 13 00:44:23.438973 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 13 00:44:23.439140 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 13 00:44:23.442934 systemd[1]: Starting containerd.service - containerd container runtime... Mar 13 00:44:23.451276 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 13 00:44:23.462112 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 13 00:44:23.501918 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 13 00:44:23.516136 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 13 00:44:23.523912 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 13 00:44:23.526939 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 13 00:44:23.541930 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
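In the entries above, systemd-timesyncd contacts 10.0.0.1:123 and steps the system clock, which is why the journal timestamps jump from 00:44:21 to 00:44:23 and systemd-resolved flushes its caches. A rough Python sketch estimating the size of that step from the two adjacent journal timestamps; this is only an approximation, since the difference also includes the small real interval between the two entries:

    from datetime import datetime

    # Last pre-step and first post-step journal timestamps from the log above.
    before = datetime.strptime("00:44:21.165104", "%H:%M:%S.%f")
    after  = datetime.strptime("00:44:23.025992", "%H:%M:%S.%f")
    print(f"apparent clock step: ~{(after - before).total_seconds():.3f} s")
    # -> apparent clock step: ~1.861 s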
Mar 13 00:44:23.550368 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 13 00:44:23.643260 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 13 00:44:23.657829 jq[1525]: false Mar 13 00:44:23.681624 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 13 00:44:23.692963 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 13 00:44:23.709977 extend-filesystems[1527]: Found /dev/vda6 Mar 13 00:44:23.710463 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 13 00:44:23.722591 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 13 00:44:23.723413 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Refreshing passwd entry cache Mar 13 00:44:23.723284 oslogin_cache_refresh[1528]: Refreshing passwd entry cache Mar 13 00:44:23.723432 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 13 00:44:23.725298 systemd[1]: Starting update-engine.service - Update Engine... Mar 13 00:44:23.732018 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 13 00:44:23.740622 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 13 00:44:23.745458 extend-filesystems[1527]: Found /dev/vda9 Mar 13 00:44:23.755424 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Failure getting users, quitting Mar 13 00:44:23.755424 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 13 00:44:23.755424 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Refreshing group entry cache Mar 13 00:44:23.754883 oslogin_cache_refresh[1528]: Failure getting users, quitting Mar 13 00:44:23.754951 oslogin_cache_refresh[1528]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 13 00:44:23.755005 oslogin_cache_refresh[1528]: Refreshing group entry cache Mar 13 00:44:23.757498 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 13 00:44:23.759793 extend-filesystems[1527]: Checking size of /dev/vda9 Mar 13 00:44:23.769370 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 13 00:44:23.782375 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Failure getting groups, quitting Mar 13 00:44:23.782375 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 13 00:44:23.780209 oslogin_cache_refresh[1528]: Failure getting groups, quitting Mar 13 00:44:23.769632 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 13 00:44:23.780224 oslogin_cache_refresh[1528]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 13 00:44:23.770186 systemd[1]: motdgen.service: Deactivated successfully. Mar 13 00:44:23.770472 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 13 00:44:23.788562 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 13 00:44:23.788978 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Mar 13 00:44:23.793368 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 13 00:44:23.795098 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 13 00:44:24.150301 systemd-networkd[1471]: eth0: Gained IPv6LL Mar 13 00:44:24.157216 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 13 00:44:24.188024 extend-filesystems[1527]: Resized partition /dev/vda9 Mar 13 00:44:24.194354 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 13 00:44:24.195117 systemd[1]: Reached target network-online.target - Network is Online. Mar 13 00:44:24.208610 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 13 00:44:24.209573 jq[1544]: true Mar 13 00:44:24.219068 extend-filesystems[1574]: resize2fs 1.47.3 (8-Jul-2025) Mar 13 00:44:24.228870 (ntainerd)[1555]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 13 00:44:24.250184 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 13 00:44:24.250992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:44:24.277574 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 13 00:44:24.314267 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 13 00:44:24.407325 dbus-daemon[1519]: [system] SELinux support is enabled Mar 13 00:44:25.013279 tar[1569]: linux-amd64/LICENSE Mar 13 00:44:25.013279 tar[1569]: linux-amd64/helm Mar 13 00:44:25.014268 jq[1580]: true Mar 13 00:44:24.424439 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:44:24.433612 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 13 00:44:24.505847 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 13 00:44:24.514209 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 13 00:44:24.555186 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 13 00:44:24.576878 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 13 00:44:24.577151 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 13 00:44:24.603551 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 13 00:44:24.603873 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 13 00:44:25.026510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:44:25.027417 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:44:25.051634 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 13 00:44:25.060920 update_engine[1542]: I20260313 00:44:25.054164 1542 main.cc:92] Flatcar Update Engine starting Mar 13 00:44:25.124615 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 13 00:44:25.435151 update_engine[1542]: I20260313 00:44:25.114461 1542 update_check_scheduler.cc:74] Next update check in 6m0s Mar 13 00:44:25.112918 systemd[1]: Started update-engine.service - Update Engine. Mar 13 00:44:25.435442 extend-filesystems[1574]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 13 00:44:25.435442 extend-filesystems[1574]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 13 00:44:25.435442 extend-filesystems[1574]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 13 00:44:25.128550 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 13 00:44:25.493965 sshd_keygen[1568]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 13 00:44:25.494247 extend-filesystems[1527]: Resized filesystem in /dev/vda9 Mar 13 00:44:25.439459 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 13 00:44:25.453618 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 13 00:44:25.454114 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 13 00:44:25.855910 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 13 00:44:25.894233 bash[1622]: Updated "/home/core/.ssh/authorized_keys" Mar 13 00:44:25.905427 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 13 00:44:26.108204 systemd[1]: Started sshd@0-10.0.0.89:22-10.0.0.1:57118.service - OpenSSH per-connection server daemon (10.0.0.1:57118). Mar 13 00:44:26.117188 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 13 00:44:26.135078 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 13 00:44:26.221128 systemd[1]: issuegen.service: Deactivated successfully. Mar 13 00:44:26.221447 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 13 00:44:26.230085 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 13 00:44:26.719480 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 13 00:44:26.725160 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 13 00:44:26.725890 systemd-logind[1540]: Watching system buttons on /dev/input/event2 (Power Button) Mar 13 00:44:26.725917 systemd-logind[1540]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 13 00:44:26.726207 systemd-logind[1540]: New seat seat0. Mar 13 00:44:26.733021 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 13 00:44:26.733340 systemd[1]: Reached target getty.target - Login Prompts. Mar 13 00:44:26.733495 systemd[1]: Started systemd-logind.service - User Login Management. Mar 13 00:44:27.076221 kernel: kvm_amd: TSC scaling supported Mar 13 00:44:27.077282 kernel: kvm_amd: Nested Virtualization enabled Mar 13 00:44:27.077354 kernel: kvm_amd: Nested Paging enabled Mar 13 00:44:27.128168 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 13 00:44:27.201048 kernel: kvm_amd: PMU virtualization is disabled Mar 13 00:44:27.334281 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
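The resize2fs output above records extend-filesystems growing the root filesystem online from 553472 to 1864699 blocks of 4 KiB each. A small Python sketch converting those block counts into sizes:

    # Block counts from the resize2fs output above; ext4 here uses 4 KiB blocks.
    BLOCK = 4096
    old_blocks, new_blocks = 553_472, 1_864_699

    to_gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"before: {to_gib(old_blocks):.2f} GiB")   # ~2.11 GiB
    print(f"after:  {to_gib(new_blocks):.2f} GiB")   # ~7.11 GiB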
Mar 13 00:44:27.387204 locksmithd[1606]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 13 00:44:27.459085 kernel: EDAC MC: Ver: 3.0.0 Mar 13 00:44:27.894085 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 57118 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:44:27.898854 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:27.930103 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 00:44:27.941012 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 00:44:28.241567 systemd-logind[1540]: New session 1 of user core. Mar 13 00:44:28.314838 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 00:44:28.329862 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 13 00:44:28.356576 (systemd)[1659]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 00:44:28.377336 systemd-logind[1540]: New session c1 of user core. Mar 13 00:44:28.706874 containerd[1555]: time="2026-03-13T00:44:28Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 13 00:44:28.707478 containerd[1555]: time="2026-03-13T00:44:28.707123631Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 13 00:44:29.078049 containerd[1555]: time="2026-03-13T00:44:29.077346807Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="574.822µs" Mar 13 00:44:29.081331 containerd[1555]: time="2026-03-13T00:44:29.078545705Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 13 00:44:29.081331 containerd[1555]: time="2026-03-13T00:44:29.078650992Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 13 00:44:29.081331 containerd[1555]: time="2026-03-13T00:44:29.079311996Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 13 00:44:29.119894 containerd[1555]: time="2026-03-13T00:44:29.119219621Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 13 00:44:29.129987 containerd[1555]: time="2026-03-13T00:44:29.129142055Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:44:29.134066 containerd[1555]: time="2026-03-13T00:44:29.131212450Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:44:29.134066 containerd[1555]: time="2026-03-13T00:44:29.131244190Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 00:44:29.134066 containerd[1555]: time="2026-03-13T00:44:29.132434963Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 00:44:29.134066 containerd[1555]: time="2026-03-13T00:44:29.132457735Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:44:29.134066 containerd[1555]: time="2026-03-13T00:44:29.132473074Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:44:29.134066 containerd[1555]: time="2026-03-13T00:44:29.132486159Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 13 00:44:29.136162 containerd[1555]: time="2026-03-13T00:44:29.136082773Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 13 00:44:29.137409 containerd[1555]: time="2026-03-13T00:44:29.137380887Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:44:29.137899 containerd[1555]: time="2026-03-13T00:44:29.137867045Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:44:29.137992 containerd[1555]: time="2026-03-13T00:44:29.137966240Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 13 00:44:29.149579 containerd[1555]: time="2026-03-13T00:44:29.149244105Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 13 00:44:29.196938 containerd[1555]: time="2026-03-13T00:44:29.196184770Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 13 00:44:29.213047 containerd[1555]: time="2026-03-13T00:44:29.212601121Z" level=info msg="metadata content store policy set" policy=shared Mar 13 00:44:29.312800 containerd[1555]: time="2026-03-13T00:44:29.312308569Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 13 00:44:29.320974 containerd[1555]: time="2026-03-13T00:44:29.320553111Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 13 00:44:29.358339 containerd[1555]: time="2026-03-13T00:44:29.358072157Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 13 00:44:29.410997 containerd[1555]: time="2026-03-13T00:44:29.409518714Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 13 00:44:29.425172 containerd[1555]: time="2026-03-13T00:44:29.424595252Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 13 00:44:29.426143 containerd[1555]: time="2026-03-13T00:44:29.426116021Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 13 00:44:29.426419 containerd[1555]: time="2026-03-13T00:44:29.426392006Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 13 00:44:29.426609 containerd[1555]: time="2026-03-13T00:44:29.426585377Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 13 00:44:29.430625 containerd[1555]: time="2026-03-13T00:44:29.430490277Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 13 00:44:29.431437 containerd[1555]: time="2026-03-13T00:44:29.431415505Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 13 00:44:29.431501 containerd[1555]: time="2026-03-13T00:44:29.431486247Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 13 00:44:29.431893 containerd[1555]: time="2026-03-13T00:44:29.431639783Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 13 00:44:29.433024 containerd[1555]: time="2026-03-13T00:44:29.432997999Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 13 00:44:29.434823 containerd[1555]: time="2026-03-13T00:44:29.434644814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 13 00:44:29.434922 containerd[1555]: time="2026-03-13T00:44:29.434899930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 13 00:44:29.434994 containerd[1555]: time="2026-03-13T00:44:29.434978185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 13 00:44:29.435129 containerd[1555]: time="2026-03-13T00:44:29.435102578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 13 00:44:29.435211 containerd[1555]: time="2026-03-13T00:44:29.435192866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 13 00:44:29.435314 containerd[1555]: time="2026-03-13T00:44:29.435292312Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 13 00:44:29.435402 containerd[1555]: time="2026-03-13T00:44:29.435380246Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 13 00:44:29.435480 containerd[1555]: time="2026-03-13T00:44:29.435462440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 13 00:44:29.443392 containerd[1555]: time="2026-03-13T00:44:29.435547028Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 13 00:44:29.450267 containerd[1555]: time="2026-03-13T00:44:29.450063088Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 13 00:44:29.455638 containerd[1555]: time="2026-03-13T00:44:29.455551825Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 13 00:44:29.479050 containerd[1555]: time="2026-03-13T00:44:29.478887321Z" level=info msg="Start snapshots syncer" Mar 13 00:44:29.484884 containerd[1555]: time="2026-03-13T00:44:29.482353563Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 13 00:44:29.486621 containerd[1555]: time="2026-03-13T00:44:29.486525802Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 13 00:44:29.488895 containerd[1555]: time="2026-03-13T00:44:29.488053784Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 13 00:44:29.488895 containerd[1555]: time="2026-03-13T00:44:29.488402696Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 13 00:44:29.490391 containerd[1555]: time="2026-03-13T00:44:29.490271244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 13 00:44:29.490584 containerd[1555]: time="2026-03-13T00:44:29.490492778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 13 00:44:29.490785 containerd[1555]: time="2026-03-13T00:44:29.490644671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 13 00:44:29.490889 containerd[1555]: time="2026-03-13T00:44:29.490874771Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 13 00:44:29.491247 containerd[1555]: time="2026-03-13T00:44:29.491229683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 13 00:44:29.491427 containerd[1555]: time="2026-03-13T00:44:29.491413036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 13 00:44:29.491600 containerd[1555]: time="2026-03-13T00:44:29.491586280Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 13 00:44:29.492304 containerd[1555]: time="2026-03-13T00:44:29.492211617Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 13 00:44:29.492573 containerd[1555]: 
time="2026-03-13T00:44:29.492476922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 13 00:44:29.492846 containerd[1555]: time="2026-03-13T00:44:29.492828849Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 13 00:44:29.493375 containerd[1555]: time="2026-03-13T00:44:29.493358778Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:44:29.493797 containerd[1555]: time="2026-03-13T00:44:29.493736313Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:44:29.494813 containerd[1555]: time="2026-03-13T00:44:29.494233411Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:44:29.494813 containerd[1555]: time="2026-03-13T00:44:29.494256444Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:44:29.494813 containerd[1555]: time="2026-03-13T00:44:29.494321135Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 13 00:44:29.494813 containerd[1555]: time="2026-03-13T00:44:29.494343096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 13 00:44:29.494813 containerd[1555]: time="2026-03-13T00:44:29.494429327Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 13 00:44:29.495343 systemd[1659]: Queued start job for default target default.target. Mar 13 00:44:29.496185 containerd[1555]: time="2026-03-13T00:44:29.495377638Z" level=info msg="runtime interface created" Mar 13 00:44:29.496185 containerd[1555]: time="2026-03-13T00:44:29.495392575Z" level=info msg="created NRI interface" Mar 13 00:44:29.496185 containerd[1555]: time="2026-03-13T00:44:29.495403075Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 13 00:44:29.496185 containerd[1555]: time="2026-03-13T00:44:29.495639346Z" level=info msg="Connect containerd service" Mar 13 00:44:29.496185 containerd[1555]: time="2026-03-13T00:44:29.495899562Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 13 00:44:29.501034 containerd[1555]: time="2026-03-13T00:44:29.500646715Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:44:29.510643 systemd[1659]: Created slice app.slice - User Application Slice. Mar 13 00:44:29.510835 systemd[1659]: Reached target paths.target - Paths. Mar 13 00:44:29.510887 systemd[1659]: Reached target timers.target - Timers. Mar 13 00:44:29.514099 systemd[1659]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 00:44:29.568230 systemd[1659]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 00:44:29.582991 systemd[1659]: Reached target sockets.target - Sockets. Mar 13 00:44:29.583256 systemd[1659]: Reached target basic.target - Basic System. Mar 13 00:44:29.583428 systemd[1]: Started user@500.service - User Manager for UID 500. 
Mar 13 00:44:29.592853 systemd[1659]: Reached target default.target - Main User Target. Mar 13 00:44:29.592925 systemd[1659]: Startup finished in 981ms. Mar 13 00:44:29.597166 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 13 00:44:30.038939 systemd[1]: Started sshd@1-10.0.0.89:22-10.0.0.1:44218.service - OpenSSH per-connection server daemon (10.0.0.1:44218). Mar 13 00:44:30.501799 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 44218 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:44:30.502284 tar[1569]: linux-amd64/README.md Mar 13 00:44:30.483196 systemd-logind[1540]: New session 2 of user core. Mar 13 00:44:30.463167 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:30.490267 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 13 00:44:30.555437 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 13 00:44:30.581820 sshd[1685]: Connection closed by 10.0.0.1 port 44218 Mar 13 00:44:30.583555 sshd-session[1678]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:30.595410 systemd[1]: sshd@1-10.0.0.89:22-10.0.0.1:44218.service: Deactivated successfully. Mar 13 00:44:30.604281 systemd[1]: session-2.scope: Deactivated successfully. Mar 13 00:44:30.606624 systemd-logind[1540]: Session 2 logged out. Waiting for processes to exit. Mar 13 00:44:30.614189 systemd[1]: Started sshd@2-10.0.0.89:22-10.0.0.1:44230.service - OpenSSH per-connection server daemon (10.0.0.1:44230). Mar 13 00:44:30.924358 systemd-logind[1540]: Removed session 2. Mar 13 00:44:30.983959 containerd[1555]: time="2026-03-13T00:44:30.983341991Z" level=info msg="Start subscribing containerd event" Mar 13 00:44:30.985407 containerd[1555]: time="2026-03-13T00:44:30.983639266Z" level=info msg="Start recovering state" Mar 13 00:44:30.986910 containerd[1555]: time="2026-03-13T00:44:30.986885959Z" level=info msg="Start event monitor" Mar 13 00:44:30.987129 containerd[1555]: time="2026-03-13T00:44:30.987105008Z" level=info msg="Start cni network conf syncer for default" Mar 13 00:44:30.987283 containerd[1555]: time="2026-03-13T00:44:30.987259918Z" level=info msg="Start streaming server" Mar 13 00:44:30.987493 containerd[1555]: time="2026-03-13T00:44:30.987469188Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 13 00:44:30.987650 containerd[1555]: time="2026-03-13T00:44:30.987627244Z" level=info msg="runtime interface starting up..." Mar 13 00:44:30.988543 containerd[1555]: time="2026-03-13T00:44:30.988011992Z" level=info msg="starting plugins..." Mar 13 00:44:30.988543 containerd[1555]: time="2026-03-13T00:44:30.988260966Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 13 00:44:30.988876 containerd[1555]: time="2026-03-13T00:44:30.986918228Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 13 00:44:30.988965 containerd[1555]: time="2026-03-13T00:44:30.988912381Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 13 00:44:30.990604 containerd[1555]: time="2026-03-13T00:44:30.990061178Z" level=info msg="containerd successfully booted in 2.287003s" Mar 13 00:44:30.990448 systemd[1]: Started containerd.service - containerd container runtime. 
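The earlier "failed to load cni during init" error is expected at this stage: /etc/cni/net.d is still empty, and the CNI conf syncer started above will pick up a network config once one appears (typically installed later by a CNI plugin). A hypothetical minimal .conflist of the general shape the syncer scans for; the network name, bridge device, and subnet below are illustrative and not taken from this log:

    import json

    # Hypothetical minimal CNI network config of the kind placed in
    # /etc/cni/net.d; names and addresses here are illustrative only.
    conflist = {
        "cniVersion": "1.0.0",
        "name": "examplenet",
        "plugins": [{
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local",
                     "ranges": [[{"subnet": "10.88.0.0/16"}]]},
        }],
    }
    print(json.dumps(conflist, indent=2))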
Mar 13 00:44:31.031930 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 44230 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:44:31.033957 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:31.043415 systemd-logind[1540]: New session 3 of user core. Mar 13 00:44:31.050088 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 13 00:44:31.322500 sshd[1701]: Connection closed by 10.0.0.1 port 44230 Mar 13 00:44:31.323536 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:31.332956 systemd[1]: sshd@2-10.0.0.89:22-10.0.0.1:44230.service: Deactivated successfully. Mar 13 00:44:31.337122 systemd[1]: session-3.scope: Deactivated successfully. Mar 13 00:44:31.338864 systemd-logind[1540]: Session 3 logged out. Waiting for processes to exit. Mar 13 00:44:31.342222 systemd-logind[1540]: Removed session 3. Mar 13 00:44:34.078151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:44:34.078952 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 13 00:44:34.080120 systemd[1]: Startup finished in 13.806s (kernel) + 26.121s (initrd) + 20.912s (userspace) = 1min 839ms. Mar 13 00:44:34.126429 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:44:37.556321 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1228730756 wd_nsec: 1228730700 Mar 13 00:44:37.908595 kubelet[1715]: E0313 00:44:37.908093 1715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:44:37.914023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:44:37.914251 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:44:37.914986 systemd[1]: kubelet.service: Consumed 8.267s CPU time, 258.1M memory peak. Mar 13 00:44:41.355652 systemd[1]: Started sshd@3-10.0.0.89:22-10.0.0.1:37130.service - OpenSSH per-connection server daemon (10.0.0.1:37130). Mar 13 00:44:41.549586 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 37130 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:44:41.552432 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:41.562392 systemd-logind[1540]: New session 4 of user core. Mar 13 00:44:41.572091 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 13 00:44:41.601086 sshd[1727]: Connection closed by 10.0.0.1 port 37130 Mar 13 00:44:41.601382 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:41.621870 systemd[1]: sshd@3-10.0.0.89:22-10.0.0.1:37130.service: Deactivated successfully. Mar 13 00:44:41.625088 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 00:44:41.627043 systemd-logind[1540]: Session 4 logged out. Waiting for processes to exit. Mar 13 00:44:41.633465 systemd[1]: Started sshd@4-10.0.0.89:22-10.0.0.1:37144.service - OpenSSH per-connection server daemon (10.0.0.1:37144). Mar 13 00:44:41.635994 systemd-logind[1540]: Removed session 4. 
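sshd identifies each accepted key above by its SHA256 fingerprint, which is the SHA-256 digest of the raw public-key blob, base64-encoded without padding. A small Python sketch of that computation; the key blob below is a short placeholder, so its fingerprint will not match the one in this log:

    import base64, hashlib

    def ssh_fingerprint(b64_blob: str) -> str:
        # OpenSSH-style fingerprint: SHA-256 over the decoded key blob,
        # base64-encoded with trailing '=' padding stripped.
        digest = hashlib.sha256(base64.b64decode(b64_blob)).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Placeholder blob (not the key from this log):
    print(ssh_fingerprint("AAAAB3NzaC1yc2EAAAADAQABAAABAQC7"))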
Mar 13 00:44:41.718450 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 37144 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:44:41.720586 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:41.730014 systemd-logind[1540]: New session 5 of user core. Mar 13 00:44:41.748139 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 13 00:44:41.768131 sshd[1736]: Connection closed by 10.0.0.1 port 37144 Mar 13 00:44:41.768284 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:41.785985 systemd[1]: sshd@4-10.0.0.89:22-10.0.0.1:37144.service: Deactivated successfully. Mar 13 00:44:41.788293 systemd[1]: session-5.scope: Deactivated successfully. Mar 13 00:44:41.789939 systemd-logind[1540]: Session 5 logged out. Waiting for processes to exit. Mar 13 00:44:41.794047 systemd[1]: Started sshd@5-10.0.0.89:22-10.0.0.1:37152.service - OpenSSH per-connection server daemon (10.0.0.1:37152). Mar 13 00:44:41.795597 systemd-logind[1540]: Removed session 5. Mar 13 00:44:41.867646 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 37152 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:44:41.869952 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:41.878061 systemd-logind[1540]: New session 6 of user core. Mar 13 00:44:41.885059 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 13 00:44:41.909103 sshd[1745]: Connection closed by 10.0.0.1 port 37152 Mar 13 00:44:41.909641 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:41.918959 systemd[1]: sshd@5-10.0.0.89:22-10.0.0.1:37152.service: Deactivated successfully. Mar 13 00:44:41.921494 systemd[1]: session-6.scope: Deactivated successfully. Mar 13 00:44:41.923133 systemd-logind[1540]: Session 6 logged out. Waiting for processes to exit. Mar 13 00:44:41.926513 systemd[1]: Started sshd@6-10.0.0.89:22-10.0.0.1:37166.service - OpenSSH per-connection server daemon (10.0.0.1:37166). Mar 13 00:44:41.928146 systemd-logind[1540]: Removed session 6. Mar 13 00:44:42.009598 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 37166 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:44:42.011383 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:42.019377 systemd-logind[1540]: New session 7 of user core. Mar 13 00:44:42.030054 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 13 00:44:42.071304 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 13 00:44:42.072007 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:44:42.095265 sudo[1755]: pam_unix(sudo:session): session closed for user root Mar 13 00:44:42.099155 sshd[1754]: Connection closed by 10.0.0.1 port 37166 Mar 13 00:44:42.100094 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:42.108998 systemd[1]: sshd@6-10.0.0.89:22-10.0.0.1:37166.service: Deactivated successfully. Mar 13 00:44:42.111147 systemd[1]: session-7.scope: Deactivated successfully. Mar 13 00:44:42.112632 systemd-logind[1540]: Session 7 logged out. Waiting for processes to exit. Mar 13 00:44:42.116522 systemd[1]: Started sshd@7-10.0.0.89:22-10.0.0.1:37176.service - OpenSSH per-connection server daemon (10.0.0.1:37176). 
Mar 13 00:44:42.119390 systemd-logind[1540]: Removed session 7. Mar 13 00:44:42.212163 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 37176 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:44:42.215015 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:42.223278 systemd-logind[1540]: New session 8 of user core. Mar 13 00:44:42.234085 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 13 00:44:42.255019 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 13 00:44:42.255388 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:44:42.267617 sudo[1766]: pam_unix(sudo:session): session closed for user root Mar 13 00:44:42.277384 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 13 00:44:42.278087 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:44:42.296306 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:44:42.384850 augenrules[1788]: No rules Mar 13 00:44:42.386470 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:44:42.387025 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:44:42.388314 sudo[1765]: pam_unix(sudo:session): session closed for user root Mar 13 00:44:42.390484 sshd[1764]: Connection closed by 10.0.0.1 port 37176 Mar 13 00:44:42.391063 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:42.400907 systemd[1]: sshd@7-10.0.0.89:22-10.0.0.1:37176.service: Deactivated successfully. Mar 13 00:44:42.403059 systemd[1]: session-8.scope: Deactivated successfully. Mar 13 00:44:42.404942 systemd-logind[1540]: Session 8 logged out. Waiting for processes to exit. Mar 13 00:44:42.408154 systemd[1]: Started sshd@8-10.0.0.89:22-10.0.0.1:37178.service - OpenSSH per-connection server daemon (10.0.0.1:37178). Mar 13 00:44:42.410542 systemd-logind[1540]: Removed session 8. Mar 13 00:44:42.478903 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 37178 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:44:42.481866 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:42.490214 systemd-logind[1540]: New session 9 of user core. Mar 13 00:44:42.497202 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 13 00:44:42.523855 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 13 00:44:42.524214 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:44:47.446036 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 13 00:44:47.492517 (dockerd)[1822]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 13 00:44:48.219511 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 13 00:44:48.227220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 13 00:44:51.233608 dockerd[1822]: time="2026-03-13T00:44:51.233336518Z" level=info msg="Starting up" Mar 13 00:44:51.236469 dockerd[1822]: time="2026-03-13T00:44:51.236139150Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 13 00:44:51.423904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:44:51.521275 (kubelet)[1853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:44:52.898917 dockerd[1822]: time="2026-03-13T00:44:52.897981813Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 13 00:44:53.073966 dockerd[1822]: time="2026-03-13T00:44:53.073621389Z" level=info msg="Loading containers: start." Mar 13 00:44:53.094889 kernel: Initializing XFRM netlink socket Mar 13 00:44:53.149913 kubelet[1853]: E0313 00:44:53.149563 1853 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:44:53.161595 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:44:53.162135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:44:53.168475 systemd[1]: kubelet.service: Consumed 4.143s CPU time, 111.1M memory peak. Mar 13 00:44:55.070797 systemd-networkd[1471]: docker0: Link UP Mar 13 00:44:55.084415 dockerd[1822]: time="2026-03-13T00:44:55.083630551Z" level=info msg="Loading containers: done." Mar 13 00:44:55.216208 dockerd[1822]: time="2026-03-13T00:44:55.215783136Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 13 00:44:55.217079 dockerd[1822]: time="2026-03-13T00:44:55.216455021Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 13 00:44:55.217414 dockerd[1822]: time="2026-03-13T00:44:55.217253762Z" level=info msg="Initializing buildkit" Mar 13 00:44:55.326423 dockerd[1822]: time="2026-03-13T00:44:55.326176196Z" level=info msg="Completed buildkit initialization" Mar 13 00:44:55.348465 dockerd[1822]: time="2026-03-13T00:44:55.348271697Z" level=info msg="Daemon has completed initialization" Mar 13 00:44:55.349645 dockerd[1822]: time="2026-03-13T00:44:55.348831412Z" level=info msg="API listen on /run/docker.sock" Mar 13 00:44:55.349538 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 13 00:44:58.813078 containerd[1555]: time="2026-03-13T00:44:58.811449775Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 13 00:45:00.341982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2888303713.mount: Deactivated successfully. Mar 13 00:45:03.342036 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 13 00:45:03.349040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:45:04.853936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 00:45:04.886576 (kubelet)[2126]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:45:05.123052 kubelet[2126]: E0313 00:45:05.121828 2126 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:45:05.125977 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:45:05.126243 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:45:05.127961 systemd[1]: kubelet.service: Consumed 1.148s CPU time, 109.9M memory peak. Mar 13 00:45:05.803066 containerd[1555]: time="2026-03-13T00:45:05.802618710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:05.806644 containerd[1555]: time="2026-03-13T00:45:05.806439861Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 13 00:45:05.809593 containerd[1555]: time="2026-03-13T00:45:05.809293944Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:05.817238 containerd[1555]: time="2026-03-13T00:45:05.817081168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:05.819298 containerd[1555]: time="2026-03-13T00:45:05.819143999Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 7.007369222s" Mar 13 00:45:05.819581 containerd[1555]: time="2026-03-13T00:45:05.819236199Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 13 00:45:05.827958 containerd[1555]: time="2026-03-13T00:45:05.827501179Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 13 00:45:08.881516 containerd[1555]: time="2026-03-13T00:45:08.881032141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:08.883150 containerd[1555]: time="2026-03-13T00:45:08.882291613Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 13 00:45:08.885027 containerd[1555]: time="2026-03-13T00:45:08.884927745Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:08.889636 containerd[1555]: time="2026-03-13T00:45:08.889452958Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:08.890469 containerd[1555]: time="2026-03-13T00:45:08.890325786Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 3.062722639s" Mar 13 00:45:08.890601 containerd[1555]: time="2026-03-13T00:45:08.890494761Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 13 00:45:08.894571 containerd[1555]: time="2026-03-13T00:45:08.894206975Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 13 00:45:10.222502 containerd[1555]: time="2026-03-13T00:45:10.222321727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:10.224887 containerd[1555]: time="2026-03-13T00:45:10.224406929Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 13 00:45:10.226844 containerd[1555]: time="2026-03-13T00:45:10.226594715Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:10.232244 containerd[1555]: time="2026-03-13T00:45:10.232034082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:10.233298 containerd[1555]: time="2026-03-13T00:45:10.233137896Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.338643737s" Mar 13 00:45:10.233298 containerd[1555]: time="2026-03-13T00:45:10.233255955Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 13 00:45:10.235314 containerd[1555]: time="2026-03-13T00:45:10.235131918Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 13 00:45:10.258273 update_engine[1542]: I20260313 00:45:10.258104 1542 update_attempter.cc:509] Updating boot flags... Mar 13 00:45:11.498989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1202930978.mount: Deactivated successfully. 
Mar 13 00:45:12.280336 containerd[1555]: time="2026-03-13T00:45:12.280102052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:12.283078 containerd[1555]: time="2026-03-13T00:45:12.282371414Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 13 00:45:12.285588 containerd[1555]: time="2026-03-13T00:45:12.285557768Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:12.290517 containerd[1555]: time="2026-03-13T00:45:12.290391727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:12.290657 containerd[1555]: time="2026-03-13T00:45:12.290502498Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 2.055266587s" Mar 13 00:45:12.290657 containerd[1555]: time="2026-03-13T00:45:12.290539447Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 13 00:45:12.292580 containerd[1555]: time="2026-03-13T00:45:12.292247077Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 13 00:45:12.783157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103796553.mount: Deactivated successfully. 
Mar 13 00:45:14.760588 containerd[1555]: time="2026-03-13T00:45:14.760149287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:14.762542 containerd[1555]: time="2026-03-13T00:45:14.762299959Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 13 00:45:14.764420 containerd[1555]: time="2026-03-13T00:45:14.764001100Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:14.767864 containerd[1555]: time="2026-03-13T00:45:14.767545800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:14.768568 containerd[1555]: time="2026-03-13T00:45:14.768455332Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 2.476109241s" Mar 13 00:45:14.768568 containerd[1555]: time="2026-03-13T00:45:14.768485768Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 13 00:45:14.769991 containerd[1555]: time="2026-03-13T00:45:14.769867850Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 13 00:45:15.233404 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 13 00:45:15.237995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:45:15.244962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2021833407.mount: Deactivated successfully. 
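kubelet keeps crash-looping in the entries above because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written later during node bootstrap (for example by kubeadm), and systemd keeps rescheduling the unit in the meantime. A small Python sketch measuring the gaps between the "Scheduled restart job" entries with counters 1, 2, and 3:

    from datetime import datetime

    # "Scheduled restart job" timestamps for kubelet.service from the log
    # (restart counters 1, 2, and 3).
    restarts = ["00:44:48.219511", "00:45:03.342036", "00:45:15.233404"]
    ts = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
    for a, b in zip(ts, ts[1:]):
        print(f"gap: {(b - a).total_seconds():.1f} s")
    # gaps of roughly 12-15 s between restart attempts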
Mar 13 00:45:15.264980 containerd[1555]: time="2026-03-13T00:45:15.264864838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:45:15.268527 containerd[1555]: time="2026-03-13T00:45:15.268371251Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 13 00:45:15.273267 containerd[1555]: time="2026-03-13T00:45:15.273150741Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:45:15.284480 containerd[1555]: time="2026-03-13T00:45:15.284327641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:45:15.287835 containerd[1555]: time="2026-03-13T00:45:15.287561318Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 517.66721ms"
Mar 13 00:45:15.288222 containerd[1555]: time="2026-03-13T00:45:15.288003190Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 13 00:45:15.290516 containerd[1555]: time="2026-03-13T00:45:15.290377119Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 13 00:45:15.646408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:45:15.674130 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 00:45:15.806272 kubelet[2235]: E0313 00:45:15.806104 2235 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 00:45:15.812025 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 00:45:15.812211 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 00:45:15.813275 systemd[1]: kubelet.service: Consumed 487ms CPU time, 113.2M memory peak.
Mar 13 00:45:15.858143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1929845029.mount: Deactivated successfully.
Mar 13 00:45:17.296914 containerd[1555]: time="2026-03-13T00:45:17.296859944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:45:17.298295 containerd[1555]: time="2026-03-13T00:45:17.298182449Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322"
Mar 13 00:45:17.300633 containerd[1555]: time="2026-03-13T00:45:17.300459184Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:45:17.305191 containerd[1555]: time="2026-03-13T00:45:17.305149893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:45:17.306436 containerd[1555]: time="2026-03-13T00:45:17.306403098Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 2.015931842s"
Mar 13 00:45:17.306804 containerd[1555]: time="2026-03-13T00:45:17.306520495Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 13 00:45:18.879998 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:45:18.880259 systemd[1]: kubelet.service: Consumed 487ms CPU time, 113.2M memory peak.
Mar 13 00:45:18.883910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:45:18.919956 systemd[1]: Reload requested from client PID 2336 ('systemctl') (unit session-9.scope)...
Mar 13 00:45:18.920031 systemd[1]: Reloading...
Mar 13 00:45:19.056910 zram_generator::config[2385]: No configuration found.
Mar 13 00:45:19.296012 systemd[1]: Reloading finished in 375 ms.
Mar 13 00:45:19.400342 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 13 00:45:19.400529 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 13 00:45:19.401113 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:45:19.401156 systemd[1]: kubelet.service: Consumed 166ms CPU time, 98.1M memory peak.
Mar 13 00:45:19.403388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:45:19.657263 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:45:19.677359 (kubelet)[2427]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 13 00:45:19.868826 kubelet[2427]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 00:45:20.109375 kubelet[2427]: I0313 00:45:20.109210 2427 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 13 00:45:20.109375 kubelet[2427]: I0313 00:45:20.109310 2427 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 00:45:20.109633 kubelet[2427]: I0313 00:45:20.109557 2427 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 13 00:45:20.109633 kubelet[2427]: I0313 00:45:20.109567 2427 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 13 00:45:20.110100 kubelet[2427]: I0313 00:45:20.110013 2427 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 13 00:45:20.203441 kubelet[2427]: I0313 00:45:20.203318 2427 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 13 00:45:20.205958 kubelet[2427]: E0313 00:45:20.205630 2427 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 13 00:45:20.223826 kubelet[2427]: I0313 00:45:20.223372 2427 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 00:45:20.236821 kubelet[2427]: I0313 00:45:20.236616 2427 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 13 00:45:20.240072 kubelet[2427]: I0313 00:45:20.239918 2427 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 13 00:45:20.241026 kubelet[2427]: I0313 00:45:20.240007 2427 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 00:45:20.241276 kubelet[2427]: I0313 00:45:20.241240 2427 topology_manager.go:143] "Creating topology manager with none policy"
Mar 13 00:45:20.241276 kubelet[2427]: I0313 00:45:20.241257 2427 container_manager_linux.go:308] "Creating device plugin manager"
Mar 13 00:45:20.241838 kubelet[2427]: I0313 00:45:20.241577 2427 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 13 00:45:20.247575 kubelet[2427]: I0313 00:45:20.247413 2427 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 13 00:45:20.249118 kubelet[2427]: I0313 00:45:20.248959 2427 kubelet.go:482] "Attempting to sync node with API server"
Mar 13 00:45:20.249118 kubelet[2427]: I0313 00:45:20.249097 2427 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 00:45:20.249558 kubelet[2427]: I0313 00:45:20.249541 2427 kubelet.go:394] "Adding apiserver pod source"
Mar 13 00:45:20.249902 kubelet[2427]: I0313 00:45:20.249887 2427 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 00:45:20.256536 kubelet[2427]: I0313 00:45:20.256176 2427 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 13 00:45:20.260647 kubelet[2427]: I0313 00:45:20.260499 2427 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 13 00:45:20.260647 kubelet[2427]: I0313 00:45:20.260587 2427 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 13 00:45:20.261302 kubelet[2427]: W0313 00:45:20.261154 2427 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 13 00:45:20.268577 kubelet[2427]: I0313 00:45:20.268342 2427 server.go:1257] "Started kubelet"
Mar 13 00:45:20.271510 kubelet[2427]: I0313 00:45:20.271221 2427 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 00:45:20.272047 kubelet[2427]: I0313 00:45:20.271955 2427 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 13 00:45:20.272834 kubelet[2427]: I0313 00:45:20.272562 2427 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 00:45:20.273258 kubelet[2427]: I0313 00:45:20.273094 2427 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 00:45:20.274446 kubelet[2427]: I0313 00:45:20.274328 2427 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 13 00:45:20.276894 kubelet[2427]: I0313 00:45:20.276877 2427 server.go:317] "Adding debug handlers to kubelet server"
Mar 13 00:45:20.279834 kubelet[2427]: E0313 00:45:20.279013 2427 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 13 00:45:20.279834 kubelet[2427]: I0313 00:45:20.279313 2427 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 13 00:45:20.280264 kubelet[2427]: I0313 00:45:20.280244 2427 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 13 00:45:20.280327 kubelet[2427]: E0313 00:45:20.276513 2427 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c3fff97810a2b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-13 00:45:20.268159531 +0000 UTC m=+0.580961346,LastTimestamp:2026-03-13 00:45:20.268159531 +0000 UTC m=+0.580961346,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 13 00:45:20.280873 kubelet[2427]: I0313 00:45:20.280855 2427 reconciler.go:29] "Reconciler: start to sync state"
Mar 13 00:45:20.281534 kubelet[2427]: E0313 00:45:20.281419 2427 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="200ms"
Mar 13 00:45:20.282188 kubelet[2427]: I0313 00:45:20.282084 2427 factory.go:223] Registration of the systemd container factory successfully
Mar 13 00:45:20.282235 kubelet[2427]: I0313 00:45:20.282225 2427 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 13 00:45:20.282636 kubelet[2427]: I0313 00:45:20.282612 2427 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 13 00:45:20.288841 kubelet[2427]: I0313 00:45:20.288427 2427 factory.go:223] Registration of the containerd container factory successfully
Mar 13 00:45:20.290353 kubelet[2427]: E0313 00:45:20.290237 2427 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 13 00:45:20.325518 kubelet[2427]: I0313 00:45:20.325492 2427 cpu_manager.go:225] "Starting" policy="none"
Mar 13 00:45:20.325954 kubelet[2427]: I0313 00:45:20.325629 2427 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 13 00:45:20.326089 kubelet[2427]: I0313 00:45:20.326018 2427 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 13 00:45:20.330889 kubelet[2427]: I0313 00:45:20.330648 2427 policy_none.go:50] "Start"
Mar 13 00:45:20.331248 kubelet[2427]: I0313 00:45:20.331154 2427 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 13 00:45:20.331481 kubelet[2427]: I0313 00:45:20.331393 2427 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 13 00:45:20.335055 kubelet[2427]: I0313 00:45:20.334613 2427 policy_none.go:44] "Start"
Mar 13 00:45:20.349613 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 13 00:45:20.350515 kubelet[2427]: I0313 00:45:20.350477 2427 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 13 00:45:20.355555 kubelet[2427]: I0313 00:45:20.355535 2427 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 13 00:45:20.356523 kubelet[2427]: I0313 00:45:20.356504 2427 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 13 00:45:20.357016 kubelet[2427]: I0313 00:45:20.356999 2427 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 13 00:45:20.357301 kubelet[2427]: E0313 00:45:20.357277 2427 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 13 00:45:20.369248 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 13 00:45:20.377288 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 13 00:45:20.379321 kubelet[2427]: E0313 00:45:20.379261 2427 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 13 00:45:20.387353 kubelet[2427]: E0313 00:45:20.386973 2427 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 13 00:45:20.387608 kubelet[2427]: I0313 00:45:20.387454 2427 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 13 00:45:20.387864 kubelet[2427]: I0313 00:45:20.387636 2427 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 13 00:45:20.390277 kubelet[2427]: E0313 00:45:20.389469 2427 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 13 00:45:20.390903 kubelet[2427]: I0313 00:45:20.390537 2427 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 13 00:45:20.390903 kubelet[2427]: E0313 00:45:20.390537 2427 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 13 00:45:20.409317 kubelet[2427]: E0313 00:45:20.409106 2427 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c3fff97810a2b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-13 00:45:20.268159531 +0000 UTC m=+0.580961346,LastTimestamp:2026-03-13 00:45:20.268159531 +0000 UTC m=+0.580961346,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 13 00:45:20.483313 kubelet[2427]: E0313 00:45:20.483211 2427 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="400ms"
Mar 13 00:45:20.484874 systemd[1]: Created slice kubepods-burstable-pod467ed38a8e5128cf06a5565ae2185baf.slice - libcontainer container kubepods-burstable-pod467ed38a8e5128cf06a5565ae2185baf.slice.
Mar 13 00:45:20.489127 kubelet[2427]: I0313 00:45:20.489052 2427 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 13 00:45:20.489632 kubelet[2427]: E0313 00:45:20.489528 2427 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost"
Mar 13 00:45:20.495271 kubelet[2427]: E0313 00:45:20.495123 2427 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 13 00:45:20.500506 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice.
Mar 13 00:45:20.505979 kubelet[2427]: E0313 00:45:20.505597 2427 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 13 00:45:20.510482 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice.
Mar 13 00:45:20.516551 kubelet[2427]: E0313 00:45:20.516356 2427 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 13 00:45:20.584224 kubelet[2427]: I0313 00:45:20.584010 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/467ed38a8e5128cf06a5565ae2185baf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"467ed38a8e5128cf06a5565ae2185baf\") " pod="kube-system/kube-apiserver-localhost"
Mar 13 00:45:20.584459 kubelet[2427]: I0313 00:45:20.584135 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/467ed38a8e5128cf06a5565ae2185baf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"467ed38a8e5128cf06a5565ae2185baf\") " pod="kube-system/kube-apiserver-localhost"
Mar 13 00:45:20.584459 kubelet[2427]: I0313 00:45:20.584416 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:45:20.585019 kubelet[2427]: I0313 00:45:20.584926 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:45:20.585068 kubelet[2427]: I0313 00:45:20.585035 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:45:20.586261 kubelet[2427]: I0313 00:45:20.585440 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:45:20.586261 kubelet[2427]: I0313 00:45:20.586140 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/467ed38a8e5128cf06a5565ae2185baf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"467ed38a8e5128cf06a5565ae2185baf\") " pod="kube-system/kube-apiserver-localhost"
Mar 13 00:45:20.586261 kubelet[2427]: I0313 00:45:20.586168 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:45:20.586261 kubelet[2427]: I0313 00:45:20.586194 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 13 00:45:20.696999 kubelet[2427]: I0313 00:45:20.696177 2427 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 13 00:45:20.699851 kubelet[2427]: E0313 00:45:20.699824 2427 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost"
Mar 13 00:45:20.802297 kubelet[2427]: E0313 00:45:20.802125 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:20.807461 containerd[1555]: time="2026-03-13T00:45:20.807110694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:467ed38a8e5128cf06a5565ae2185baf,Namespace:kube-system,Attempt:0,}"
Mar 13 00:45:20.810828 kubelet[2427]: E0313 00:45:20.810577 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:20.811608 containerd[1555]: time="2026-03-13T00:45:20.811481474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}"
Mar 13 00:45:20.822196 kubelet[2427]: E0313 00:45:20.822002 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:20.824195 containerd[1555]: time="2026-03-13T00:45:20.824064243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}"
Mar 13 00:45:20.885441 kubelet[2427]: E0313 00:45:20.884969 2427 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="800ms"
Mar 13 00:45:21.105823 kubelet[2427]: I0313 00:45:21.105623 2427 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 13 00:45:21.106644 kubelet[2427]: E0313 00:45:21.106619 2427 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost"
Mar 13 00:45:21.288250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount695062927.mount: Deactivated successfully.
Mar 13 00:45:21.307313 containerd[1555]: time="2026-03-13T00:45:21.307123108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:45:21.314475 containerd[1555]: time="2026-03-13T00:45:21.314443040Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 13 00:45:21.318344 containerd[1555]: time="2026-03-13T00:45:21.318204517Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:45:21.323462 containerd[1555]: time="2026-03-13T00:45:21.323380096Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:45:21.326779 containerd[1555]: time="2026-03-13T00:45:21.326349173Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:45:21.328897 containerd[1555]: time="2026-03-13T00:45:21.328582036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 13 00:45:21.330911 containerd[1555]: time="2026-03-13T00:45:21.330520156Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 13 00:45:21.332911 containerd[1555]: time="2026-03-13T00:45:21.332886684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:45:21.334354 containerd[1555]: time="2026-03-13T00:45:21.334167338Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 522.042159ms"
Mar 13 00:45:21.342618 containerd[1555]: time="2026-03-13T00:45:21.341464250Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 514.180959ms"
Mar 13 00:45:21.342618 containerd[1555]: time="2026-03-13T00:45:21.342046956Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 527.83721ms"
Mar 13 00:45:21.430124 containerd[1555]: time="2026-03-13T00:45:21.429900056Z" level=info msg="connecting to shim 0d5f747741e3eb765ecab5dbba30cf0d0ab6a7645e9f7eb2e259ef041f5387cd" address="unix:///run/containerd/s/96d678ac885cfe5424605570757bea82a89d69e9869e95a219815e8581c44772" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:45:21.431119 containerd[1555]: time="2026-03-13T00:45:21.431092888Z" level=info msg="connecting to shim 72721676f0c224b053ac0b588fbd2a2b066d7f301d131a8f8f6c07d9edfa745f" address="unix:///run/containerd/s/b998096c24c0b9673415cce27259cbaec85512c9755491cffc136e724bb27275" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:45:21.436451 containerd[1555]: time="2026-03-13T00:45:21.436427615Z" level=info msg="connecting to shim e7968a11eb7ac8b6a0115996ab6ad193f8d05421da4785dd4e1502d7f6080408" address="unix:///run/containerd/s/0de2dc4c60a0b5264ab13e3461ca45d44e90c490686da1432b27ed3fec61872f" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:45:21.506443 systemd[1]: Started cri-containerd-72721676f0c224b053ac0b588fbd2a2b066d7f301d131a8f8f6c07d9edfa745f.scope - libcontainer container 72721676f0c224b053ac0b588fbd2a2b066d7f301d131a8f8f6c07d9edfa745f.
Mar 13 00:45:21.526048 systemd[1]: Started cri-containerd-0d5f747741e3eb765ecab5dbba30cf0d0ab6a7645e9f7eb2e259ef041f5387cd.scope - libcontainer container 0d5f747741e3eb765ecab5dbba30cf0d0ab6a7645e9f7eb2e259ef041f5387cd.
Mar 13 00:45:21.529086 systemd[1]: Started cri-containerd-e7968a11eb7ac8b6a0115996ab6ad193f8d05421da4785dd4e1502d7f6080408.scope - libcontainer container e7968a11eb7ac8b6a0115996ab6ad193f8d05421da4785dd4e1502d7f6080408.
Mar 13 00:45:21.641019 containerd[1555]: time="2026-03-13T00:45:21.640982859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:467ed38a8e5128cf06a5565ae2185baf,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d5f747741e3eb765ecab5dbba30cf0d0ab6a7645e9f7eb2e259ef041f5387cd\""
Mar 13 00:45:21.646806 kubelet[2427]: E0313 00:45:21.646189 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:21.670374 containerd[1555]: time="2026-03-13T00:45:21.669966459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7968a11eb7ac8b6a0115996ab6ad193f8d05421da4785dd4e1502d7f6080408\""
Mar 13 00:45:21.672035 kubelet[2427]: E0313 00:45:21.672017 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:21.673547 containerd[1555]: time="2026-03-13T00:45:21.673444156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"72721676f0c224b053ac0b588fbd2a2b066d7f301d131a8f8f6c07d9edfa745f\""
Mar 13 00:45:21.673985 containerd[1555]: time="2026-03-13T00:45:21.673475635Z" level=info msg="CreateContainer within sandbox \"0d5f747741e3eb765ecab5dbba30cf0d0ab6a7645e9f7eb2e259ef041f5387cd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 13 00:45:21.674596 kubelet[2427]: E0313 00:45:21.674580 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:21.684238 containerd[1555]: time="2026-03-13T00:45:21.684032834Z" level=info msg="CreateContainer within sandbox \"e7968a11eb7ac8b6a0115996ab6ad193f8d05421da4785dd4e1502d7f6080408\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 13 00:45:21.687201 kubelet[2427]: E0313 00:45:21.686863 2427 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="1.6s"
Mar 13 00:45:21.689078 containerd[1555]: time="2026-03-13T00:45:21.689046798Z" level=info msg="CreateContainer within sandbox \"72721676f0c224b053ac0b588fbd2a2b066d7f301d131a8f8f6c07d9edfa745f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 13 00:45:21.706474 containerd[1555]: time="2026-03-13T00:45:21.706367609Z" level=info msg="Container 52abf87e10d21c982984de846c2a950fbc78213ae08fc9fa65eeec203c5b2cb7: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:45:21.712320 containerd[1555]: time="2026-03-13T00:45:21.712296798Z" level=info msg="Container 66f151dc5507df24b0d2ac5d3a4fa2467b02554559f1146bd950dd7547786943: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:45:21.721636 containerd[1555]: time="2026-03-13T00:45:21.721526963Z" level=info msg="CreateContainer within sandbox \"0d5f747741e3eb765ecab5dbba30cf0d0ab6a7645e9f7eb2e259ef041f5387cd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"52abf87e10d21c982984de846c2a950fbc78213ae08fc9fa65eeec203c5b2cb7\""
Mar 13 00:45:21.724048 containerd[1555]: time="2026-03-13T00:45:21.723961494Z" level=info msg="StartContainer for \"52abf87e10d21c982984de846c2a950fbc78213ae08fc9fa65eeec203c5b2cb7\""
Mar 13 00:45:21.726050 containerd[1555]: time="2026-03-13T00:45:21.726024948Z" level=info msg="connecting to shim 52abf87e10d21c982984de846c2a950fbc78213ae08fc9fa65eeec203c5b2cb7" address="unix:///run/containerd/s/96d678ac885cfe5424605570757bea82a89d69e9869e95a219815e8581c44772" protocol=ttrpc version=3
Mar 13 00:45:21.731473 containerd[1555]: time="2026-03-13T00:45:21.731264027Z" level=info msg="Container 3acfaffcae2328605b94c1d6ef8a2103b71119c354580c4896c8168204a8e6ac: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:45:21.732312 containerd[1555]: time="2026-03-13T00:45:21.732289263Z" level=info msg="CreateContainer within sandbox \"e7968a11eb7ac8b6a0115996ab6ad193f8d05421da4785dd4e1502d7f6080408\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"66f151dc5507df24b0d2ac5d3a4fa2467b02554559f1146bd950dd7547786943\""
Mar 13 00:45:21.733496 containerd[1555]: time="2026-03-13T00:45:21.733474220Z" level=info msg="StartContainer for \"66f151dc5507df24b0d2ac5d3a4fa2467b02554559f1146bd950dd7547786943\""
Mar 13 00:45:21.734960 containerd[1555]: time="2026-03-13T00:45:21.734936584Z" level=info msg="connecting to shim 66f151dc5507df24b0d2ac5d3a4fa2467b02554559f1146bd950dd7547786943" address="unix:///run/containerd/s/0de2dc4c60a0b5264ab13e3461ca45d44e90c490686da1432b27ed3fec61872f" protocol=ttrpc version=3
Mar 13 00:45:21.747192 containerd[1555]: time="2026-03-13T00:45:21.747042466Z" level=info msg="CreateContainer within sandbox \"72721676f0c224b053ac0b588fbd2a2b066d7f301d131a8f8f6c07d9edfa745f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3acfaffcae2328605b94c1d6ef8a2103b71119c354580c4896c8168204a8e6ac\""
Mar 13 00:45:21.750979 containerd[1555]: time="2026-03-13T00:45:21.750596000Z" level=info msg="StartContainer for \"3acfaffcae2328605b94c1d6ef8a2103b71119c354580c4896c8168204a8e6ac\""
Mar 13 00:45:21.752124 containerd[1555]: time="2026-03-13T00:45:21.752016955Z" level=info msg="connecting to shim 3acfaffcae2328605b94c1d6ef8a2103b71119c354580c4896c8168204a8e6ac" address="unix:///run/containerd/s/b998096c24c0b9673415cce27259cbaec85512c9755491cffc136e724bb27275" protocol=ttrpc version=3
Mar 13 00:45:21.773824 systemd[1]: Started cri-containerd-52abf87e10d21c982984de846c2a950fbc78213ae08fc9fa65eeec203c5b2cb7.scope - libcontainer container 52abf87e10d21c982984de846c2a950fbc78213ae08fc9fa65eeec203c5b2cb7.
Mar 13 00:45:21.785563 systemd[1]: Started cri-containerd-66f151dc5507df24b0d2ac5d3a4fa2467b02554559f1146bd950dd7547786943.scope - libcontainer container 66f151dc5507df24b0d2ac5d3a4fa2467b02554559f1146bd950dd7547786943.
Mar 13 00:45:21.798010 systemd[1]: Started cri-containerd-3acfaffcae2328605b94c1d6ef8a2103b71119c354580c4896c8168204a8e6ac.scope - libcontainer container 3acfaffcae2328605b94c1d6ef8a2103b71119c354580c4896c8168204a8e6ac.
Mar 13 00:45:21.909137 kubelet[2427]: I0313 00:45:21.909112 2427 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 13 00:45:21.911139 kubelet[2427]: E0313 00:45:21.911115 2427 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost"
Mar 13 00:45:21.963332 containerd[1555]: time="2026-03-13T00:45:21.963143671Z" level=info msg="StartContainer for \"52abf87e10d21c982984de846c2a950fbc78213ae08fc9fa65eeec203c5b2cb7\" returns successfully"
Mar 13 00:45:21.968157 containerd[1555]: time="2026-03-13T00:45:21.968054854Z" level=info msg="StartContainer for \"66f151dc5507df24b0d2ac5d3a4fa2467b02554559f1146bd950dd7547786943\" returns successfully"
Mar 13 00:45:21.968615 containerd[1555]: time="2026-03-13T00:45:21.968302835Z" level=info msg="StartContainer for \"3acfaffcae2328605b94c1d6ef8a2103b71119c354580c4896c8168204a8e6ac\" returns successfully"
Mar 13 00:45:22.377569 kubelet[2427]: E0313 00:45:22.377442 2427 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 13 00:45:22.378091 kubelet[2427]: E0313 00:45:22.377651 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:22.383254 kubelet[2427]: E0313 00:45:22.382869 2427 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 13 00:45:22.383254 kubelet[2427]: E0313 00:45:22.383002 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:22.389456 kubelet[2427]: E0313 00:45:22.389254 2427 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 13 00:45:22.389635 kubelet[2427]: E0313 00:45:22.389551 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:23.393282 kubelet[2427]: E0313 00:45:23.392956 2427 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 13 00:45:23.393282 kubelet[2427]: E0313 00:45:23.393206 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:23.396484 kubelet[2427]: E0313 00:45:23.396370 2427 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 13 00:45:23.397174 kubelet[2427]: E0313 00:45:23.397000 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:23.516033 kubelet[2427]: I0313 00:45:23.514084 2427 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 13 00:45:23.830627 kubelet[2427]: E0313 00:45:23.830110 2427 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 13 00:45:24.038323 kubelet[2427]: I0313 00:45:24.038122 2427 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 13 00:45:24.038323 kubelet[2427]: E0313 00:45:24.038170 2427 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 13 00:45:24.083296 kubelet[2427]: I0313 00:45:24.082417 2427 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:45:24.105285 kubelet[2427]: E0313 00:45:24.105014 2427 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:45:24.105285 kubelet[2427]: I0313 00:45:24.105045 2427 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 13 00:45:24.114823 kubelet[2427]: E0313 00:45:24.113606 2427 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 13 00:45:24.114972 kubelet[2427]: I0313 00:45:24.114957 2427 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 13 00:45:24.120281 kubelet[2427]: E0313 00:45:24.120253 2427 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 13 00:45:24.255274 kubelet[2427]: I0313 00:45:24.254991 2427 apiserver.go:52] "Watching apiserver"
Mar 13 00:45:24.281287 kubelet[2427]: I0313 00:45:24.281012 2427 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 13 00:45:24.394231 kubelet[2427]: I0313 00:45:24.393947 2427 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 13 00:45:24.397424 kubelet[2427]: E0313 00:45:24.397285 2427 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 13 00:45:24.398004 kubelet[2427]: E0313 00:45:24.397616 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:25.607928 kubelet[2427]: I0313 00:45:25.607491 2427 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:45:25.628129 kubelet[2427]: E0313 00:45:25.627942 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:26.156215 systemd[1]: Reload requested from client PID 2725 ('systemctl') (unit session-9.scope)...
Mar 13 00:45:26.156318 systemd[1]: Reloading...
Mar 13 00:45:26.300868 zram_generator::config[2771]: No configuration found.
Mar 13 00:45:26.400992 kubelet[2427]: E0313 00:45:26.400651 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:26.595911 systemd[1]: Reloading finished in 438 ms.
Mar 13 00:45:26.656579 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:45:26.675199 systemd[1]: kubelet.service: Deactivated successfully.
Mar 13 00:45:26.675604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:45:26.675903 systemd[1]: kubelet.service: Consumed 1.538s CPU time, 127.4M memory peak.
Mar 13 00:45:26.679337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:45:26.942125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:45:26.955179 (kubelet)[2813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 13 00:45:27.091477 kubelet[2813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 00:45:27.108274 kubelet[2813]: I0313 00:45:27.108142 2813 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 13 00:45:27.108274 kubelet[2813]: I0313 00:45:27.108252 2813 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 00:45:27.108274 kubelet[2813]: I0313 00:45:27.108270 2813 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 13 00:45:27.108274 kubelet[2813]: I0313 00:45:27.108275 2813 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 13 00:45:27.108620 kubelet[2813]: I0313 00:45:27.108533 2813 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 13 00:45:27.110237 kubelet[2813]: I0313 00:45:27.110065 2813 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 13 00:45:27.115405 kubelet[2813]: I0313 00:45:27.115041 2813 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 13 00:45:27.126183 kubelet[2813]: I0313 00:45:27.126165 2813 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 00:45:27.147128 kubelet[2813]: I0313 00:45:27.146882 2813 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 13 00:45:27.148058 kubelet[2813]: I0313 00:45:27.147638 2813 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 13 00:45:27.148165 kubelet[2813]: I0313 00:45:27.147975 2813 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 00:45:27.148165 kubelet[2813]: I0313 00:45:27.148158 2813 topology_manager.go:143] "Creating topology manager with none policy"
Mar 13 00:45:27.148165 kubelet[2813]: I0313 00:45:27.148167 2813 container_manager_linux.go:308] "Creating device plugin manager"
Mar 13 00:45:27.148517 kubelet[2813]: I0313 00:45:27.148194 2813 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 13 00:45:27.148951 kubelet[2813]: I0313 00:45:27.148619 2813 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 13 00:45:27.149842 kubelet[2813]: I0313 00:45:27.149570 2813 kubelet.go:482] "Attempting to sync node with API server"
Mar 13 00:45:27.149919 kubelet[2813]: I0313 00:45:27.149902 2813 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 00:45:27.149957 kubelet[2813]: I0313 00:45:27.149931 2813 kubelet.go:394] "Adding apiserver pod source"
Mar 13 00:45:27.149957 kubelet[2813]: I0313 00:45:27.149940 2813 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 00:45:27.153220 kubelet[2813]: I0313 00:45:27.153104 2813 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 13 00:45:27.155520 kubelet[2813]: I0313 00:45:27.155504 2813 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 13 00:45:27.155591 kubelet[2813]: I0313 00:45:27.155581 2813 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 13 00:45:27.164106 kubelet[2813]: I0313 00:45:27.164056 2813 server.go:1257] "Started kubelet"
Mar 13 00:45:27.168306 kubelet[2813]: I0313 00:45:27.168180 2813 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 13 00:45:27.179477 kubelet[2813]: I0313 00:45:27.179451 2813 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 00:45:27.189651 kubelet[2813]: I0313 00:45:27.189632 2813 server.go:317] "Adding debug handlers to kubelet server"
Mar 13 00:45:27.200145 kubelet[2813]: I0313 00:45:27.197612 2813 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 00:45:27.200145 kubelet[2813]: I0313 00:45:27.198024 2813 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 13 00:45:27.205274 kubelet[2813]: I0313 00:45:27.205258 2813 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 00:45:27.213809 kubelet[2813]: I0313 00:45:27.213418 2813 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 13 00:45:27.215332 kubelet[2813]: I0313 00:45:27.213917 2813 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 13 00:45:27.220095 kubelet[2813]: I0313 00:45:27.218075 2813 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 13 00:45:27.224213 kubelet[2813]: I0313 00:45:27.222316 2813 factory.go:223] Registration of the systemd container factory successfully
Mar 13 00:45:27.224213 kubelet[2813]: I0313 00:45:27.223612 2813 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 13 00:45:27.229093 kubelet[2813]: I0313 00:45:27.222433 2813 reconciler.go:29] "Reconciler: start to sync state"
Mar 13 00:45:27.233888 kubelet[2813]: E0313 00:45:27.232205 2813 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 13 00:45:27.247984 kubelet[2813]: I0313 00:45:27.247570 2813 factory.go:223] Registration of the containerd container factory successfully
Mar 13 00:45:27.268898 kubelet[2813]: I0313 00:45:27.268631 2813 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 13 00:45:27.284797 kubelet[2813]: I0313 00:45:27.284066 2813 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 13 00:45:27.284797 kubelet[2813]: I0313 00:45:27.284088 2813 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 13 00:45:27.284797 kubelet[2813]: I0313 00:45:27.284110 2813 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 13 00:45:27.284797 kubelet[2813]: E0313 00:45:27.284159 2813 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 13 00:45:27.350024 kubelet[2813]: I0313 00:45:27.349567 2813 cpu_manager.go:225] "Starting" policy="none"
Mar 13 00:45:27.350024 kubelet[2813]: I0313 00:45:27.349584 2813 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 13 00:45:27.350024 kubelet[2813]: I0313 00:45:27.349602 2813 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 13 00:45:27.350024 kubelet[2813]: I0313 00:45:27.349869 2813 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 13 00:45:27.350024 kubelet[2813]: I0313 00:45:27.349880 2813 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 13 00:45:27.350024 kubelet[2813]: I0313 00:45:27.349896 2813 policy_none.go:50] "Start"
Mar 13 00:45:27.350024 kubelet[2813]: I0313 00:45:27.349905 2813 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 13 00:45:27.350024 kubelet[2813]: I0313 00:45:27.349915 2813 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 13 00:45:27.350575 kubelet[2813]: I0313 00:45:27.350416 2813 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 13 00:45:27.350575 kubelet[2813]: I0313 00:45:27.350557 2813 policy_none.go:44] "Start"
Mar 13 00:45:27.369261 kubelet[2813]: E0313 00:45:27.369018 2813 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 13 00:45:27.369335 kubelet[2813]: I0313 00:45:27.369274 2813 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 13 00:45:27.369335 kubelet[2813]: I0313 00:45:27.369285 2813 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 13 00:45:27.370369 kubelet[2813]: I0313 00:45:27.370230 2813 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 13 00:45:27.380019 kubelet[2813]: E0313 00:45:27.379998 2813 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 13 00:45:27.387645 kubelet[2813]: I0313 00:45:27.386930 2813 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:45:27.392478 kubelet[2813]: I0313 00:45:27.392238 2813 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 13 00:45:27.393084 kubelet[2813]: I0313 00:45:27.392575 2813 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 13 00:45:27.413398 kubelet[2813]: E0313 00:45:27.413086 2813 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:45:27.432196 kubelet[2813]: I0313 00:45:27.432000 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 13 00:45:27.432196 kubelet[2813]: I0313 00:45:27.432103 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/467ed38a8e5128cf06a5565ae2185baf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"467ed38a8e5128cf06a5565ae2185baf\") " pod="kube-system/kube-apiserver-localhost"
Mar 13 00:45:27.432196 kubelet[2813]: I0313 00:45:27.432128 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:45:27.432196 kubelet[2813]: I0313 00:45:27.432150 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:45:27.432196 kubelet[2813]: I0313 00:45:27.432172 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/467ed38a8e5128cf06a5565ae2185baf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"467ed38a8e5128cf06a5565ae2185baf\") " pod="kube-system/kube-apiserver-localhost"
Mar 13 00:45:27.432365 kubelet[2813]: I0313 00:45:27.432194 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/467ed38a8e5128cf06a5565ae2185baf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"467ed38a8e5128cf06a5565ae2185baf\") " pod="kube-system/kube-apiserver-localhost"
Mar 13 00:45:27.432877 kubelet[2813]: I0313 00:45:27.432213 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:45:27.432877 kubelet[2813]: I0313 00:45:27.432543 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:45:27.432877 kubelet[2813]: I0313 00:45:27.432565 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:45:27.487297 kubelet[2813]: I0313 00:45:27.486646 2813 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 13 00:45:27.499363 kubelet[2813]: I0313 00:45:27.497885 2813 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Mar 13 00:45:27.499363 kubelet[2813]: I0313 00:45:27.498024 2813 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 13 00:45:27.704415 kubelet[2813]: E0313 00:45:27.704069 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:27.704415 kubelet[2813]: E0313 00:45:27.704118 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:27.713947 kubelet[2813]: E0313 00:45:27.713451 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:28.152323 kubelet[2813]: I0313 00:45:28.152209 2813 apiserver.go:52] "Watching apiserver"
Mar 13 00:45:28.224858 kubelet[2813]: I0313 00:45:28.224582 2813 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 13 00:45:28.323832 kubelet[2813]: I0313 00:45:28.323628 2813 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 13 00:45:28.325830 kubelet[2813]: I0313 00:45:28.324641 2813 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:45:28.326460 kubelet[2813]: I0313 00:45:28.326119 2813 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 13 00:45:28.337948 kubelet[2813]: E0313 00:45:28.337564 2813 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 13 00:45:28.337948 kubelet[2813]: E0313 00:45:28.337875 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:45:28.347079 kubelet[2813]: E0313 00:45:28.347039 2813 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 13 00:45:28.347473 kubelet[2813]: E0313 00:45:28.347455 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:45:28.351410 kubelet[2813]: E0313 00:45:28.351095 2813 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:45:28.351410 kubelet[2813]: E0313 00:45:28.351361 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:45:29.329085 kubelet[2813]: E0313 00:45:29.326240 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:45:29.329085 kubelet[2813]: E0313 00:45:29.327026 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:45:29.330994 kubelet[2813]: E0313 00:45:29.327652 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:45:29.424073 kubelet[2813]: I0313 00:45:29.423867 2813 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.423426209 podStartE2EDuration="4.423426209s" podCreationTimestamp="2026-03-13 00:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:45:29.410597527 +0000 UTC m=+2.441382207" watchObservedRunningTime="2026-03-13 00:45:29.423426209 +0000 UTC m=+2.454210889" Mar 13 00:45:29.446053 kubelet[2813]: I0313 00:45:29.445995 2813 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.445979067 podStartE2EDuration="2.445979067s" podCreationTimestamp="2026-03-13 00:45:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:45:29.445478243 +0000 UTC m=+2.476262924" watchObservedRunningTime="2026-03-13 00:45:29.445979067 +0000 UTC m=+2.476763747" Mar 13 00:45:29.446344 kubelet[2813]: I0313 00:45:29.446118 2813 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.446114219 podStartE2EDuration="2.446114219s" podCreationTimestamp="2026-03-13 00:45:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:45:29.42458289 +0000 UTC m=+2.455367570" watchObservedRunningTime="2026-03-13 00:45:29.446114219 +0000 UTC m=+2.476898899" Mar 13 00:45:30.330070 kubelet[2813]: E0313 00:45:30.330004 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:45:31.333400 kubelet[2813]: E0313 00:45:31.333359 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:45:32.326351 kubelet[2813]: E0313 00:45:32.326264 2813 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:45:32.897219 kubelet[2813]: I0313 00:45:32.896892 2813 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 00:45:32.899236 kubelet[2813]: I0313 00:45:32.897606 2813 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 00:45:32.899268 containerd[1555]: time="2026-03-13T00:45:32.897455972Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 13 00:45:33.683971 systemd[1]: Created slice kubepods-besteffort-pod5b2815b9_021a_4b4f_8f8a_ec5428e7e712.slice - libcontainer container kubepods-besteffort-pod5b2815b9_021a_4b4f_8f8a_ec5428e7e712.slice. Mar 13 00:45:33.689626 kubelet[2813]: I0313 00:45:33.689506 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5b2815b9-021a-4b4f-8f8a-ec5428e7e712-kube-proxy\") pod \"kube-proxy-k2m84\" (UID: \"5b2815b9-021a-4b4f-8f8a-ec5428e7e712\") " pod="kube-system/kube-proxy-k2m84" Mar 13 00:45:33.690174 kubelet[2813]: I0313 00:45:33.690026 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b2815b9-021a-4b4f-8f8a-ec5428e7e712-xtables-lock\") pod \"kube-proxy-k2m84\" (UID: \"5b2815b9-021a-4b4f-8f8a-ec5428e7e712\") " pod="kube-system/kube-proxy-k2m84" Mar 13 00:45:33.690174 kubelet[2813]: I0313 00:45:33.690124 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbkr6\" (UniqueName: \"kubernetes.io/projected/5b2815b9-021a-4b4f-8f8a-ec5428e7e712-kube-api-access-gbkr6\") pod \"kube-proxy-k2m84\" (UID: \"5b2815b9-021a-4b4f-8f8a-ec5428e7e712\") " pod="kube-system/kube-proxy-k2m84" Mar 13 00:45:33.690174 kubelet[2813]: I0313 00:45:33.690144 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b2815b9-021a-4b4f-8f8a-ec5428e7e712-lib-modules\") pod \"kube-proxy-k2m84\" (UID: \"5b2815b9-021a-4b4f-8f8a-ec5428e7e712\") " pod="kube-system/kube-proxy-k2m84" Mar 13 00:45:33.805358 kubelet[2813]: E0313 00:45:33.805184 2813 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 13 00:45:33.805358 kubelet[2813]: E0313 00:45:33.805297 2813 projected.go:196] Error preparing data for projected volume kube-api-access-gbkr6 for pod kube-system/kube-proxy-k2m84: configmap "kube-root-ca.crt" not found Mar 13 00:45:33.805795 kubelet[2813]: E0313 00:45:33.805600 2813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5b2815b9-021a-4b4f-8f8a-ec5428e7e712-kube-api-access-gbkr6 podName:5b2815b9-021a-4b4f-8f8a-ec5428e7e712 nodeName:}" failed. No retries permitted until 2026-03-13 00:45:34.305497141 +0000 UTC m=+7.336281820 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gbkr6" (UniqueName: "kubernetes.io/projected/5b2815b9-021a-4b4f-8f8a-ec5428e7e712-kube-api-access-gbkr6") pod "kube-proxy-k2m84" (UID: "5b2815b9-021a-4b4f-8f8a-ec5428e7e712") : configmap "kube-root-ca.crt" not found Mar 13 00:45:34.108037 systemd[1]: Created slice kubepods-besteffort-pod95a09cb8_bf1c_4724_b528_2db22330846c.slice - libcontainer container kubepods-besteffort-pod95a09cb8_bf1c_4724_b528_2db22330846c.slice. Mar 13 00:45:34.194307 kubelet[2813]: I0313 00:45:34.194078 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn5kc\" (UniqueName: \"kubernetes.io/projected/95a09cb8-bf1c-4724-b528-2db22330846c-kube-api-access-jn5kc\") pod \"tigera-operator-6cf4cccc57-fncnf\" (UID: \"95a09cb8-bf1c-4724-b528-2db22330846c\") " pod="tigera-operator/tigera-operator-6cf4cccc57-fncnf" Mar 13 00:45:34.194307 kubelet[2813]: I0313 00:45:34.194179 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/95a09cb8-bf1c-4724-b528-2db22330846c-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-fncnf\" (UID: \"95a09cb8-bf1c-4724-b528-2db22330846c\") " pod="tigera-operator/tigera-operator-6cf4cccc57-fncnf" Mar 13 00:45:34.422607 containerd[1555]: time="2026-03-13T00:45:34.422058622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-fncnf,Uid:95a09cb8-bf1c-4724-b528-2db22330846c,Namespace:tigera-operator,Attempt:0,}" Mar 13 00:45:34.466432 containerd[1555]: time="2026-03-13T00:45:34.466312995Z" level=info msg="connecting to shim 0c8d4d936645d4ae2e98a3ccd87ebb63d0bd0b6601202fe7bad87a6a0fb11499" address="unix:///run/containerd/s/1adb176205b8d386e19379b078297deb77c074e42eedeb7e60ea16047fc8b1e0" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:45:34.527116 systemd[1]: Started cri-containerd-0c8d4d936645d4ae2e98a3ccd87ebb63d0bd0b6601202fe7bad87a6a0fb11499.scope - libcontainer container 0c8d4d936645d4ae2e98a3ccd87ebb63d0bd0b6601202fe7bad87a6a0fb11499. 
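The MountVolume.SetUp failure a few entries above is transient: the projected service-account volume needs the kube-root-ca.crt ConfigMap, which the controller-manager publishes into each namespace shortly after the control plane comes up, so the kubelet simply requeues the mount with an increasing delay (the "durationBeforeRetry 500ms" in the log). A minimal sketch of that retry-with-backoff pattern, assuming a 2x growth factor and a cap chosen for illustration rather than kubelet's exact tuning; mountVolume is a hypothetical stand-in, not kubelet code:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// mountVolume stands in for MountVolume.SetUp; in this sketch it keeps
// failing until the third attempt, as if the kube-root-ca.crt ConfigMap
// had just been published (hypothetical stand-in, not kubelet code).
func mountVolume(attempt int) error {
	if attempt < 3 {
		return errors.New(`configmap "kube-root-ca.crt" not found`)
	}
	return nil
}

func main() {
	// Start at the 500ms seen in the log ("durationBeforeRetry 500ms")
	// and double after each failure; the 16s cap is an assumption.
	delay, maxDelay := 500*time.Millisecond, 16*time.Second
	for attempt := 0; ; attempt++ {
		err := mountVolume(attempt)
		if err == nil {
			fmt.Println("volume mounted")
			return
		}
		fmt.Printf("attempt %d failed: %v; retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

Once the ConfigMap exists the requeued mount succeeds, which is why the kube-proxy-k2m84 sandbox is up and running by 00:45:34 in the entries that follow.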
Mar 13 00:45:34.605926 kubelet[2813]: E0313 00:45:34.605332 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:45:34.611478 containerd[1555]: time="2026-03-13T00:45:34.611064437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k2m84,Uid:5b2815b9-021a-4b4f-8f8a-ec5428e7e712,Namespace:kube-system,Attempt:0,}" Mar 13 00:45:34.630191 containerd[1555]: time="2026-03-13T00:45:34.630031982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-fncnf,Uid:95a09cb8-bf1c-4724-b528-2db22330846c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0c8d4d936645d4ae2e98a3ccd87ebb63d0bd0b6601202fe7bad87a6a0fb11499\"" Mar 13 00:45:34.635927 containerd[1555]: time="2026-03-13T00:45:34.634945958Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 13 00:45:34.664457 containerd[1555]: time="2026-03-13T00:45:34.664402159Z" level=info msg="connecting to shim 6a57be94869ca0e7b200cb3cea93ba3bd65c14d8a11a2e0c73b18d61c51403ee" address="unix:///run/containerd/s/f2a77ae3592e55ec0bfcd13c298dc6ad2480ad5ae2d7bfdc49b1d11656d4bd26" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:45:34.710076 systemd[1]: Started cri-containerd-6a57be94869ca0e7b200cb3cea93ba3bd65c14d8a11a2e0c73b18d61c51403ee.scope - libcontainer container 6a57be94869ca0e7b200cb3cea93ba3bd65c14d8a11a2e0c73b18d61c51403ee. Mar 13 00:45:34.766950 containerd[1555]: time="2026-03-13T00:45:34.766426035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k2m84,Uid:5b2815b9-021a-4b4f-8f8a-ec5428e7e712,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a57be94869ca0e7b200cb3cea93ba3bd65c14d8a11a2e0c73b18d61c51403ee\"" Mar 13 00:45:34.769856 kubelet[2813]: E0313 00:45:34.769420 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:45:34.784224 containerd[1555]: time="2026-03-13T00:45:34.784094412Z" level=info msg="CreateContainer within sandbox \"6a57be94869ca0e7b200cb3cea93ba3bd65c14d8a11a2e0c73b18d61c51403ee\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 00:45:34.810261 containerd[1555]: time="2026-03-13T00:45:34.810062034Z" level=info msg="Container cc3f82f8a40afb7873dbf7d658e455da2b98071a6f40f61ba3de08e6d702edb9: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:45:34.828347 containerd[1555]: time="2026-03-13T00:45:34.828184724Z" level=info msg="CreateContainer within sandbox \"6a57be94869ca0e7b200cb3cea93ba3bd65c14d8a11a2e0c73b18d61c51403ee\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cc3f82f8a40afb7873dbf7d658e455da2b98071a6f40f61ba3de08e6d702edb9\"" Mar 13 00:45:34.830283 containerd[1555]: time="2026-03-13T00:45:34.830004668Z" level=info msg="StartContainer for \"cc3f82f8a40afb7873dbf7d658e455da2b98071a6f40f61ba3de08e6d702edb9\"" Mar 13 00:45:34.834922 containerd[1555]: time="2026-03-13T00:45:34.834897363Z" level=info msg="connecting to shim cc3f82f8a40afb7873dbf7d658e455da2b98071a6f40f61ba3de08e6d702edb9" address="unix:///run/containerd/s/f2a77ae3592e55ec0bfcd13c298dc6ad2480ad5ae2d7bfdc49b1d11656d4bd26" protocol=ttrpc version=3 Mar 13 00:45:34.900911 systemd[1]: Started cri-containerd-cc3f82f8a40afb7873dbf7d658e455da2b98071a6f40f61ba3de08e6d702edb9.scope - libcontainer container 
cc3f82f8a40afb7873dbf7d658e455da2b98071a6f40f61ba3de08e6d702edb9. Mar 13 00:45:35.048013 containerd[1555]: time="2026-03-13T00:45:35.047460758Z" level=info msg="StartContainer for \"cc3f82f8a40afb7873dbf7d658e455da2b98071a6f40f61ba3de08e6d702edb9\" returns successfully" Mar 13 00:45:35.357467 kubelet[2813]: E0313 00:45:35.357362 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:45:35.895995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339357299.mount: Deactivated successfully. Mar 13 00:45:37.916974 kubelet[2813]: E0313 00:45:37.916517 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:45:37.934648 kubelet[2813]: I0313 00:45:37.934592 2813 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-k2m84" podStartSLOduration=4.934578431 podStartE2EDuration="4.934578431s" podCreationTimestamp="2026-03-13 00:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:45:35.38021417 +0000 UTC m=+8.410998850" watchObservedRunningTime="2026-03-13 00:45:37.934578431 +0000 UTC m=+10.965363121" Mar 13 00:45:38.270005 containerd[1555]: time="2026-03-13T00:45:38.269426532Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:38.272234 containerd[1555]: time="2026-03-13T00:45:38.272122470Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 13 00:45:38.275155 containerd[1555]: time="2026-03-13T00:45:38.275026907Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:38.279970 containerd[1555]: time="2026-03-13T00:45:38.279866129Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:38.280925 containerd[1555]: time="2026-03-13T00:45:38.280495694Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.645515281s" Mar 13 00:45:38.280925 containerd[1555]: time="2026-03-13T00:45:38.280620026Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 13 00:45:38.292560 containerd[1555]: time="2026-03-13T00:45:38.292530245Z" level=info msg="CreateContainer within sandbox \"0c8d4d936645d4ae2e98a3ccd87ebb63d0bd0b6601202fe7bad87a6a0fb11499\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 13 00:45:38.305270 containerd[1555]: time="2026-03-13T00:45:38.304622162Z" level=info msg="Container 48e2a09139364240a68dcb64849d0cb9f5f4c262b0347ef17d0211ae7a05f43e: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:45:38.319339 containerd[1555]: 
time="2026-03-13T00:45:38.319134897Z" level=info msg="CreateContainer within sandbox \"0c8d4d936645d4ae2e98a3ccd87ebb63d0bd0b6601202fe7bad87a6a0fb11499\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"48e2a09139364240a68dcb64849d0cb9f5f4c262b0347ef17d0211ae7a05f43e\"" Mar 13 00:45:38.320480 containerd[1555]: time="2026-03-13T00:45:38.320456558Z" level=info msg="StartContainer for \"48e2a09139364240a68dcb64849d0cb9f5f4c262b0347ef17d0211ae7a05f43e\"" Mar 13 00:45:38.322459 containerd[1555]: time="2026-03-13T00:45:38.322328180Z" level=info msg="connecting to shim 48e2a09139364240a68dcb64849d0cb9f5f4c262b0347ef17d0211ae7a05f43e" address="unix:///run/containerd/s/1adb176205b8d386e19379b078297deb77c074e42eedeb7e60ea16047fc8b1e0" protocol=ttrpc version=3 Mar 13 00:45:38.379320 systemd[1]: Started cri-containerd-48e2a09139364240a68dcb64849d0cb9f5f4c262b0347ef17d0211ae7a05f43e.scope - libcontainer container 48e2a09139364240a68dcb64849d0cb9f5f4c262b0347ef17d0211ae7a05f43e. Mar 13 00:45:38.380498 kubelet[2813]: E0313 00:45:38.380384 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:45:38.472356 containerd[1555]: time="2026-03-13T00:45:38.472267815Z" level=info msg="StartContainer for \"48e2a09139364240a68dcb64849d0cb9f5f4c262b0347ef17d0211ae7a05f43e\" returns successfully" Mar 13 00:45:48.225515 kubelet[2813]: E0313 00:45:48.222533 2813 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.442s" Mar 13 00:45:49.708601 kubelet[2813]: E0313 00:45:49.707575 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:45:49.952377 kubelet[2813]: E0313 00:45:49.946341 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:45:50.417649 kubelet[2813]: I0313 00:45:50.356571 2813 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-fncnf" podStartSLOduration=12.707447138 podStartE2EDuration="16.356235829s" podCreationTimestamp="2026-03-13 00:45:34 +0000 UTC" firstStartedPulling="2026-03-13 00:45:34.633443624 +0000 UTC m=+7.664228304" lastFinishedPulling="2026-03-13 00:45:38.282232324 +0000 UTC m=+11.313016995" observedRunningTime="2026-03-13 00:45:50.332593235 +0000 UTC m=+23.363377925" watchObservedRunningTime="2026-03-13 00:45:50.356235829 +0000 UTC m=+23.387020509" Mar 13 00:45:50.417649 kubelet[2813]: E0313 00:45:50.397219 2813 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.106s" Mar 13 00:46:01.235157 sudo[1801]: pam_unix(sudo:session): session closed for user root Mar 13 00:46:01.278072 sshd[1800]: Connection closed by 10.0.0.1 port 37178 Mar 13 00:46:01.309413 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Mar 13 00:46:01.372214 systemd[1]: sshd@8-10.0.0.89:22-10.0.0.1:37178.service: Deactivated successfully. Mar 13 00:46:01.390526 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 00:46:01.392005 systemd[1]: session-9.scope: Consumed 16.212s CPU time, 226.2M memory peak. Mar 13 00:46:01.401648 systemd-logind[1540]: Session 9 logged out. 
Waiting for processes to exit. Mar 13 00:46:01.416206 systemd-logind[1540]: Removed session 9. Mar 13 00:46:05.110314 systemd[1]: Created slice kubepods-besteffort-podde208e4a_5d62_4a08_a97e_ac154f9ba836.slice - libcontainer container kubepods-besteffort-podde208e4a_5d62_4a08_a97e_ac154f9ba836.slice. Mar 13 00:46:05.221491 systemd[1]: Created slice kubepods-besteffort-podb9cb7136_e274_46a9_a196_906f8c3b3ac7.slice - libcontainer container kubepods-besteffort-podb9cb7136_e274_46a9_a196_906f8c3b3ac7.slice. Mar 13 00:46:05.222644 kubelet[2813]: I0313 00:46:05.221596 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de208e4a-5d62-4a08-a97e-ac154f9ba836-tigera-ca-bundle\") pod \"calico-typha-75d5dbd46-4p99h\" (UID: \"de208e4a-5d62-4a08-a97e-ac154f9ba836\") " pod="calico-system/calico-typha-75d5dbd46-4p99h" Mar 13 00:46:05.222644 kubelet[2813]: I0313 00:46:05.221645 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/de208e4a-5d62-4a08-a97e-ac154f9ba836-typha-certs\") pod \"calico-typha-75d5dbd46-4p99h\" (UID: \"de208e4a-5d62-4a08-a97e-ac154f9ba836\") " pod="calico-system/calico-typha-75d5dbd46-4p99h" Mar 13 00:46:05.222644 kubelet[2813]: I0313 00:46:05.222004 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm8qj\" (UniqueName: \"kubernetes.io/projected/de208e4a-5d62-4a08-a97e-ac154f9ba836-kube-api-access-bm8qj\") pod \"calico-typha-75d5dbd46-4p99h\" (UID: \"de208e4a-5d62-4a08-a97e-ac154f9ba836\") " pod="calico-system/calico-typha-75d5dbd46-4p99h" Mar 13 00:46:05.323250 kubelet[2813]: I0313 00:46:05.322288 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b9cb7136-e274-46a9-a196-906f8c3b3ac7-var-lib-calico\") pod \"calico-node-vvr5x\" (UID: \"b9cb7136-e274-46a9-a196-906f8c3b3ac7\") " pod="calico-system/calico-node-vvr5x" Mar 13 00:46:05.323250 kubelet[2813]: I0313 00:46:05.322363 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b9cb7136-e274-46a9-a196-906f8c3b3ac7-node-certs\") pod \"calico-node-vvr5x\" (UID: \"b9cb7136-e274-46a9-a196-906f8c3b3ac7\") " pod="calico-system/calico-node-vvr5x" Mar 13 00:46:05.323250 kubelet[2813]: I0313 00:46:05.322390 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/b9cb7136-e274-46a9-a196-906f8c3b3ac7-nodeproc\") pod \"calico-node-vvr5x\" (UID: \"b9cb7136-e274-46a9-a196-906f8c3b3ac7\") " pod="calico-system/calico-node-vvr5x" Mar 13 00:46:05.323250 kubelet[2813]: I0313 00:46:05.322412 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b9cb7136-e274-46a9-a196-906f8c3b3ac7-policysync\") pod \"calico-node-vvr5x\" (UID: \"b9cb7136-e274-46a9-a196-906f8c3b3ac7\") " pod="calico-system/calico-node-vvr5x" Mar 13 00:46:05.323250 kubelet[2813]: I0313 00:46:05.322441 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/b9cb7136-e274-46a9-a196-906f8c3b3ac7-bpffs\") pod \"calico-node-vvr5x\" (UID: 
\"b9cb7136-e274-46a9-a196-906f8c3b3ac7\") " pod="calico-system/calico-node-vvr5x" Mar 13 00:46:05.326272 kubelet[2813]: I0313 00:46:05.322465 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b9cb7136-e274-46a9-a196-906f8c3b3ac7-var-run-calico\") pod \"calico-node-vvr5x\" (UID: \"b9cb7136-e274-46a9-a196-906f8c3b3ac7\") " pod="calico-system/calico-node-vvr5x" Mar 13 00:46:05.326272 kubelet[2813]: I0313 00:46:05.322493 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b9cb7136-e274-46a9-a196-906f8c3b3ac7-cni-log-dir\") pod \"calico-node-vvr5x\" (UID: \"b9cb7136-e274-46a9-a196-906f8c3b3ac7\") " pod="calico-system/calico-node-vvr5x" Mar 13 00:46:05.326272 kubelet[2813]: I0313 00:46:05.322518 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b9cb7136-e274-46a9-a196-906f8c3b3ac7-flexvol-driver-host\") pod \"calico-node-vvr5x\" (UID: \"b9cb7136-e274-46a9-a196-906f8c3b3ac7\") " pod="calico-system/calico-node-vvr5x" Mar 13 00:46:05.326272 kubelet[2813]: I0313 00:46:05.322563 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b9cb7136-e274-46a9-a196-906f8c3b3ac7-cni-bin-dir\") pod \"calico-node-vvr5x\" (UID: \"b9cb7136-e274-46a9-a196-906f8c3b3ac7\") " pod="calico-system/calico-node-vvr5x" Mar 13 00:46:05.326272 kubelet[2813]: I0313 00:46:05.322613 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88lv7\" (UniqueName: \"kubernetes.io/projected/b9cb7136-e274-46a9-a196-906f8c3b3ac7-kube-api-access-88lv7\") pod \"calico-node-vvr5x\" (UID: \"b9cb7136-e274-46a9-a196-906f8c3b3ac7\") " pod="calico-system/calico-node-vvr5x" Mar 13 00:46:05.326653 kubelet[2813]: I0313 00:46:05.322637 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9cb7136-e274-46a9-a196-906f8c3b3ac7-lib-modules\") pod \"calico-node-vvr5x\" (UID: \"b9cb7136-e274-46a9-a196-906f8c3b3ac7\") " pod="calico-system/calico-node-vvr5x" Mar 13 00:46:05.326653 kubelet[2813]: I0313 00:46:05.323000 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b9cb7136-e274-46a9-a196-906f8c3b3ac7-cni-net-dir\") pod \"calico-node-vvr5x\" (UID: \"b9cb7136-e274-46a9-a196-906f8c3b3ac7\") " pod="calico-system/calico-node-vvr5x" Mar 13 00:46:05.326653 kubelet[2813]: I0313 00:46:05.323031 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/b9cb7136-e274-46a9-a196-906f8c3b3ac7-sys-fs\") pod \"calico-node-vvr5x\" (UID: \"b9cb7136-e274-46a9-a196-906f8c3b3ac7\") " pod="calico-system/calico-node-vvr5x" Mar 13 00:46:05.326653 kubelet[2813]: I0313 00:46:05.323049 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9cb7136-e274-46a9-a196-906f8c3b3ac7-tigera-ca-bundle\") pod \"calico-node-vvr5x\" (UID: \"b9cb7136-e274-46a9-a196-906f8c3b3ac7\") " pod="calico-system/calico-node-vvr5x" 
Mar 13 00:46:05.326653 kubelet[2813]: I0313 00:46:05.323073 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9cb7136-e274-46a9-a196-906f8c3b3ac7-xtables-lock\") pod \"calico-node-vvr5x\" (UID: \"b9cb7136-e274-46a9-a196-906f8c3b3ac7\") " pod="calico-system/calico-node-vvr5x" Mar 13 00:46:05.352945 kubelet[2813]: E0313 00:46:05.350481 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9f9b" podUID="4567d070-24e0-470c-b37a-b10f7102b657" Mar 13 00:46:05.534400 kubelet[2813]: E0313 00:46:05.526646 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.534400 kubelet[2813]: W0313 00:46:05.534068 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.539995 kubelet[2813]: E0313 00:46:05.534539 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.543982 kubelet[2813]: E0313 00:46:05.543000 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.543982 kubelet[2813]: W0313 00:46:05.543099 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.543982 kubelet[2813]: E0313 00:46:05.543117 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.546398 kubelet[2813]: E0313 00:46:05.546202 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.546398 kubelet[2813]: W0313 00:46:05.546321 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.546398 kubelet[2813]: E0313 00:46:05.546358 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.547448 kubelet[2813]: E0313 00:46:05.547268 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.547943 kubelet[2813]: W0313 00:46:05.547607 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.547943 kubelet[2813]: E0313 00:46:05.547634 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:46:05.548461 kubelet[2813]: I0313 00:46:05.548349 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4567d070-24e0-470c-b37a-b10f7102b657-kubelet-dir\") pod \"csi-node-driver-k9f9b\" (UID: \"4567d070-24e0-470c-b37a-b10f7102b657\") " pod="calico-system/csi-node-driver-k9f9b" Mar 13 00:46:05.549646 kubelet[2813]: E0313 00:46:05.549208 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.549646 kubelet[2813]: W0313 00:46:05.549318 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.549646 kubelet[2813]: E0313 00:46:05.549335 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.556107 kubelet[2813]: I0313 00:46:05.551037 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4567d070-24e0-470c-b37a-b10f7102b657-registration-dir\") pod \"csi-node-driver-k9f9b\" (UID: \"4567d070-24e0-470c-b37a-b10f7102b657\") " pod="calico-system/csi-node-driver-k9f9b" Mar 13 00:46:05.556451 kubelet[2813]: E0313 00:46:05.556432 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.556569 kubelet[2813]: W0313 00:46:05.556504 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.556569 kubelet[2813]: E0313 00:46:05.556521 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.558280 kubelet[2813]: E0313 00:46:05.557552 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.558280 kubelet[2813]: W0313 00:46:05.557564 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.558280 kubelet[2813]: E0313 00:46:05.557575 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.559049 kubelet[2813]: E0313 00:46:05.559030 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.559235 kubelet[2813]: W0313 00:46:05.559216 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.559409 kubelet[2813]: E0313 00:46:05.559396 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:46:05.560252 kubelet[2813]: E0313 00:46:05.560235 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.560341 kubelet[2813]: W0313 00:46:05.560327 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.560528 kubelet[2813]: E0313 00:46:05.560511 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.561351 kubelet[2813]: I0313 00:46:05.561127 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4567d070-24e0-470c-b37a-b10f7102b657-socket-dir\") pod \"csi-node-driver-k9f9b\" (UID: \"4567d070-24e0-470c-b37a-b10f7102b657\") " pod="calico-system/csi-node-driver-k9f9b" Mar 13 00:46:05.562470 kubelet[2813]: E0313 00:46:05.562422 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.562470 kubelet[2813]: W0313 00:46:05.562438 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.562470 kubelet[2813]: E0313 00:46:05.562451 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.564389 kubelet[2813]: E0313 00:46:05.564348 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.564389 kubelet[2813]: W0313 00:46:05.564363 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.564389 kubelet[2813]: E0313 00:46:05.564374 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.569136 kubelet[2813]: E0313 00:46:05.568649 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.569136 kubelet[2813]: W0313 00:46:05.568991 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.569136 kubelet[2813]: E0313 00:46:05.569010 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:46:05.569570 kubelet[2813]: I0313 00:46:05.569167 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4567d070-24e0-470c-b37a-b10f7102b657-varrun\") pod \"csi-node-driver-k9f9b\" (UID: \"4567d070-24e0-470c-b37a-b10f7102b657\") " pod="calico-system/csi-node-driver-k9f9b" Mar 13 00:46:05.570060 kubelet[2813]: E0313 00:46:05.570023 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.570060 kubelet[2813]: W0313 00:46:05.570033 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.570060 kubelet[2813]: E0313 00:46:05.570045 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.571364 kubelet[2813]: E0313 00:46:05.571232 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.571364 kubelet[2813]: W0313 00:46:05.571336 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.571364 kubelet[2813]: E0313 00:46:05.571352 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.572200 kubelet[2813]: E0313 00:46:05.572068 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.572200 kubelet[2813]: W0313 00:46:05.572165 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.572200 kubelet[2813]: E0313 00:46:05.572176 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.573204 kubelet[2813]: E0313 00:46:05.573078 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.573204 kubelet[2813]: W0313 00:46:05.573162 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.573204 kubelet[2813]: E0313 00:46:05.573174 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:46:05.574416 kubelet[2813]: E0313 00:46:05.574175 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.574416 kubelet[2813]: W0313 00:46:05.574256 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.574416 kubelet[2813]: E0313 00:46:05.574267 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.574416 kubelet[2813]: I0313 00:46:05.574354 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvj76\" (UniqueName: \"kubernetes.io/projected/4567d070-24e0-470c-b37a-b10f7102b657-kube-api-access-cvj76\") pod \"csi-node-driver-k9f9b\" (UID: \"4567d070-24e0-470c-b37a-b10f7102b657\") " pod="calico-system/csi-node-driver-k9f9b" Mar 13 00:46:05.575329 kubelet[2813]: E0313 00:46:05.575223 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.575329 kubelet[2813]: W0313 00:46:05.575306 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.575329 kubelet[2813]: E0313 00:46:05.575316 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.576485 kubelet[2813]: E0313 00:46:05.576405 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.576485 kubelet[2813]: W0313 00:46:05.576481 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.576556 kubelet[2813]: E0313 00:46:05.576492 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.577095 kubelet[2813]: E0313 00:46:05.577064 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.577095 kubelet[2813]: W0313 00:46:05.577077 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.577095 kubelet[2813]: E0313 00:46:05.577087 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:46:05.678254 kubelet[2813]: E0313 00:46:05.678125 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.678534 kubelet[2813]: W0313 00:46:05.678253 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.678534 kubelet[2813]: E0313 00:46:05.678294 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.679247 kubelet[2813]: E0313 00:46:05.679191 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.679247 kubelet[2813]: W0313 00:46:05.679211 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.679247 kubelet[2813]: E0313 00:46:05.679228 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.679639 kubelet[2813]: E0313 00:46:05.679535 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.679639 kubelet[2813]: W0313 00:46:05.679545 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.679639 kubelet[2813]: E0313 00:46:05.679560 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.681212 kubelet[2813]: E0313 00:46:05.681046 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.681212 kubelet[2813]: W0313 00:46:05.681198 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.681392 kubelet[2813]: E0313 00:46:05.681214 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.682099 kubelet[2813]: E0313 00:46:05.681653 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.682099 kubelet[2813]: W0313 00:46:05.681946 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.682099 kubelet[2813]: E0313 00:46:05.681957 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:46:05.683275 kubelet[2813]: E0313 00:46:05.683034 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.683275 kubelet[2813]: W0313 00:46:05.683049 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.683275 kubelet[2813]: E0313 00:46:05.683059 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 13 00:46:05.733543 kubelet[2813]: E0313 00:46:05.733366 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:05.733543 kubelet[2813]: W0313 00:46:05.733475 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:05.733543 kubelet[2813]: E0313 00:46:05.733493 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:05.744244 kubelet[2813]: E0313 00:46:05.743651 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:05.752821 containerd[1555]: time="2026-03-13T00:46:05.752143277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75d5dbd46-4p99h,Uid:de208e4a-5d62-4a08-a97e-ac154f9ba836,Namespace:calico-system,Attempt:0,}" Mar 13 00:46:05.838477 containerd[1555]: time="2026-03-13T00:46:05.838275446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vvr5x,Uid:b9cb7136-e274-46a9-a196-906f8c3b3ac7,Namespace:calico-system,Attempt:0,}" Mar 13 00:46:05.963188 containerd[1555]: time="2026-03-13T00:46:05.963141053Z" level=info msg="connecting to shim 94d884624cec767ddc7e794504c2cea27680be3b7af21c4e9b0ac45d4c11801f" address="unix:///run/containerd/s/592545d98643822ec81b5480bb30440fd5bfb673475c93155ec4f1737b7b13f5" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:46:06.025049 containerd[1555]: time="2026-03-13T00:46:06.025008510Z" level=info msg="connecting to shim 0167e46815dab4dec8952ca9b56a205a1cee3eb7e49e39354f59ad1ddfaeac1c" address="unix:///run/containerd/s/d889b6ec43d8ce462f9ef7dc79ba0a1a4905e32cd3eb998991252d48462f9404" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:46:06.220355 systemd[1]: Started cri-containerd-0167e46815dab4dec8952ca9b56a205a1cee3eb7e49e39354f59ad1ddfaeac1c.scope - libcontainer container 0167e46815dab4dec8952ca9b56a205a1cee3eb7e49e39354f59ad1ddfaeac1c. Mar 13 00:46:06.225453 systemd[1]: Started cri-containerd-94d884624cec767ddc7e794504c2cea27680be3b7af21c4e9b0ac45d4c11801f.scope - libcontainer container 94d884624cec767ddc7e794504c2cea27680be3b7af21c4e9b0ac45d4c11801f. 
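The driver-call.go / plugins.go triplet above (repeated for each probe pass between 00:46:05.683 and 00:46:05.733) is the kubelet's FlexVolume prober at work: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it executes the driver binary with the single argument init and decodes stdout as a JSON status object. The nodeagent~uds/uds binary does not exist on this node, so stdout is empty and the JSON decode fails with exactly the logged "unexpected end of JSON input". A minimal Go sketch of both sides of that handshake (the struct shape follows the FlexVolume driver convention and is illustrative, not the kubelet's own type):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // driverStatus mirrors the JSON object a FlexVolume driver is expected
    // to print in response to "init".
    type driverStatus struct {
    	Status       string          `json:"status"` // "Success" or "Failure"
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	// What a working driver would write to stdout:
    	ok, _ := json.Marshal(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
    	fmt.Println(string(ok)) // {"status":"Success","capabilities":{"attach":false}}

    	// What the kubelet saw here instead: empty output, because the
    	// binary was never installed. Decoding it reproduces the logged error.
    	var st driverStatus
    	fmt.Println(json.Unmarshal([]byte(""), &st)) // unexpected end of JSON input
    }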
Mar 13 00:46:06.438110 containerd[1555]: time="2026-03-13T00:46:06.437574922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vvr5x,Uid:b9cb7136-e274-46a9-a196-906f8c3b3ac7,Namespace:calico-system,Attempt:0,} returns sandbox id \"0167e46815dab4dec8952ca9b56a205a1cee3eb7e49e39354f59ad1ddfaeac1c\"" Mar 13 00:46:06.458549 containerd[1555]: time="2026-03-13T00:46:06.457999863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 13 00:46:06.498637 containerd[1555]: time="2026-03-13T00:46:06.498075077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75d5dbd46-4p99h,Uid:de208e4a-5d62-4a08-a97e-ac154f9ba836,Namespace:calico-system,Attempt:0,} returns sandbox id \"94d884624cec767ddc7e794504c2cea27680be3b7af21c4e9b0ac45d4c11801f\"" Mar 13 00:46:06.500165 kubelet[2813]: E0313 00:46:06.499986 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:07.286123 kubelet[2813]: E0313 00:46:07.285551 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9f9b" podUID="4567d070-24e0-470c-b37a-b10f7102b657" Mar 13 00:46:07.510493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839408771.mount: Deactivated successfully. Mar 13 00:46:07.724092 containerd[1555]: time="2026-03-13T00:46:07.723090687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:07.724092 containerd[1555]: time="2026-03-13T00:46:07.724026463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 13 00:46:07.730153 containerd[1555]: time="2026-03-13T00:46:07.730117875Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:07.737586 containerd[1555]: time="2026-03-13T00:46:07.737561536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:07.740572 containerd[1555]: time="2026-03-13T00:46:07.739249390Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.281034337s" Mar 13 00:46:07.740629 containerd[1555]: time="2026-03-13T00:46:07.740546409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 13 00:46:07.748970 containerd[1555]: time="2026-03-13T00:46:07.748492422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 13 00:46:07.761097 containerd[1555]: time="2026-03-13T00:46:07.760608750Z" level=info msg="CreateContainer within 
sandbox \"0167e46815dab4dec8952ca9b56a205a1cee3eb7e49e39354f59ad1ddfaeac1c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 13 00:46:07.802850 containerd[1555]: time="2026-03-13T00:46:07.802306443Z" level=info msg="Container 0c559bc522a09a29e05adcdc604910133b63bd2bdc1f6a7388032d891c228c60: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:07.833522 containerd[1555]: time="2026-03-13T00:46:07.833318248Z" level=info msg="CreateContainer within sandbox \"0167e46815dab4dec8952ca9b56a205a1cee3eb7e49e39354f59ad1ddfaeac1c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0c559bc522a09a29e05adcdc604910133b63bd2bdc1f6a7388032d891c228c60\"" Mar 13 00:46:07.835641 containerd[1555]: time="2026-03-13T00:46:07.835569938Z" level=info msg="StartContainer for \"0c559bc522a09a29e05adcdc604910133b63bd2bdc1f6a7388032d891c228c60\"" Mar 13 00:46:07.847436 containerd[1555]: time="2026-03-13T00:46:07.847267228Z" level=info msg="connecting to shim 0c559bc522a09a29e05adcdc604910133b63bd2bdc1f6a7388032d891c228c60" address="unix:///run/containerd/s/d889b6ec43d8ce462f9ef7dc79ba0a1a4905e32cd3eb998991252d48462f9404" protocol=ttrpc version=3 Mar 13 00:46:07.934011 systemd[1]: Started cri-containerd-0c559bc522a09a29e05adcdc604910133b63bd2bdc1f6a7388032d891c228c60.scope - libcontainer container 0c559bc522a09a29e05adcdc604910133b63bd2bdc1f6a7388032d891c228c60. Mar 13 00:46:08.213570 containerd[1555]: time="2026-03-13T00:46:08.212866632Z" level=info msg="StartContainer for \"0c559bc522a09a29e05adcdc604910133b63bd2bdc1f6a7388032d891c228c60\" returns successfully" Mar 13 00:46:08.287521 kubelet[2813]: E0313 00:46:08.287047 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:08.287521 kubelet[2813]: W0313 00:46:08.287154 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:08.287521 kubelet[2813]: E0313 00:46:08.287181 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:46:08.289135 kubelet[2813]: E0313 00:46:08.289014 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:46:08.289135 kubelet[2813]: W0313 00:46:08.289119 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:46:08.289239 kubelet[2813]: E0313 00:46:08.289143 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:46:08.338542 systemd[1]: cri-containerd-0c559bc522a09a29e05adcdc604910133b63bd2bdc1f6a7388032d891c228c60.scope: Deactivated successfully. Mar 13 00:46:08.367222 containerd[1555]: time="2026-03-13T00:46:08.366632088Z" level=info msg="received container exit event container_id:\"0c559bc522a09a29e05adcdc604910133b63bd2bdc1f6a7388032d891c228c60\" id:\"0c559bc522a09a29e05adcdc604910133b63bd2bdc1f6a7388032d891c228c60\" pid:3397 exited_at:{seconds:1773362768 nanos:360233714}" Mar 13 00:46:08.464238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c559bc522a09a29e05adcdc604910133b63bd2bdc1f6a7388032d891c228c60-rootfs.mount: Deactivated successfully. Mar 13 00:46:09.286053 kubelet[2813]: E0313 00:46:09.285573 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9f9b" podUID="4567d070-24e0-470c-b37a-b10f7102b657" Mar 13 00:46:11.287902 kubelet[2813]: E0313 00:46:11.287509 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9f9b" podUID="4567d070-24e0-470c-b37a-b10f7102b657" Mar 13 00:46:11.752173 containerd[1555]: time="2026-03-13T00:46:11.751972652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:11.754376 containerd[1555]: time="2026-03-13T00:46:11.754258739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 13 00:46:11.757569 containerd[1555]: time="2026-03-13T00:46:11.757436857Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:11.762118 containerd[1555]: time="2026-03-13T00:46:11.762094257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:11.763290 containerd[1555]: time="2026-03-13T00:46:11.763132175Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 4.01451953s" Mar 13 00:46:11.763290 containerd[1555]: time="2026-03-13T00:46:11.763162350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 13 00:46:11.771011 containerd[1555]: time="2026-03-13T00:46:11.766605207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 13 00:46:11.816634 containerd[1555]: time="2026-03-13T00:46:11.816229752Z" level=info msg="CreateContainer within sandbox \"94d884624cec767ddc7e794504c2cea27680be3b7af21c4e9b0ac45d4c11801f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 13 00:46:11.836609 
containerd[1555]: time="2026-03-13T00:46:11.836569951Z" level=info msg="Container e0cf24af785a3708e3e997398d7d93347430c4ca3e1d9558eda09e200b21bff3: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:11.860053 containerd[1555]: time="2026-03-13T00:46:11.859548820Z" level=info msg="CreateContainer within sandbox \"94d884624cec767ddc7e794504c2cea27680be3b7af21c4e9b0ac45d4c11801f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e0cf24af785a3708e3e997398d7d93347430c4ca3e1d9558eda09e200b21bff3\"" Mar 13 00:46:11.862049 containerd[1555]: time="2026-03-13T00:46:11.861645959Z" level=info msg="StartContainer for \"e0cf24af785a3708e3e997398d7d93347430c4ca3e1d9558eda09e200b21bff3\"" Mar 13 00:46:11.864539 containerd[1555]: time="2026-03-13T00:46:11.864158247Z" level=info msg="connecting to shim e0cf24af785a3708e3e997398d7d93347430c4ca3e1d9558eda09e200b21bff3" address="unix:///run/containerd/s/592545d98643822ec81b5480bb30440fd5bfb673475c93155ec4f1737b7b13f5" protocol=ttrpc version=3 Mar 13 00:46:11.924258 systemd[1]: Started cri-containerd-e0cf24af785a3708e3e997398d7d93347430c4ca3e1d9558eda09e200b21bff3.scope - libcontainer container e0cf24af785a3708e3e997398d7d93347430c4ca3e1d9558eda09e200b21bff3. Mar 13 00:46:12.099215 containerd[1555]: time="2026-03-13T00:46:12.099018330Z" level=info msg="StartContainer for \"e0cf24af785a3708e3e997398d7d93347430c4ca3e1d9558eda09e200b21bff3\" returns successfully" Mar 13 00:46:12.307205 kubelet[2813]: E0313 00:46:12.305515 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:13.286615 kubelet[2813]: E0313 00:46:13.286189 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9f9b" podUID="4567d070-24e0-470c-b37a-b10f7102b657" Mar 13 00:46:13.307902 kubelet[2813]: E0313 00:46:13.307553 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:13.343119 kubelet[2813]: I0313 00:46:13.342364 2813 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-75d5dbd46-4p99h" podStartSLOduration=4.081037191 podStartE2EDuration="9.341646572s" podCreationTimestamp="2026-03-13 00:46:04 +0000 UTC" firstStartedPulling="2026-03-13 00:46:06.503950366 +0000 UTC m=+39.534735046" lastFinishedPulling="2026-03-13 00:46:11.764559747 +0000 UTC m=+44.795344427" observedRunningTime="2026-03-13 00:46:12.350266459 +0000 UTC m=+45.381051139" watchObservedRunningTime="2026-03-13 00:46:13.341646572 +0000 UTC m=+46.372431282" Mar 13 00:46:14.311944 kubelet[2813]: E0313 00:46:14.311430 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:15.287076 kubelet[2813]: E0313 00:46:15.286375 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9f9b" podUID="4567d070-24e0-470c-b37a-b10f7102b657" Mar 13 
00:46:17.285462 kubelet[2813]: E0313 00:46:17.285408 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9f9b" podUID="4567d070-24e0-470c-b37a-b10f7102b657" Mar 13 00:46:19.293632 kubelet[2813]: E0313 00:46:19.293500 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9f9b" podUID="4567d070-24e0-470c-b37a-b10f7102b657" Mar 13 00:46:21.290286 kubelet[2813]: E0313 00:46:21.290248 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9f9b" podUID="4567d070-24e0-470c-b37a-b10f7102b657" Mar 13 00:46:22.833532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3973547253.mount: Deactivated successfully. Mar 13 00:46:23.119575 containerd[1555]: time="2026-03-13T00:46:23.119414939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:23.130223 containerd[1555]: time="2026-03-13T00:46:23.130087333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 13 00:46:23.133260 containerd[1555]: time="2026-03-13T00:46:23.133157625Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:23.137256 containerd[1555]: time="2026-03-13T00:46:23.137155249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:23.175269 containerd[1555]: time="2026-03-13T00:46:23.175210388Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 11.408569464s" Mar 13 00:46:23.175819 containerd[1555]: time="2026-03-13T00:46:23.175564677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 13 00:46:23.187145 containerd[1555]: time="2026-03-13T00:46:23.187034082Z" level=info msg="CreateContainer within sandbox \"0167e46815dab4dec8952ca9b56a205a1cee3eb7e49e39354f59ad1ddfaeac1c\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 13 00:46:23.225119 containerd[1555]: time="2026-03-13T00:46:23.224966437Z" level=info msg="Container feea466f7853c9f3056f0d38c359f16a0101f30026aa2d732de946a2d4af968b: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:23.280070 containerd[1555]: time="2026-03-13T00:46:23.279657580Z" level=info msg="CreateContainer within sandbox 
\"0167e46815dab4dec8952ca9b56a205a1cee3eb7e49e39354f59ad1ddfaeac1c\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"feea466f7853c9f3056f0d38c359f16a0101f30026aa2d732de946a2d4af968b\"" Mar 13 00:46:23.282123 containerd[1555]: time="2026-03-13T00:46:23.282012912Z" level=info msg="StartContainer for \"feea466f7853c9f3056f0d38c359f16a0101f30026aa2d732de946a2d4af968b\"" Mar 13 00:46:23.284548 containerd[1555]: time="2026-03-13T00:46:23.284457349Z" level=info msg="connecting to shim feea466f7853c9f3056f0d38c359f16a0101f30026aa2d732de946a2d4af968b" address="unix:///run/containerd/s/d889b6ec43d8ce462f9ef7dc79ba0a1a4905e32cd3eb998991252d48462f9404" protocol=ttrpc version=3 Mar 13 00:46:23.296336 kubelet[2813]: E0313 00:46:23.296278 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9f9b" podUID="4567d070-24e0-470c-b37a-b10f7102b657" Mar 13 00:46:23.330926 systemd[1]: Started cri-containerd-feea466f7853c9f3056f0d38c359f16a0101f30026aa2d732de946a2d4af968b.scope - libcontainer container feea466f7853c9f3056f0d38c359f16a0101f30026aa2d732de946a2d4af968b. Mar 13 00:46:23.471852 containerd[1555]: time="2026-03-13T00:46:23.471564714Z" level=info msg="StartContainer for \"feea466f7853c9f3056f0d38c359f16a0101f30026aa2d732de946a2d4af968b\" returns successfully" Mar 13 00:46:23.549218 systemd[1]: cri-containerd-feea466f7853c9f3056f0d38c359f16a0101f30026aa2d732de946a2d4af968b.scope: Deactivated successfully. Mar 13 00:46:23.555341 containerd[1555]: time="2026-03-13T00:46:23.555305558Z" level=info msg="received container exit event container_id:\"feea466f7853c9f3056f0d38c359f16a0101f30026aa2d732de946a2d4af968b\" id:\"feea466f7853c9f3056f0d38c359f16a0101f30026aa2d732de946a2d4af968b\" pid:3525 exited_at:{seconds:1773362783 nanos:555050051}" Mar 13 00:46:23.833974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-feea466f7853c9f3056f0d38c359f16a0101f30026aa2d732de946a2d4af968b-rootfs.mount: Deactivated successfully. 
Mar 13 00:46:24.380126 containerd[1555]: time="2026-03-13T00:46:24.379819403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 13 00:46:25.287191 kubelet[2813]: E0313 00:46:25.285440 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9f9b" podUID="4567d070-24e0-470c-b37a-b10f7102b657" Mar 13 00:46:27.285363 kubelet[2813]: E0313 00:46:27.285079 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9f9b" podUID="4567d070-24e0-470c-b37a-b10f7102b657" Mar 13 00:46:29.285000 kubelet[2813]: E0313 00:46:29.284574 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9f9b" podUID="4567d070-24e0-470c-b37a-b10f7102b657" Mar 13 00:46:30.945783 containerd[1555]: time="2026-03-13T00:46:30.945599336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:30.947173 containerd[1555]: time="2026-03-13T00:46:30.946978563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 13 00:46:30.948808 containerd[1555]: time="2026-03-13T00:46:30.948608845Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:30.953447 containerd[1555]: time="2026-03-13T00:46:30.953218005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:30.953774 containerd[1555]: time="2026-03-13T00:46:30.953620461Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 6.573755153s" Mar 13 00:46:30.953854 containerd[1555]: time="2026-03-13T00:46:30.953651359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 13 00:46:30.972169 containerd[1555]: time="2026-03-13T00:46:30.972049709Z" level=info msg="CreateContainer within sandbox \"0167e46815dab4dec8952ca9b56a205a1cee3eb7e49e39354f59ad1ddfaeac1c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 13 00:46:30.987839 containerd[1555]: time="2026-03-13T00:46:30.987491009Z" level=info msg="Container 0f2de062441ae6a484c4130c85c785a9a4dc537dc71c96e526186040b3476187: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:31.002600 containerd[1555]: time="2026-03-13T00:46:31.002526052Z" level=info msg="CreateContainer within sandbox 
\"0167e46815dab4dec8952ca9b56a205a1cee3eb7e49e39354f59ad1ddfaeac1c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0f2de062441ae6a484c4130c85c785a9a4dc537dc71c96e526186040b3476187\"" Mar 13 00:46:31.003812 containerd[1555]: time="2026-03-13T00:46:31.003637480Z" level=info msg="StartContainer for \"0f2de062441ae6a484c4130c85c785a9a4dc537dc71c96e526186040b3476187\"" Mar 13 00:46:31.006770 containerd[1555]: time="2026-03-13T00:46:31.005470483Z" level=info msg="connecting to shim 0f2de062441ae6a484c4130c85c785a9a4dc537dc71c96e526186040b3476187" address="unix:///run/containerd/s/d889b6ec43d8ce462f9ef7dc79ba0a1a4905e32cd3eb998991252d48462f9404" protocol=ttrpc version=3 Mar 13 00:46:31.045102 systemd[1]: Started cri-containerd-0f2de062441ae6a484c4130c85c785a9a4dc537dc71c96e526186040b3476187.scope - libcontainer container 0f2de062441ae6a484c4130c85c785a9a4dc537dc71c96e526186040b3476187. Mar 13 00:46:31.211099 containerd[1555]: time="2026-03-13T00:46:31.210796698Z" level=info msg="StartContainer for \"0f2de062441ae6a484c4130c85c785a9a4dc537dc71c96e526186040b3476187\" returns successfully" Mar 13 00:46:31.284911 kubelet[2813]: E0313 00:46:31.284483 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9f9b" podUID="4567d070-24e0-470c-b37a-b10f7102b657" Mar 13 00:46:32.104238 systemd[1]: cri-containerd-0f2de062441ae6a484c4130c85c785a9a4dc537dc71c96e526186040b3476187.scope: Deactivated successfully. Mar 13 00:46:32.105554 systemd[1]: cri-containerd-0f2de062441ae6a484c4130c85c785a9a4dc537dc71c96e526186040b3476187.scope: Consumed 1.030s CPU time, 177.5M memory peak, 4.2M read from disk, 177M written to disk. Mar 13 00:46:32.109144 containerd[1555]: time="2026-03-13T00:46:32.108967172Z" level=info msg="received container exit event container_id:\"0f2de062441ae6a484c4130c85c785a9a4dc537dc71c96e526186040b3476187\" id:\"0f2de062441ae6a484c4130c85c785a9a4dc537dc71c96e526186040b3476187\" pid:3585 exited_at:{seconds:1773362792 nanos:107966796}" Mar 13 00:46:32.150082 kubelet[2813]: I0313 00:46:32.149645 2813 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 13 00:46:32.180862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f2de062441ae6a484c4130c85c785a9a4dc537dc71c96e526186040b3476187-rootfs.mount: Deactivated successfully. Mar 13 00:46:32.229098 systemd[1]: Created slice kubepods-burstable-pod9747dbd6_53d4_4d32_a6d8_4dc1ff7f2068.slice - libcontainer container kubepods-burstable-pod9747dbd6_53d4_4d32_a6d8_4dc1ff7f2068.slice. Mar 13 00:46:32.244276 systemd[1]: Created slice kubepods-besteffort-pod435654c3_0c48_48a7_b99b_f731c72c3587.slice - libcontainer container kubepods-besteffort-pod435654c3_0c48_48a7_b99b_f731c72c3587.slice. Mar 13 00:46:32.257365 systemd[1]: Created slice kubepods-besteffort-poded18a379_1928_4da8_b306_c7990ac89b7b.slice - libcontainer container kubepods-besteffort-poded18a379_1928_4da8_b306_c7990ac89b7b.slice. Mar 13 00:46:32.268177 systemd[1]: Created slice kubepods-burstable-pod4f6b8b07_c9e5_4a7b_9450_10601e17c08a.slice - libcontainer container kubepods-burstable-pod4f6b8b07_c9e5_4a7b_9450_10601e17c08a.slice. 
Mar 13 00:46:32.279037 systemd[1]: Created slice kubepods-besteffort-pod878d8c46_17fa_41ad_876f_3483a84be9ce.slice - libcontainer container kubepods-besteffort-pod878d8c46_17fa_41ad_876f_3483a84be9ce.slice. Mar 13 00:46:32.286410 systemd[1]: Created slice kubepods-besteffort-pod296a1e16_2486_488b_be64_98f75fb175a3.slice - libcontainer container kubepods-besteffort-pod296a1e16_2486_488b_be64_98f75fb175a3.slice. Mar 13 00:46:32.299273 systemd[1]: Created slice kubepods-besteffort-podd6a606df_05c4_4e52_9fcc_d5ce0cedadc2.slice - libcontainer container kubepods-besteffort-podd6a606df_05c4_4e52_9fcc_d5ce0cedadc2.slice. Mar 13 00:46:32.337402 kubelet[2813]: I0313 00:46:32.337156 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s65hp\" (UniqueName: \"kubernetes.io/projected/9747dbd6-53d4-4d32-a6d8-4dc1ff7f2068-kube-api-access-s65hp\") pod \"coredns-7d764666f9-9zfs9\" (UID: \"9747dbd6-53d4-4d32-a6d8-4dc1ff7f2068\") " pod="kube-system/coredns-7d764666f9-9zfs9" Mar 13 00:46:32.337992 kubelet[2813]: I0313 00:46:32.337562 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm2bv\" (UniqueName: \"kubernetes.io/projected/d6a606df-05c4-4e52-9fcc-d5ce0cedadc2-kube-api-access-gm2bv\") pod \"calico-apiserver-565bc9487f-2bzm6\" (UID: \"d6a606df-05c4-4e52-9fcc-d5ce0cedadc2\") " pod="calico-system/calico-apiserver-565bc9487f-2bzm6" Mar 13 00:46:32.338023 kubelet[2813]: I0313 00:46:32.337994 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/435654c3-0c48-48a7-b99b-f731c72c3587-nginx-config\") pod \"whisker-69dc4c9c9d-glfmk\" (UID: \"435654c3-0c48-48a7-b99b-f731c72c3587\") " pod="calico-system/whisker-69dc4c9c9d-glfmk" Mar 13 00:46:32.338350 kubelet[2813]: I0313 00:46:32.338169 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/296a1e16-2486-488b-be64-98f75fb175a3-goldmane-key-pair\") pod \"goldmane-9f7667bb8-tdkkp\" (UID: \"296a1e16-2486-488b-be64-98f75fb175a3\") " pod="calico-system/goldmane-9f7667bb8-tdkkp" Mar 13 00:46:32.338350 kubelet[2813]: I0313 00:46:32.338264 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed18a379-1928-4da8-b306-c7990ac89b7b-tigera-ca-bundle\") pod \"calico-kube-controllers-7df5f8b75f-hdlkc\" (UID: \"ed18a379-1928-4da8-b306-c7990ac89b7b\") " pod="calico-system/calico-kube-controllers-7df5f8b75f-hdlkc" Mar 13 00:46:32.338350 kubelet[2813]: I0313 00:46:32.338305 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9747dbd6-53d4-4d32-a6d8-4dc1ff7f2068-config-volume\") pod \"coredns-7d764666f9-9zfs9\" (UID: \"9747dbd6-53d4-4d32-a6d8-4dc1ff7f2068\") " pod="kube-system/coredns-7d764666f9-9zfs9" Mar 13 00:46:32.338350 kubelet[2813]: I0313 00:46:32.338331 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/296a1e16-2486-488b-be64-98f75fb175a3-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-tdkkp\" (UID: \"296a1e16-2486-488b-be64-98f75fb175a3\") " pod="calico-system/goldmane-9f7667bb8-tdkkp" Mar 13 00:46:32.338566 kubelet[2813]: I0313 
00:46:32.338353 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/435654c3-0c48-48a7-b99b-f731c72c3587-whisker-backend-key-pair\") pod \"whisker-69dc4c9c9d-glfmk\" (UID: \"435654c3-0c48-48a7-b99b-f731c72c3587\") " pod="calico-system/whisker-69dc4c9c9d-glfmk" Mar 13 00:46:32.338566 kubelet[2813]: I0313 00:46:32.338376 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7wkd\" (UniqueName: \"kubernetes.io/projected/4f6b8b07-c9e5-4a7b-9450-10601e17c08a-kube-api-access-r7wkd\") pod \"coredns-7d764666f9-8blmg\" (UID: \"4f6b8b07-c9e5-4a7b-9450-10601e17c08a\") " pod="kube-system/coredns-7d764666f9-8blmg" Mar 13 00:46:32.338566 kubelet[2813]: I0313 00:46:32.338398 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pthdz\" (UniqueName: \"kubernetes.io/projected/878d8c46-17fa-41ad-876f-3483a84be9ce-kube-api-access-pthdz\") pod \"calico-apiserver-565bc9487f-vs2f7\" (UID: \"878d8c46-17fa-41ad-876f-3483a84be9ce\") " pod="calico-system/calico-apiserver-565bc9487f-vs2f7" Mar 13 00:46:32.338566 kubelet[2813]: I0313 00:46:32.338426 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rzfn\" (UniqueName: \"kubernetes.io/projected/296a1e16-2486-488b-be64-98f75fb175a3-kube-api-access-8rzfn\") pod \"goldmane-9f7667bb8-tdkkp\" (UID: \"296a1e16-2486-488b-be64-98f75fb175a3\") " pod="calico-system/goldmane-9f7667bb8-tdkkp" Mar 13 00:46:32.338566 kubelet[2813]: I0313 00:46:32.338451 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/435654c3-0c48-48a7-b99b-f731c72c3587-whisker-ca-bundle\") pod \"whisker-69dc4c9c9d-glfmk\" (UID: \"435654c3-0c48-48a7-b99b-f731c72c3587\") " pod="calico-system/whisker-69dc4c9c9d-glfmk" Mar 13 00:46:32.338877 kubelet[2813]: I0313 00:46:32.338478 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d6a606df-05c4-4e52-9fcc-d5ce0cedadc2-calico-apiserver-certs\") pod \"calico-apiserver-565bc9487f-2bzm6\" (UID: \"d6a606df-05c4-4e52-9fcc-d5ce0cedadc2\") " pod="calico-system/calico-apiserver-565bc9487f-2bzm6" Mar 13 00:46:32.338877 kubelet[2813]: I0313 00:46:32.338498 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjf86\" (UniqueName: \"kubernetes.io/projected/435654c3-0c48-48a7-b99b-f731c72c3587-kube-api-access-bjf86\") pod \"whisker-69dc4c9c9d-glfmk\" (UID: \"435654c3-0c48-48a7-b99b-f731c72c3587\") " pod="calico-system/whisker-69dc4c9c9d-glfmk" Mar 13 00:46:32.338877 kubelet[2813]: I0313 00:46:32.338521 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgnjm\" (UniqueName: \"kubernetes.io/projected/ed18a379-1928-4da8-b306-c7990ac89b7b-kube-api-access-fgnjm\") pod \"calico-kube-controllers-7df5f8b75f-hdlkc\" (UID: \"ed18a379-1928-4da8-b306-c7990ac89b7b\") " pod="calico-system/calico-kube-controllers-7df5f8b75f-hdlkc" Mar 13 00:46:32.338877 kubelet[2813]: I0313 00:46:32.338543 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/4f6b8b07-c9e5-4a7b-9450-10601e17c08a-config-volume\") pod \"coredns-7d764666f9-8blmg\" (UID: \"4f6b8b07-c9e5-4a7b-9450-10601e17c08a\") " pod="kube-system/coredns-7d764666f9-8blmg" Mar 13 00:46:32.338877 kubelet[2813]: I0313 00:46:32.338568 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/878d8c46-17fa-41ad-876f-3483a84be9ce-calico-apiserver-certs\") pod \"calico-apiserver-565bc9487f-vs2f7\" (UID: \"878d8c46-17fa-41ad-876f-3483a84be9ce\") " pod="calico-system/calico-apiserver-565bc9487f-vs2f7" Mar 13 00:46:32.339000 kubelet[2813]: I0313 00:46:32.338812 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/296a1e16-2486-488b-be64-98f75fb175a3-config\") pod \"goldmane-9f7667bb8-tdkkp\" (UID: \"296a1e16-2486-488b-be64-98f75fb175a3\") " pod="calico-system/goldmane-9f7667bb8-tdkkp" Mar 13 00:46:32.512634 containerd[1555]: time="2026-03-13T00:46:32.512255277Z" level=info msg="CreateContainer within sandbox \"0167e46815dab4dec8952ca9b56a205a1cee3eb7e49e39354f59ad1ddfaeac1c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 13 00:46:32.534489 containerd[1555]: time="2026-03-13T00:46:32.534382329Z" level=info msg="Container b873fdd3230878c3507908b6d61aa4fac9fa7077bd2732d1e2de97d3f7ee39a5: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:32.541875 kubelet[2813]: E0313 00:46:32.541608 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:32.542620 containerd[1555]: time="2026-03-13T00:46:32.542484432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-9zfs9,Uid:9747dbd6-53d4-4d32-a6d8-4dc1ff7f2068,Namespace:kube-system,Attempt:0,}" Mar 13 00:46:32.551956 containerd[1555]: time="2026-03-13T00:46:32.551563864Z" level=info msg="CreateContainer within sandbox \"0167e46815dab4dec8952ca9b56a205a1cee3eb7e49e39354f59ad1ddfaeac1c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b873fdd3230878c3507908b6d61aa4fac9fa7077bd2732d1e2de97d3f7ee39a5\"" Mar 13 00:46:32.552546 containerd[1555]: time="2026-03-13T00:46:32.552461546Z" level=info msg="StartContainer for \"b873fdd3230878c3507908b6d61aa4fac9fa7077bd2732d1e2de97d3f7ee39a5\"" Mar 13 00:46:32.554276 containerd[1555]: time="2026-03-13T00:46:32.554137496Z" level=info msg="connecting to shim b873fdd3230878c3507908b6d61aa4fac9fa7077bd2732d1e2de97d3f7ee39a5" address="unix:///run/containerd/s/d889b6ec43d8ce462f9ef7dc79ba0a1a4905e32cd3eb998991252d48462f9404" protocol=ttrpc version=3 Mar 13 00:46:32.554600 containerd[1555]: time="2026-03-13T00:46:32.554507224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69dc4c9c9d-glfmk,Uid:435654c3-0c48-48a7-b99b-f731c72c3587,Namespace:calico-system,Attempt:0,}" Mar 13 00:46:32.575535 containerd[1555]: time="2026-03-13T00:46:32.575226738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7df5f8b75f-hdlkc,Uid:ed18a379-1928-4da8-b306-c7990ac89b7b,Namespace:calico-system,Attempt:0,}" Mar 13 00:46:32.578201 kubelet[2813]: E0313 00:46:32.578081 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 
00:46:32.588098 containerd[1555]: time="2026-03-13T00:46:32.587867219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8blmg,Uid:4f6b8b07-c9e5-4a7b-9450-10601e17c08a,Namespace:kube-system,Attempt:0,}"
Mar 13 00:46:32.592299 containerd[1555]: time="2026-03-13T00:46:32.592171883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565bc9487f-vs2f7,Uid:878d8c46-17fa-41ad-876f-3483a84be9ce,Namespace:calico-system,Attempt:0,}"
Mar 13 00:46:32.599818 containerd[1555]: time="2026-03-13T00:46:32.599468125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-tdkkp,Uid:296a1e16-2486-488b-be64-98f75fb175a3,Namespace:calico-system,Attempt:0,}"
Mar 13 00:46:32.609832 containerd[1555]: time="2026-03-13T00:46:32.609792966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565bc9487f-2bzm6,Uid:d6a606df-05c4-4e52-9fcc-d5ce0cedadc2,Namespace:calico-system,Attempt:0,}"
Mar 13 00:46:32.616042 systemd[1]: Started cri-containerd-b873fdd3230878c3507908b6d61aa4fac9fa7077bd2732d1e2de97d3f7ee39a5.scope - libcontainer container b873fdd3230878c3507908b6d61aa4fac9fa7077bd2732d1e2de97d3f7ee39a5.
Mar 13 00:46:32.879468 containerd[1555]: time="2026-03-13T00:46:32.879370845Z" level=info msg="StartContainer for \"b873fdd3230878c3507908b6d61aa4fac9fa7077bd2732d1e2de97d3f7ee39a5\" returns successfully"
Mar 13 00:46:32.931362 containerd[1555]: time="2026-03-13T00:46:32.931318170Z" level=error msg="Failed to destroy network for sandbox \"cf2a7d5d83a5144bf2d9e03374d68fd99ab688767cccb6f9620d56ff7b2f539c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:32.939358 containerd[1555]: time="2026-03-13T00:46:32.939317071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-9zfs9,Uid:9747dbd6-53d4-4d32-a6d8-4dc1ff7f2068,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf2a7d5d83a5144bf2d9e03374d68fd99ab688767cccb6f9620d56ff7b2f539c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:32.944903 containerd[1555]: time="2026-03-13T00:46:32.944630794Z" level=error msg="Failed to destroy network for sandbox \"481554f446b77871fcce587f316b6425bb58d7113336458e605b8c211ce2dcd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:32.948563 containerd[1555]: time="2026-03-13T00:46:32.948468177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565bc9487f-2bzm6,Uid:d6a606df-05c4-4e52-9fcc-d5ce0cedadc2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"481554f446b77871fcce587f316b6425bb58d7113336458e605b8c211ce2dcd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:32.949433 kubelet[2813]: E0313 00:46:32.949275 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf2a7d5d83a5144bf2d9e03374d68fd99ab688767cccb6f9620d56ff7b2f539c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:32.949816 kubelet[2813]: E0313 00:46:32.949496 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf2a7d5d83a5144bf2d9e03374d68fd99ab688767cccb6f9620d56ff7b2f539c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-9zfs9"
Mar 13 00:46:32.950185 kubelet[2813]: E0313 00:46:32.949627 2813 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf2a7d5d83a5144bf2d9e03374d68fd99ab688767cccb6f9620d56ff7b2f539c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-9zfs9"
Mar 13 00:46:32.950926 kubelet[2813]: E0313 00:46:32.950848 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-9zfs9_kube-system(9747dbd6-53d4-4d32-a6d8-4dc1ff7f2068)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-9zfs9_kube-system(9747dbd6-53d4-4d32-a6d8-4dc1ff7f2068)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf2a7d5d83a5144bf2d9e03374d68fd99ab688767cccb6f9620d56ff7b2f539c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-9zfs9" podUID="9747dbd6-53d4-4d32-a6d8-4dc1ff7f2068"
Mar 13 00:46:32.952480 kubelet[2813]: E0313 00:46:32.952414 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"481554f446b77871fcce587f316b6425bb58d7113336458e605b8c211ce2dcd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:32.952526 kubelet[2813]: E0313 00:46:32.952487 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"481554f446b77871fcce587f316b6425bb58d7113336458e605b8c211ce2dcd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-565bc9487f-2bzm6"
Mar 13 00:46:32.952526 kubelet[2813]: E0313 00:46:32.952502 2813 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"481554f446b77871fcce587f316b6425bb58d7113336458e605b8c211ce2dcd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-565bc9487f-2bzm6"
Mar 13 00:46:32.952569 kubelet[2813]: E0313 00:46:32.952533 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-565bc9487f-2bzm6_calico-system(d6a606df-05c4-4e52-9fcc-d5ce0cedadc2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-565bc9487f-2bzm6_calico-system(d6a606df-05c4-4e52-9fcc-d5ce0cedadc2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"481554f446b77871fcce587f316b6425bb58d7113336458e605b8c211ce2dcd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-565bc9487f-2bzm6" podUID="d6a606df-05c4-4e52-9fcc-d5ce0cedadc2"
Mar 13 00:46:32.958205 containerd[1555]: time="2026-03-13T00:46:32.957999152Z" level=error msg="Failed to destroy network for sandbox \"9359f8be733f0ea30217e0adf50081724aad87d78e05a81b2b50ec736befb84e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:32.964534 containerd[1555]: time="2026-03-13T00:46:32.964013438Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8blmg,Uid:4f6b8b07-c9e5-4a7b-9450-10601e17c08a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9359f8be733f0ea30217e0adf50081724aad87d78e05a81b2b50ec736befb84e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:32.965004 kubelet[2813]: E0313 00:46:32.964914 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9359f8be733f0ea30217e0adf50081724aad87d78e05a81b2b50ec736befb84e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:32.965059 kubelet[2813]: E0313 00:46:32.965025 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9359f8be733f0ea30217e0adf50081724aad87d78e05a81b2b50ec736befb84e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-8blmg"
Mar 13 00:46:32.965344 kubelet[2813]: E0313 00:46:32.965043 2813 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9359f8be733f0ea30217e0adf50081724aad87d78e05a81b2b50ec736befb84e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-8blmg"
Mar 13 00:46:32.965470 kubelet[2813]: E0313 00:46:32.965401 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-8blmg_kube-system(4f6b8b07-c9e5-4a7b-9450-10601e17c08a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-8blmg_kube-system(4f6b8b07-c9e5-4a7b-9450-10601e17c08a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9359f8be733f0ea30217e0adf50081724aad87d78e05a81b2b50ec736befb84e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-8blmg" podUID="4f6b8b07-c9e5-4a7b-9450-10601e17c08a"
Mar 13 00:46:32.976182 containerd[1555]: time="2026-03-13T00:46:32.975958773Z" level=error msg="Failed to destroy network for sandbox \"ba58a87d2184ceaecd80ccf06506254bfa78c7ff2f5ed0564099eed528de8d71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:32.979414 containerd[1555]: time="2026-03-13T00:46:32.979384962Z" level=error msg="Failed to destroy network for sandbox \"79d449bdc98e95050f2f1f18e1faab4d3a2ecf2a89d8b07e1e19378065bead32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:32.979852 containerd[1555]: time="2026-03-13T00:46:32.979483768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565bc9487f-vs2f7,Uid:878d8c46-17fa-41ad-876f-3483a84be9ce,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba58a87d2184ceaecd80ccf06506254bfa78c7ff2f5ed0564099eed528de8d71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:32.980474 kubelet[2813]: E0313 00:46:32.980406 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba58a87d2184ceaecd80ccf06506254bfa78c7ff2f5ed0564099eed528de8d71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:32.980474 kubelet[2813]: E0313 00:46:32.980458 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba58a87d2184ceaecd80ccf06506254bfa78c7ff2f5ed0564099eed528de8d71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-565bc9487f-vs2f7"
Mar 13 00:46:32.980563 kubelet[2813]: E0313 00:46:32.980481 2813 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba58a87d2184ceaecd80ccf06506254bfa78c7ff2f5ed0564099eed528de8d71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-565bc9487f-vs2f7"
Mar 13 00:46:32.980810 kubelet[2813]: E0313 00:46:32.980761 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-565bc9487f-vs2f7_calico-system(878d8c46-17fa-41ad-876f-3483a84be9ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-565bc9487f-vs2f7_calico-system(878d8c46-17fa-41ad-876f-3483a84be9ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba58a87d2184ceaecd80ccf06506254bfa78c7ff2f5ed0564099eed528de8d71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-565bc9487f-vs2f7" podUID="878d8c46-17fa-41ad-876f-3483a84be9ce"
Mar 13 00:46:32.982275 containerd[1555]: time="2026-03-13T00:46:32.982246720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69dc4c9c9d-glfmk,Uid:435654c3-0c48-48a7-b99b-f731c72c3587,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d449bdc98e95050f2f1f18e1faab4d3a2ecf2a89d8b07e1e19378065bead32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:32.983343 kubelet[2813]: E0313 00:46:32.983251 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d449bdc98e95050f2f1f18e1faab4d3a2ecf2a89d8b07e1e19378065bead32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:32.983505 kubelet[2813]: E0313 00:46:32.983439 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d449bdc98e95050f2f1f18e1faab4d3a2ecf2a89d8b07e1e19378065bead32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69dc4c9c9d-glfmk"
Mar 13 00:46:32.983751 kubelet[2813]: E0313 00:46:32.983631 2813 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d449bdc98e95050f2f1f18e1faab4d3a2ecf2a89d8b07e1e19378065bead32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69dc4c9c9d-glfmk"
Mar 13 00:46:32.984358 kubelet[2813]: E0313 00:46:32.984050 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69dc4c9c9d-glfmk_calico-system(435654c3-0c48-48a7-b99b-f731c72c3587)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69dc4c9c9d-glfmk_calico-system(435654c3-0c48-48a7-b99b-f731c72c3587)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79d449bdc98e95050f2f1f18e1faab4d3a2ecf2a89d8b07e1e19378065bead32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69dc4c9c9d-glfmk" podUID="435654c3-0c48-48a7-b99b-f731c72c3587"
Mar 13 00:46:33.002143 containerd[1555]: time="2026-03-13T00:46:33.002047601Z" level=error msg="Failed to destroy network for sandbox \"42a38e22e184949ca1a62ccb681369d2c7ac104a478842dc642a74b1557e8796\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:33.004264 containerd[1555]: time="2026-03-13T00:46:33.004180020Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7df5f8b75f-hdlkc,Uid:ed18a379-1928-4da8-b306-c7990ac89b7b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"42a38e22e184949ca1a62ccb681369d2c7ac104a478842dc642a74b1557e8796\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:33.005763 kubelet[2813]: E0313 00:46:33.004424 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42a38e22e184949ca1a62ccb681369d2c7ac104a478842dc642a74b1557e8796\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:33.005763 kubelet[2813]: E0313 00:46:33.004468 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42a38e22e184949ca1a62ccb681369d2c7ac104a478842dc642a74b1557e8796\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7df5f8b75f-hdlkc"
Mar 13 00:46:33.005763 kubelet[2813]: E0313 00:46:33.004488 2813 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42a38e22e184949ca1a62ccb681369d2c7ac104a478842dc642a74b1557e8796\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7df5f8b75f-hdlkc"
Mar 13 00:46:33.005872 kubelet[2813]: E0313 00:46:33.004528 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7df5f8b75f-hdlkc_calico-system(ed18a379-1928-4da8-b306-c7990ac89b7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7df5f8b75f-hdlkc_calico-system(ed18a379-1928-4da8-b306-c7990ac89b7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42a38e22e184949ca1a62ccb681369d2c7ac104a478842dc642a74b1557e8796\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7df5f8b75f-hdlkc" podUID="ed18a379-1928-4da8-b306-c7990ac89b7b"
Mar 13 00:46:33.011345 containerd[1555]: time="2026-03-13T00:46:33.011223774Z" level=error msg="Failed to destroy network for sandbox \"d323a1fbfbd922b182a4281b7c8b55102ba29951d03367256cd08ed1db732b61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:33.013976 containerd[1555]: time="2026-03-13T00:46:33.013871384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-tdkkp,Uid:296a1e16-2486-488b-be64-98f75fb175a3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d323a1fbfbd922b182a4281b7c8b55102ba29951d03367256cd08ed1db732b61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:33.015201 kubelet[2813]: E0313 00:46:33.014599 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d323a1fbfbd922b182a4281b7c8b55102ba29951d03367256cd08ed1db732b61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 13 00:46:33.015201 kubelet[2813]: E0313 00:46:33.014796 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d323a1fbfbd922b182a4281b7c8b55102ba29951d03367256cd08ed1db732b61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-tdkkp"
Mar 13 00:46:33.015201 kubelet[2813]: E0313 00:46:33.014890 2813 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d323a1fbfbd922b182a4281b7c8b55102ba29951d03367256cd08ed1db732b61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-tdkkp"
Mar 13 00:46:33.015624 kubelet[2813]: E0313 00:46:33.014933 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-tdkkp_calico-system(296a1e16-2486-488b-be64-98f75fb175a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-tdkkp_calico-system(296a1e16-2486-488b-be64-98f75fb175a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d323a1fbfbd922b182a4281b7c8b55102ba29951d03367256cd08ed1db732b61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-tdkkp" podUID="296a1e16-2486-488b-be64-98f75fb175a3"
Mar 13 00:46:33.295621 systemd[1]: Created slice kubepods-besteffort-pod4567d070_24e0_470c_b37a_b10f7102b657.slice - libcontainer container kubepods-besteffort-pod4567d070_24e0_470c_b37a_b10f7102b657.slice.
Mar 13 00:46:33.307478 containerd[1555]: time="2026-03-13T00:46:33.307300019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k9f9b,Uid:4567d070-24e0-470c-b37a-b10f7102b657,Namespace:calico-system,Attempt:0,}"
Mar 13 00:46:33.507831 kubelet[2813]: I0313 00:46:33.507629 2813 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-vvr5x" podStartSLOduration=2.533371275 podStartE2EDuration="28.507616519s" podCreationTimestamp="2026-03-13 00:46:05 +0000 UTC" firstStartedPulling="2026-03-13 00:46:06.453393213 +0000 UTC m=+39.484177903" lastFinishedPulling="2026-03-13 00:46:32.427638446 +0000 UTC m=+65.458423147" observedRunningTime="2026-03-13 00:46:33.502027855 +0000 UTC m=+66.532812535" watchObservedRunningTime="2026-03-13 00:46:33.507616519 +0000 UTC m=+66.538401199"
Mar 13 00:46:33.566270 kubelet[2813]: I0313 00:46:33.566115 2813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/435654c3-0c48-48a7-b99b-f731c72c3587-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/435654c3-0c48-48a7-b99b-f731c72c3587-whisker-backend-key-pair\") pod \"435654c3-0c48-48a7-b99b-f731c72c3587\" (UID: \"435654c3-0c48-48a7-b99b-f731c72c3587\") "
Mar 13 00:46:33.566270 kubelet[2813]: I0313 00:46:33.566195 2813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/435654c3-0c48-48a7-b99b-f731c72c3587-nginx-config\" (UniqueName: \"kubernetes.io/configmap/435654c3-0c48-48a7-b99b-f731c72c3587-nginx-config\") pod \"435654c3-0c48-48a7-b99b-f731c72c3587\" (UID: \"435654c3-0c48-48a7-b99b-f731c72c3587\") "
Mar 13 00:46:33.566270 kubelet[2813]: I0313 00:46:33.566235 2813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/435654c3-0c48-48a7-b99b-f731c72c3587-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/435654c3-0c48-48a7-b99b-f731c72c3587-whisker-ca-bundle\") pod \"435654c3-0c48-48a7-b99b-f731c72c3587\" (UID: \"435654c3-0c48-48a7-b99b-f731c72c3587\") "
Mar 13 00:46:33.566270 kubelet[2813]: I0313 00:46:33.566270 2813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/435654c3-0c48-48a7-b99b-f731c72c3587-kube-api-access-bjf86\" (UniqueName: \"kubernetes.io/projected/435654c3-0c48-48a7-b99b-f731c72c3587-kube-api-access-bjf86\") pod \"435654c3-0c48-48a7-b99b-f731c72c3587\" (UID: \"435654c3-0c48-48a7-b99b-f731c72c3587\") "
Mar 13 00:46:33.568068 kubelet[2813]: I0313 00:46:33.568013 2813 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435654c3-0c48-48a7-b99b-f731c72c3587-nginx-config" pod "435654c3-0c48-48a7-b99b-f731c72c3587" (UID: "435654c3-0c48-48a7-b99b-f731c72c3587"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 13 00:46:33.568891 kubelet[2813]: I0313 00:46:33.568847 2813 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435654c3-0c48-48a7-b99b-f731c72c3587-whisker-ca-bundle" pod "435654c3-0c48-48a7-b99b-f731c72c3587" (UID: "435654c3-0c48-48a7-b99b-f731c72c3587"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 13 00:46:33.580790 systemd[1]: var-lib-kubelet-pods-435654c3\x2d0c48\x2d48a7\x2db99b\x2df731c72c3587-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Mar 13 00:46:33.585100 kubelet[2813]: I0313 00:46:33.584984 2813 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435654c3-0c48-48a7-b99b-f731c72c3587-whisker-backend-key-pair" pod "435654c3-0c48-48a7-b99b-f731c72c3587" (UID: "435654c3-0c48-48a7-b99b-f731c72c3587"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 13 00:46:33.585170 kubelet[2813]: I0313 00:46:33.585136 2813 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435654c3-0c48-48a7-b99b-f731c72c3587-kube-api-access-bjf86" pod "435654c3-0c48-48a7-b99b-f731c72c3587" (UID: "435654c3-0c48-48a7-b99b-f731c72c3587"). InnerVolumeSpecName "kube-api-access-bjf86". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 13 00:46:33.589950 systemd[1]: var-lib-kubelet-pods-435654c3\x2d0c48\x2d48a7\x2db99b\x2df731c72c3587-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbjf86.mount: Deactivated successfully.
Mar 13 00:46:33.594623 systemd-networkd[1471]: calic16f8a197a8: Link UP
Mar 13 00:46:33.596637 systemd-networkd[1471]: calic16f8a197a8: Gained carrier
Mar 13 00:46:33.620761 containerd[1555]: 2026-03-13 00:46:33.370 [ERROR][3902] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 13 00:46:33.620761 containerd[1555]: 2026-03-13 00:46:33.408 [INFO][3902] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--k9f9b-eth0 csi-node-driver- calico-system 4567d070-24e0-470c-b37a-b10f7102b657 734 0 2026-03-13 00:46:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-k9f9b eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic16f8a197a8 [] [] }} ContainerID="caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" Namespace="calico-system" Pod="csi-node-driver-k9f9b" WorkloadEndpoint="localhost-k8s-csi--node--driver--k9f9b-"
Mar 13 00:46:33.620761 containerd[1555]: 2026-03-13 00:46:33.408 [INFO][3902] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" Namespace="calico-system" Pod="csi-node-driver-k9f9b" WorkloadEndpoint="localhost-k8s-csi--node--driver--k9f9b-eth0"
Mar 13 00:46:33.620761 containerd[1555]: 2026-03-13 00:46:33.483 [INFO][3917] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" HandleID="k8s-pod-network.caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" Workload="localhost-k8s-csi--node--driver--k9f9b-eth0"
Mar 13 00:46:33.621047 containerd[1555]: 2026-03-13 00:46:33.492 [INFO][3917] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" HandleID="k8s-pod-network.caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" Workload="localhost-k8s-csi--node--driver--k9f9b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a73b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-k9f9b", "timestamp":"2026-03-13 00:46:33.483829293 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00067e160)}
Mar 13 00:46:33.621047 containerd[1555]: 2026-03-13 00:46:33.492 [INFO][3917] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 13 00:46:33.621047 containerd[1555]: 2026-03-13 00:46:33.492 [INFO][3917] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 13 00:46:33.621047 containerd[1555]: 2026-03-13 00:46:33.492 [INFO][3917] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 13 00:46:33.621047 containerd[1555]: 2026-03-13 00:46:33.497 [INFO][3917] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" host="localhost"
Mar 13 00:46:33.621047 containerd[1555]: 2026-03-13 00:46:33.514 [INFO][3917] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 13 00:46:33.621047 containerd[1555]: 2026-03-13 00:46:33.525 [INFO][3917] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 13 00:46:33.621047 containerd[1555]: 2026-03-13 00:46:33.531 [INFO][3917] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 13 00:46:33.621047 containerd[1555]: 2026-03-13 00:46:33.535 [INFO][3917] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 13 00:46:33.621047 containerd[1555]: 2026-03-13 00:46:33.535 [INFO][3917] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" host="localhost"
Mar 13 00:46:33.621396 containerd[1555]: 2026-03-13 00:46:33.541 [INFO][3917] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33
Mar 13 00:46:33.621396 containerd[1555]: 2026-03-13 00:46:33.548 [INFO][3917] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" host="localhost"
Mar 13 00:46:33.621396 containerd[1555]: 2026-03-13 00:46:33.556 [INFO][3917] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" host="localhost"
Mar 13 00:46:33.621396 containerd[1555]: 2026-03-13 00:46:33.557 [INFO][3917] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" host="localhost"
Mar 13 00:46:33.621396 containerd[1555]: 2026-03-13 00:46:33.558 [INFO][3917] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 13 00:46:33.621396 containerd[1555]: 2026-03-13 00:46:33.558 [INFO][3917] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" HandleID="k8s-pod-network.caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" Workload="localhost-k8s-csi--node--driver--k9f9b-eth0"
Mar 13 00:46:33.621556 containerd[1555]: 2026-03-13 00:46:33.567 [INFO][3902] cni-plugin/k8s.go 418: Populated endpoint ContainerID="caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" Namespace="calico-system" Pod="csi-node-driver-k9f9b" WorkloadEndpoint="localhost-k8s-csi--node--driver--k9f9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k9f9b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4567d070-24e0-470c-b37a-b10f7102b657", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 46, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-k9f9b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic16f8a197a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 00:46:33.621746 containerd[1555]: 2026-03-13 00:46:33.567 [INFO][3902] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" Namespace="calico-system" Pod="csi-node-driver-k9f9b" WorkloadEndpoint="localhost-k8s-csi--node--driver--k9f9b-eth0"
Mar 13 00:46:33.621746 containerd[1555]: 2026-03-13 00:46:33.567 [INFO][3902] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic16f8a197a8 ContainerID="caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" Namespace="calico-system" Pod="csi-node-driver-k9f9b" WorkloadEndpoint="localhost-k8s-csi--node--driver--k9f9b-eth0"
Mar 13 00:46:33.621746 containerd[1555]: 2026-03-13 00:46:33.593 [INFO][3902] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" Namespace="calico-system" Pod="csi-node-driver-k9f9b" WorkloadEndpoint="localhost-k8s-csi--node--driver--k9f9b-eth0"
Mar 13 00:46:33.621818 containerd[1555]: 2026-03-13 00:46:33.594 [INFO][3902] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" Namespace="calico-system" Pod="csi-node-driver-k9f9b" WorkloadEndpoint="localhost-k8s-csi--node--driver--k9f9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k9f9b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4567d070-24e0-470c-b37a-b10f7102b657", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 46, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33", Pod:"csi-node-driver-k9f9b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic16f8a197a8", MAC:"6e:66:56:cf:6b:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 00:46:33.621921 containerd[1555]: 2026-03-13 00:46:33.613 [INFO][3902] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" Namespace="calico-system" Pod="csi-node-driver-k9f9b" WorkloadEndpoint="localhost-k8s-csi--node--driver--k9f9b-eth0"
Mar 13 00:46:33.652215 containerd[1555]: time="2026-03-13T00:46:33.651963046Z" level=info msg="connecting to shim caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33" address="unix:///run/containerd/s/15f8c2a01c5591370500a740c2ae34202544fe65da68d8499dd1f7c37af327c7" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:46:33.666797 kubelet[2813]: I0313 00:46:33.666568 2813 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/435654c3-0c48-48a7-b99b-f731c72c3587-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Mar 13 00:46:33.666797 kubelet[2813]: I0313 00:46:33.666643 2813 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/435654c3-0c48-48a7-b99b-f731c72c3587-nginx-config\") on node \"localhost\" DevicePath \"\""
Mar 13 00:46:33.666797 kubelet[2813]: I0313 00:46:33.666742 2813 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/435654c3-0c48-48a7-b99b-f731c72c3587-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Mar 13 00:46:33.666797 kubelet[2813]: I0313 00:46:33.666750 2813 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bjf86\" (UniqueName: \"kubernetes.io/projected/435654c3-0c48-48a7-b99b-f731c72c3587-kube-api-access-bjf86\") on node \"localhost\" DevicePath \"\""
Mar 13 00:46:33.697911 systemd[1]: Started cri-containerd-caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33.scope - libcontainer container caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33.
Mar 13 00:46:33.718321 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 13 00:46:33.745373 containerd[1555]: time="2026-03-13T00:46:33.745185184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k9f9b,Uid:4567d070-24e0-470c-b37a-b10f7102b657,Namespace:calico-system,Attempt:0,} returns sandbox id \"caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33\""
Mar 13 00:46:33.748318 containerd[1555]: time="2026-03-13T00:46:33.748247787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Mar 13 00:46:34.478418 systemd[1]: Removed slice kubepods-besteffort-pod435654c3_0c48_48a7_b99b_f731c72c3587.slice - libcontainer container kubepods-besteffort-pod435654c3_0c48_48a7_b99b_f731c72c3587.slice.
Mar 13 00:46:34.500220 containerd[1555]: time="2026-03-13T00:46:34.500109479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:34.502504 containerd[1555]: time="2026-03-13T00:46:34.502441060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502"
Mar 13 00:46:34.504379 containerd[1555]: time="2026-03-13T00:46:34.504253978Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:34.509581 containerd[1555]: time="2026-03-13T00:46:34.509493382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:34.513305 containerd[1555]: time="2026-03-13T00:46:34.513164237Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 764.814851ms"
Mar 13 00:46:34.513305 containerd[1555]: time="2026-03-13T00:46:34.513234979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\""
Mar 13 00:46:34.527124 containerd[1555]: time="2026-03-13T00:46:34.526921160Z" level=info msg="CreateContainer within sandbox \"caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 13 00:46:34.556347 containerd[1555]: time="2026-03-13T00:46:34.556247372Z" level=info msg="Container 39ed31bda0d938fcf425dfa977e0839a39fec89f150aa3d4be03799e14c75439: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:46:34.575833 containerd[1555]: time="2026-03-13T00:46:34.575578974Z" level=info msg="CreateContainer within sandbox \"caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"39ed31bda0d938fcf425dfa977e0839a39fec89f150aa3d4be03799e14c75439\""
Mar 13 00:46:34.579482 containerd[1555]: time="2026-03-13T00:46:34.579317134Z" level=info msg="StartContainer for \"39ed31bda0d938fcf425dfa977e0839a39fec89f150aa3d4be03799e14c75439\""
Mar 13 00:46:34.581283 containerd[1555]: time="2026-03-13T00:46:34.581118237Z" level=info msg="connecting to shim 39ed31bda0d938fcf425dfa977e0839a39fec89f150aa3d4be03799e14c75439" address="unix:///run/containerd/s/15f8c2a01c5591370500a740c2ae34202544fe65da68d8499dd1f7c37af327c7" protocol=ttrpc version=3
Mar 13 00:46:34.620467 systemd[1]: Created slice kubepods-besteffort-pod02ca9253_bfe0_4ccb_9412_1480c5cc7232.slice - libcontainer container kubepods-besteffort-pod02ca9253_bfe0_4ccb_9412_1480c5cc7232.slice.
Mar 13 00:46:34.647073 systemd[1]: Started cri-containerd-39ed31bda0d938fcf425dfa977e0839a39fec89f150aa3d4be03799e14c75439.scope - libcontainer container 39ed31bda0d938fcf425dfa977e0839a39fec89f150aa3d4be03799e14c75439.
Mar 13 00:46:34.679046 kubelet[2813]: I0313 00:46:34.678643 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02ca9253-bfe0-4ccb-9412-1480c5cc7232-whisker-ca-bundle\") pod \"whisker-85b797c859-8t8kp\" (UID: \"02ca9253-bfe0-4ccb-9412-1480c5cc7232\") " pod="calico-system/whisker-85b797c859-8t8kp"
Mar 13 00:46:34.679046 kubelet[2813]: I0313 00:46:34.678823 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/02ca9253-bfe0-4ccb-9412-1480c5cc7232-nginx-config\") pod \"whisker-85b797c859-8t8kp\" (UID: \"02ca9253-bfe0-4ccb-9412-1480c5cc7232\") " pod="calico-system/whisker-85b797c859-8t8kp"
Mar 13 00:46:34.679046 kubelet[2813]: I0313 00:46:34.678846 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/02ca9253-bfe0-4ccb-9412-1480c5cc7232-whisker-backend-key-pair\") pod \"whisker-85b797c859-8t8kp\" (UID: \"02ca9253-bfe0-4ccb-9412-1480c5cc7232\") " pod="calico-system/whisker-85b797c859-8t8kp"
Mar 13 00:46:34.679046 kubelet[2813]: I0313 00:46:34.678862 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbpc9\" (UniqueName: \"kubernetes.io/projected/02ca9253-bfe0-4ccb-9412-1480c5cc7232-kube-api-access-pbpc9\") pod \"whisker-85b797c859-8t8kp\" (UID: \"02ca9253-bfe0-4ccb-9412-1480c5cc7232\") " pod="calico-system/whisker-85b797c859-8t8kp"
Mar 13 00:46:34.904571 containerd[1555]: time="2026-03-13T00:46:34.903074950Z" level=info msg="StartContainer for \"39ed31bda0d938fcf425dfa977e0839a39fec89f150aa3d4be03799e14c75439\" returns successfully"
Mar 13 00:46:34.906386 containerd[1555]: time="2026-03-13T00:46:34.905553094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 13 00:46:34.934112 containerd[1555]: time="2026-03-13T00:46:34.933953346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85b797c859-8t8kp,Uid:02ca9253-bfe0-4ccb-9412-1480c5cc7232,Namespace:calico-system,Attempt:0,}"
Mar 13 00:46:35.214132 systemd-networkd[1471]: calied6fb50cbfa: Link UP
Mar 13 00:46:35.218827 systemd-networkd[1471]: calied6fb50cbfa: Gained carrier
Mar 13 00:46:35.251428 containerd[1555]: 2026-03-13 00:46:35.013 [ERROR][4184] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 13 00:46:35.251428 containerd[1555]: 2026-03-13 00:46:35.047 [INFO][4184] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--85b797c859--8t8kp-eth0 whisker-85b797c859- calico-system 02ca9253-bfe0-4ccb-9412-1480c5cc7232 966 0 2026-03-13 00:46:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:85b797c859 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-85b797c859-8t8kp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calied6fb50cbfa [] [] }} ContainerID="9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" Namespace="calico-system" Pod="whisker-85b797c859-8t8kp" WorkloadEndpoint="localhost-k8s-whisker--85b797c859--8t8kp-"
Mar 13 00:46:35.251428 containerd[1555]: 2026-03-13 00:46:35.047 [INFO][4184] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" Namespace="calico-system" Pod="whisker-85b797c859-8t8kp" WorkloadEndpoint="localhost-k8s-whisker--85b797c859--8t8kp-eth0"
Mar 13 00:46:35.251428 containerd[1555]: 2026-03-13 00:46:35.136 [INFO][4202] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" HandleID="k8s-pod-network.9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" Workload="localhost-k8s-whisker--85b797c859--8t8kp-eth0"
Mar 13 00:46:35.251895 containerd[1555]: 2026-03-13 00:46:35.149 [INFO][4202] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" HandleID="k8s-pod-network.9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" Workload="localhost-k8s-whisker--85b797c859--8t8kp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040e5b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-85b797c859-8t8kp", "timestamp":"2026-03-13 00:46:35.136091348 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00062af20)}
Mar 13 00:46:35.251895 containerd[1555]: 2026-03-13 00:46:35.149 [INFO][4202] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 13 00:46:35.251895 containerd[1555]: 2026-03-13 00:46:35.150 [INFO][4202] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 13 00:46:35.251895 containerd[1555]: 2026-03-13 00:46:35.150 [INFO][4202] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 13 00:46:35.251895 containerd[1555]: 2026-03-13 00:46:35.155 [INFO][4202] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" host="localhost"
Mar 13 00:46:35.251895 containerd[1555]: 2026-03-13 00:46:35.161 [INFO][4202] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 13 00:46:35.251895 containerd[1555]: 2026-03-13 00:46:35.168 [INFO][4202] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 13 00:46:35.251895 containerd[1555]: 2026-03-13 00:46:35.172 [INFO][4202] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 13 00:46:35.251895 containerd[1555]: 2026-03-13 00:46:35.178 [INFO][4202] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 13 00:46:35.251895 containerd[1555]: 2026-03-13 00:46:35.178 [INFO][4202] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" host="localhost"
Mar 13 00:46:35.252266 containerd[1555]: 2026-03-13 00:46:35.183 [INFO][4202] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903
Mar 13 00:46:35.252266 containerd[1555]: 2026-03-13 00:46:35.190 [INFO][4202] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" host="localhost"
Mar 13 00:46:35.252266 containerd[1555]: 2026-03-13 00:46:35.198 [INFO][4202] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" host="localhost"
Mar 13 00:46:35.252266 containerd[1555]: 2026-03-13 00:46:35.198 [INFO][4202] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" host="localhost"
Mar 13 00:46:35.252266 containerd[1555]: 2026-03-13 00:46:35.199 [INFO][4202] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 13 00:46:35.252266 containerd[1555]: 2026-03-13 00:46:35.199 [INFO][4202] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" HandleID="k8s-pod-network.9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" Workload="localhost-k8s-whisker--85b797c859--8t8kp-eth0"
Mar 13 00:46:35.252424 containerd[1555]: 2026-03-13 00:46:35.207 [INFO][4184] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" Namespace="calico-system" Pod="whisker-85b797c859-8t8kp" WorkloadEndpoint="localhost-k8s-whisker--85b797c859--8t8kp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--85b797c859--8t8kp-eth0", GenerateName:"whisker-85b797c859-", Namespace:"calico-system", SelfLink:"", UID:"02ca9253-bfe0-4ccb-9412-1480c5cc7232", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 46, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85b797c859", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-85b797c859-8t8kp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calied6fb50cbfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 00:46:35.252424 containerd[1555]: 2026-03-13 00:46:35.207 [INFO][4184] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" Namespace="calico-system" Pod="whisker-85b797c859-8t8kp" WorkloadEndpoint="localhost-k8s-whisker--85b797c859--8t8kp-eth0"
Mar 13 00:46:35.252585 containerd[1555]: 2026-03-13 00:46:35.207 [INFO][4184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied6fb50cbfa ContainerID="9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" Namespace="calico-system" Pod="whisker-85b797c859-8t8kp" WorkloadEndpoint="localhost-k8s-whisker--85b797c859--8t8kp-eth0"
Mar 13 00:46:35.252585 containerd[1555]: 2026-03-13 00:46:35.216 [INFO][4184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" Namespace="calico-system" Pod="whisker-85b797c859-8t8kp" WorkloadEndpoint="localhost-k8s-whisker--85b797c859--8t8kp-eth0"
Mar 13 00:46:35.252630 containerd[1555]: 2026-03-13 00:46:35.223 [INFO][4184] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" Namespace="calico-system" Pod="whisker-85b797c859-8t8kp" WorkloadEndpoint="localhost-k8s-whisker--85b797c859--8t8kp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--85b797c859--8t8kp-eth0", GenerateName:"whisker-85b797c859-", Namespace:"calico-system", SelfLink:"", UID:"02ca9253-bfe0-4ccb-9412-1480c5cc7232", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 46, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85b797c859", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903", Pod:"whisker-85b797c859-8t8kp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calied6fb50cbfa", MAC:"e2:f5:50:31:28:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 00:46:35.252813 containerd[1555]: 2026-03-13 00:46:35.247 [INFO][4184] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" Namespace="calico-system" Pod="whisker-85b797c859-8t8kp" WorkloadEndpoint="localhost-k8s-whisker--85b797c859--8t8kp-eth0"
Mar 13 00:46:35.279232 systemd-networkd[1471]: calic16f8a197a8: Gained IPv6LL
Mar 13 00:46:35.301283 kubelet[2813]: I0313 00:46:35.301199 2813 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="435654c3-0c48-48a7-b99b-f731c72c3587" path="/var/lib/kubelet/pods/435654c3-0c48-48a7-b99b-f731c72c3587/volumes"
Mar 13 00:46:35.311447 containerd[1555]: time="2026-03-13T00:46:35.311376489Z" level=info msg="connecting to shim 9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903" address="unix:///run/containerd/s/a9a39b24159b8dc26d38933f6cab34ed6d954b5956e7e1ad3a332489a4966566" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:46:35.366977 systemd[1]: Started cri-containerd-9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903.scope - libcontainer container 9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903.
Mar 13 00:46:35.413785 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:46:35.511135 containerd[1555]: time="2026-03-13T00:46:35.510520228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85b797c859-8t8kp,Uid:02ca9253-bfe0-4ccb-9412-1480c5cc7232,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903\"" Mar 13 00:46:35.957296 containerd[1555]: time="2026-03-13T00:46:35.957112207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:35.958941 containerd[1555]: time="2026-03-13T00:46:35.958645067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 13 00:46:35.960910 containerd[1555]: time="2026-03-13T00:46:35.960783448Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:35.964303 containerd[1555]: time="2026-03-13T00:46:35.964146199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:35.964951 containerd[1555]: time="2026-03-13T00:46:35.964876534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.059297381s" Mar 13 00:46:35.965026 containerd[1555]: time="2026-03-13T00:46:35.964960650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 13 00:46:35.968217 containerd[1555]: time="2026-03-13T00:46:35.968057156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 13 00:46:35.976129 containerd[1555]: time="2026-03-13T00:46:35.975959705Z" level=info msg="CreateContainer within sandbox \"caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 13 00:46:35.990783 containerd[1555]: time="2026-03-13T00:46:35.990044676Z" level=info msg="Container a2577aad511ada4f505846f949ebf809d15265c939d0a43bc51a822ac8851a00: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:36.006053 containerd[1555]: time="2026-03-13T00:46:36.005971847Z" level=info msg="CreateContainer within sandbox \"caa0b748fbda97d8dd94e35f3dff5e56eb6ca798cf2d8550d2a7ab4c3bd56b33\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a2577aad511ada4f505846f949ebf809d15265c939d0a43bc51a822ac8851a00\"" Mar 13 00:46:36.006822 containerd[1555]: time="2026-03-13T00:46:36.006583687Z" level=info msg="StartContainer for \"a2577aad511ada4f505846f949ebf809d15265c939d0a43bc51a822ac8851a00\"" Mar 13 00:46:36.008465 containerd[1555]: time="2026-03-13T00:46:36.008270468Z" level=info msg="connecting to shim 
a2577aad511ada4f505846f949ebf809d15265c939d0a43bc51a822ac8851a00" address="unix:///run/containerd/s/15f8c2a01c5591370500a740c2ae34202544fe65da68d8499dd1f7c37af327c7" protocol=ttrpc version=3 Mar 13 00:46:36.051147 systemd[1]: Started cri-containerd-a2577aad511ada4f505846f949ebf809d15265c939d0a43bc51a822ac8851a00.scope - libcontainer container a2577aad511ada4f505846f949ebf809d15265c939d0a43bc51a822ac8851a00. Mar 13 00:46:36.124247 systemd-networkd[1471]: vxlan.calico: Link UP Mar 13 00:46:36.124257 systemd-networkd[1471]: vxlan.calico: Gained carrier Mar 13 00:46:36.191035 containerd[1555]: time="2026-03-13T00:46:36.190928566Z" level=info msg="StartContainer for \"a2577aad511ada4f505846f949ebf809d15265c939d0a43bc51a822ac8851a00\" returns successfully" Mar 13 00:46:36.512826 kubelet[2813]: I0313 00:46:36.512439 2813 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-k9f9b" podStartSLOduration=29.292159262 podStartE2EDuration="31.512424112s" podCreationTimestamp="2026-03-13 00:46:05 +0000 UTC" firstStartedPulling="2026-03-13 00:46:33.747292243 +0000 UTC m=+66.778076924" lastFinishedPulling="2026-03-13 00:46:35.967557095 +0000 UTC m=+68.998341774" observedRunningTime="2026-03-13 00:46:36.512273442 +0000 UTC m=+69.543058121" watchObservedRunningTime="2026-03-13 00:46:36.512424112 +0000 UTC m=+69.543208792" Mar 13 00:46:36.781091 containerd[1555]: time="2026-03-13T00:46:36.780545836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:36.781763 containerd[1555]: time="2026-03-13T00:46:36.781628762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 13 00:46:36.784380 containerd[1555]: time="2026-03-13T00:46:36.784120261Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:36.789238 containerd[1555]: time="2026-03-13T00:46:36.789131884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:36.792232 containerd[1555]: time="2026-03-13T00:46:36.792125989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 823.979016ms" Mar 13 00:46:36.792232 containerd[1555]: time="2026-03-13T00:46:36.792196521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 13 00:46:36.801821 containerd[1555]: time="2026-03-13T00:46:36.801762598Z" level=info msg="CreateContainer within sandbox \"9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 13 00:46:36.814641 containerd[1555]: time="2026-03-13T00:46:36.814515971Z" level=info msg="Container 8e99063eaf129b75e2b5bb27db20ca3be78720b74878391a51da547f265ee0e7: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:36.815023 systemd-networkd[1471]: 
calied6fb50cbfa: Gained IPv6LL Mar 13 00:46:36.833913 containerd[1555]: time="2026-03-13T00:46:36.833555226Z" level=info msg="CreateContainer within sandbox \"9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8e99063eaf129b75e2b5bb27db20ca3be78720b74878391a51da547f265ee0e7\"" Mar 13 00:46:36.834630 containerd[1555]: time="2026-03-13T00:46:36.834522076Z" level=info msg="StartContainer for \"8e99063eaf129b75e2b5bb27db20ca3be78720b74878391a51da547f265ee0e7\"" Mar 13 00:46:36.836552 containerd[1555]: time="2026-03-13T00:46:36.836465414Z" level=info msg="connecting to shim 8e99063eaf129b75e2b5bb27db20ca3be78720b74878391a51da547f265ee0e7" address="unix:///run/containerd/s/a9a39b24159b8dc26d38933f6cab34ed6d954b5956e7e1ad3a332489a4966566" protocol=ttrpc version=3 Mar 13 00:46:36.886186 systemd[1]: Started cri-containerd-8e99063eaf129b75e2b5bb27db20ca3be78720b74878391a51da547f265ee0e7.scope - libcontainer container 8e99063eaf129b75e2b5bb27db20ca3be78720b74878391a51da547f265ee0e7. Mar 13 00:46:36.969605 containerd[1555]: time="2026-03-13T00:46:36.969426251Z" level=info msg="StartContainer for \"8e99063eaf129b75e2b5bb27db20ca3be78720b74878391a51da547f265ee0e7\" returns successfully" Mar 13 00:46:36.974292 containerd[1555]: time="2026-03-13T00:46:36.974246191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 13 00:46:37.154202 kubelet[2813]: I0313 00:46:37.154091 2813 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 13 00:46:37.154202 kubelet[2813]: I0313 00:46:37.154198 2813 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 13 00:46:37.263126 systemd-networkd[1471]: vxlan.calico: Gained IPv6LL Mar 13 00:46:37.917022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4043522.mount: Deactivated successfully. 
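[Editorial note] The kubelet pod_startup_latency_tracker entries above carry enough data to re-derive their own figures: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal sketch reproducing the csi-node-driver-k9f9b numbers, with timestamps copied from the log and truncated to microseconds; this shows only the arithmetic the tracker reports, not kubelet's implementation:

```python
# Sketch: re-derive kubelet's pod_startup_latency_tracker figures for
# csi-node-driver-k9f9b from the timestamps in the log entry above.
from datetime import datetime

def ts(s: str) -> datetime:
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f %z")

created            = ts("2026-03-13 00:46:05.000000 +0000")  # podCreationTimestamp
first_started_pull = ts("2026-03-13 00:46:33.747292 +0000")  # firstStartedPulling
last_finished_pull = ts("2026-03-13 00:46:35.967557 +0000")  # lastFinishedPulling
observed_running   = ts("2026-03-13 00:46:36.512424 +0000")  # observedRunningTime

e2e  = (observed_running - created).total_seconds()
pull = (last_finished_pull - first_started_pull).total_seconds()
slo  = e2e - pull  # the SLO duration excludes time spent pulling images

print(f"podStartE2EDuration ~ {e2e:.3f}s")  # ~31.512s, matching the log
print(f"podStartSLOduration ~ {slo:.3f}s")  # ~29.292s, matching the log
```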
Mar 13 00:46:37.949359 containerd[1555]: time="2026-03-13T00:46:37.949263220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:37.951361 containerd[1555]: time="2026-03-13T00:46:37.951210666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 13 00:46:37.952718 containerd[1555]: time="2026-03-13T00:46:37.952597358Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:37.956096 containerd[1555]: time="2026-03-13T00:46:37.955967120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:37.956556 containerd[1555]: time="2026-03-13T00:46:37.956513841Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 982.231191ms" Mar 13 00:46:37.956556 containerd[1555]: time="2026-03-13T00:46:37.956543075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 13 00:46:37.966068 containerd[1555]: time="2026-03-13T00:46:37.965921096Z" level=info msg="CreateContainer within sandbox \"9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 13 00:46:37.978175 containerd[1555]: time="2026-03-13T00:46:37.978056889Z" level=info msg="Container b9aefdfa17e84f3e4b74aec12f7fc9008c7a2dc3689fed2f2c99faf72fac53a1: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:37.989917 containerd[1555]: time="2026-03-13T00:46:37.989831826Z" level=info msg="CreateContainer within sandbox \"9b23e31af631a41e94083681eb823573196de27a36281729b6610f0fb3e13903\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b9aefdfa17e84f3e4b74aec12f7fc9008c7a2dc3689fed2f2c99faf72fac53a1\"" Mar 13 00:46:37.991032 containerd[1555]: time="2026-03-13T00:46:37.990932776Z" level=info msg="StartContainer for \"b9aefdfa17e84f3e4b74aec12f7fc9008c7a2dc3689fed2f2c99faf72fac53a1\"" Mar 13 00:46:37.992140 containerd[1555]: time="2026-03-13T00:46:37.992116480Z" level=info msg="connecting to shim b9aefdfa17e84f3e4b74aec12f7fc9008c7a2dc3689fed2f2c99faf72fac53a1" address="unix:///run/containerd/s/a9a39b24159b8dc26d38933f6cab34ed6d954b5956e7e1ad3a332489a4966566" protocol=ttrpc version=3 Mar 13 00:46:38.030021 systemd[1]: Started cri-containerd-b9aefdfa17e84f3e4b74aec12f7fc9008c7a2dc3689fed2f2c99faf72fac53a1.scope - libcontainer container b9aefdfa17e84f3e4b74aec12f7fc9008c7a2dc3689fed2f2c99faf72fac53a1. 
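[Editorial note] Each pull above pairs a "stop pulling image ... bytes read=N" line with a "Pulled ... in D" line, which is enough for a rough transfer-rate estimate. A sketch using the whisker-backend figures from the preceding entries; note that the "size" field in the Pulled message is a content size and need not equal the bytes actually read off the wire:

```python
# Sketch: rough registry throughput for the whisker-backend pull above.
bytes_read   = 17_609_475    # from "stop pulling image ... bytes read=17609475"
pull_seconds = 0.982231191   # from "Pulled ... in 982.231191ms"

mib_per_s = bytes_read / pull_seconds / (1 << 20)
print(f"~{mib_per_s:.1f} MiB/s")  # ~17.1 MiB/s for this pull
```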
Mar 13 00:46:38.102224 containerd[1555]: time="2026-03-13T00:46:38.102130288Z" level=info msg="StartContainer for \"b9aefdfa17e84f3e4b74aec12f7fc9008c7a2dc3689fed2f2c99faf72fac53a1\" returns successfully" Mar 13 00:46:38.522588 kubelet[2813]: I0313 00:46:38.522354 2813 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-85b797c859-8t8kp" podStartSLOduration=2.094804259 podStartE2EDuration="4.522339991s" podCreationTimestamp="2026-03-13 00:46:34 +0000 UTC" firstStartedPulling="2026-03-13 00:46:35.53036373 +0000 UTC m=+68.561148410" lastFinishedPulling="2026-03-13 00:46:37.957899462 +0000 UTC m=+70.988684142" observedRunningTime="2026-03-13 00:46:38.521158803 +0000 UTC m=+71.551943484" watchObservedRunningTime="2026-03-13 00:46:38.522339991 +0000 UTC m=+71.553124672" Mar 13 00:46:41.286893 kubelet[2813]: E0313 00:46:41.286367 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:43.293589 containerd[1555]: time="2026-03-13T00:46:43.293496714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-tdkkp,Uid:296a1e16-2486-488b-be64-98f75fb175a3,Namespace:calico-system,Attempt:0,}" Mar 13 00:46:43.498496 systemd-networkd[1471]: caliee84807cc31: Link UP Mar 13 00:46:43.499622 systemd-networkd[1471]: caliee84807cc31: Gained carrier Mar 13 00:46:43.523595 containerd[1555]: 2026-03-13 00:46:43.365 [INFO][4529] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--tdkkp-eth0 goldmane-9f7667bb8- calico-system 296a1e16-2486-488b-be64-98f75fb175a3 902 0 2026-03-13 00:46:04 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-tdkkp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliee84807cc31 [] [] }} ContainerID="8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" Namespace="calico-system" Pod="goldmane-9f7667bb8-tdkkp" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--tdkkp-" Mar 13 00:46:43.523595 containerd[1555]: 2026-03-13 00:46:43.365 [INFO][4529] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" Namespace="calico-system" Pod="goldmane-9f7667bb8-tdkkp" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--tdkkp-eth0" Mar 13 00:46:43.523595 containerd[1555]: 2026-03-13 00:46:43.422 [INFO][4543] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" HandleID="k8s-pod-network.8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" Workload="localhost-k8s-goldmane--9f7667bb8--tdkkp-eth0" Mar 13 00:46:43.524113 containerd[1555]: 2026-03-13 00:46:43.434 [INFO][4543] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" HandleID="k8s-pod-network.8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" Workload="localhost-k8s-goldmane--9f7667bb8--tdkkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037df30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"goldmane-9f7667bb8-tdkkp", "timestamp":"2026-03-13 00:46:43.422889864 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005b06e0)} Mar 13 00:46:43.524113 containerd[1555]: 2026-03-13 00:46:43.434 [INFO][4543] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:46:43.524113 containerd[1555]: 2026-03-13 00:46:43.434 [INFO][4543] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:46:43.524113 containerd[1555]: 2026-03-13 00:46:43.434 [INFO][4543] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:46:43.524113 containerd[1555]: 2026-03-13 00:46:43.439 [INFO][4543] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" host="localhost" Mar 13 00:46:43.524113 containerd[1555]: 2026-03-13 00:46:43.447 [INFO][4543] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:46:43.524113 containerd[1555]: 2026-03-13 00:46:43.454 [INFO][4543] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:46:43.524113 containerd[1555]: 2026-03-13 00:46:43.457 [INFO][4543] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:46:43.524113 containerd[1555]: 2026-03-13 00:46:43.461 [INFO][4543] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:46:43.524113 containerd[1555]: 2026-03-13 00:46:43.461 [INFO][4543] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" host="localhost" Mar 13 00:46:43.524832 containerd[1555]: 2026-03-13 00:46:43.469 [INFO][4543] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312 Mar 13 00:46:43.524832 containerd[1555]: 2026-03-13 00:46:43.475 [INFO][4543] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" host="localhost" Mar 13 00:46:43.524832 containerd[1555]: 2026-03-13 00:46:43.482 [INFO][4543] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" host="localhost" Mar 13 00:46:43.524832 containerd[1555]: 2026-03-13 00:46:43.482 [INFO][4543] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" host="localhost" Mar 13 00:46:43.524832 containerd[1555]: 2026-03-13 00:46:43.482 [INFO][4543] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:46:43.524832 containerd[1555]: 2026-03-13 00:46:43.483 [INFO][4543] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" HandleID="k8s-pod-network.8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" Workload="localhost-k8s-goldmane--9f7667bb8--tdkkp-eth0" Mar 13 00:46:43.525040 containerd[1555]: 2026-03-13 00:46:43.486 [INFO][4529] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" Namespace="calico-system" Pod="goldmane-9f7667bb8-tdkkp" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--tdkkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--tdkkp-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"296a1e16-2486-488b-be64-98f75fb175a3", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-tdkkp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliee84807cc31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:46:43.525040 containerd[1555]: 2026-03-13 00:46:43.486 [INFO][4529] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" Namespace="calico-system" Pod="goldmane-9f7667bb8-tdkkp" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--tdkkp-eth0" Mar 13 00:46:43.525251 containerd[1555]: 2026-03-13 00:46:43.487 [INFO][4529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee84807cc31 ContainerID="8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" Namespace="calico-system" Pod="goldmane-9f7667bb8-tdkkp" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--tdkkp-eth0" Mar 13 00:46:43.525251 containerd[1555]: 2026-03-13 00:46:43.500 [INFO][4529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" Namespace="calico-system" Pod="goldmane-9f7667bb8-tdkkp" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--tdkkp-eth0" Mar 13 00:46:43.525324 containerd[1555]: 2026-03-13 00:46:43.502 [INFO][4529] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" Namespace="calico-system" Pod="goldmane-9f7667bb8-tdkkp" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--tdkkp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--tdkkp-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"296a1e16-2486-488b-be64-98f75fb175a3", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312", Pod:"goldmane-9f7667bb8-tdkkp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliee84807cc31", MAC:"62:f4:fa:cc:a4:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:46:43.525491 containerd[1555]: 2026-03-13 00:46:43.518 [INFO][4529] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" Namespace="calico-system" Pod="goldmane-9f7667bb8-tdkkp" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--tdkkp-eth0" Mar 13 00:46:43.591968 containerd[1555]: time="2026-03-13T00:46:43.591848250Z" level=info msg="connecting to shim 8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312" address="unix:///run/containerd/s/7cf8e4498edd251f0945bfada779584ec9bc3547a2accc10c31b49ec68d57e64" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:46:43.636038 systemd[1]: Started cri-containerd-8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312.scope - libcontainer container 8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312. 
Mar 13 00:46:43.655615 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:46:43.716825 containerd[1555]: time="2026-03-13T00:46:43.716790045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-tdkkp,Uid:296a1e16-2486-488b-be64-98f75fb175a3,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312\"" Mar 13 00:46:43.720832 containerd[1555]: time="2026-03-13T00:46:43.720479086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 13 00:46:44.299778 kubelet[2813]: E0313 00:46:44.299475 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:44.300381 containerd[1555]: time="2026-03-13T00:46:44.300233347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8blmg,Uid:4f6b8b07-c9e5-4a7b-9450-10601e17c08a,Namespace:kube-system,Attempt:0,}" Mar 13 00:46:44.587846 systemd-networkd[1471]: cali56c39f2b45c: Link UP Mar 13 00:46:44.593506 systemd-networkd[1471]: cali56c39f2b45c: Gained carrier Mar 13 00:46:44.629162 containerd[1555]: 2026-03-13 00:46:44.392 [INFO][4622] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--8blmg-eth0 coredns-7d764666f9- kube-system 4f6b8b07-c9e5-4a7b-9450-10601e17c08a 901 0 2026-03-13 00:45:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-8blmg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali56c39f2b45c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" Namespace="kube-system" Pod="coredns-7d764666f9-8blmg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--8blmg-" Mar 13 00:46:44.629162 containerd[1555]: 2026-03-13 00:46:44.392 [INFO][4622] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" Namespace="kube-system" Pod="coredns-7d764666f9-8blmg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--8blmg-eth0" Mar 13 00:46:44.629162 containerd[1555]: 2026-03-13 00:46:44.468 [INFO][4637] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" HandleID="k8s-pod-network.5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" Workload="localhost-k8s-coredns--7d764666f9--8blmg-eth0" Mar 13 00:46:44.629438 containerd[1555]: 2026-03-13 00:46:44.478 [INFO][4637] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" HandleID="k8s-pod-network.5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" Workload="localhost-k8s-coredns--7d764666f9--8blmg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000118130), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-8blmg", "timestamp":"2026-03-13 00:46:44.468405677 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00075e000)} Mar 13 00:46:44.629438 containerd[1555]: 2026-03-13 00:46:44.478 [INFO][4637] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:46:44.629438 containerd[1555]: 2026-03-13 00:46:44.478 [INFO][4637] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:46:44.629438 containerd[1555]: 2026-03-13 00:46:44.478 [INFO][4637] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:46:44.629438 containerd[1555]: 2026-03-13 00:46:44.486 [INFO][4637] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" host="localhost" Mar 13 00:46:44.629438 containerd[1555]: 2026-03-13 00:46:44.497 [INFO][4637] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:46:44.629438 containerd[1555]: 2026-03-13 00:46:44.507 [INFO][4637] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:46:44.629438 containerd[1555]: 2026-03-13 00:46:44.511 [INFO][4637] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:46:44.629438 containerd[1555]: 2026-03-13 00:46:44.516 [INFO][4637] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:46:44.629438 containerd[1555]: 2026-03-13 00:46:44.516 [INFO][4637] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" host="localhost" Mar 13 00:46:44.629833 containerd[1555]: 2026-03-13 00:46:44.520 [INFO][4637] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11 Mar 13 00:46:44.629833 containerd[1555]: 2026-03-13 00:46:44.526 [INFO][4637] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" host="localhost" Mar 13 00:46:44.629833 containerd[1555]: 2026-03-13 00:46:44.539 [INFO][4637] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" host="localhost" Mar 13 00:46:44.629833 containerd[1555]: 2026-03-13 00:46:44.540 [INFO][4637] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" host="localhost" Mar 13 00:46:44.629833 containerd[1555]: 2026-03-13 00:46:44.540 [INFO][4637] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
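[Editorial note] The kubelet dns.go:154 errors recurring through this span ("Nameserver limits exceeded ... 1.1.1.1 1.0.0.1 8.8.8.8") reflect the classic resolv.conf constraint: glibc honors at most three nameserver lines, so kubelet truncates and warns when the node's resolv.conf lists more. A sketch of that check against a hypothetical resolv.conf with a fourth server:

```python
# Sketch of the 3-nameserver limit behind kubelet's dns.go warning.
# The sample content is hypothetical; the first three entries match
# the "applied nameserver line" in the log above.
MAX_NS = 3

resolv_conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""

servers = [line.split()[1] for line in resolv_conf.splitlines()
           if line.startswith("nameserver")]
if len(servers) > MAX_NS:
    kept = servers[:MAX_NS]
    print(f"Nameserver limits exceeded; applied nameserver line: {' '.join(kept)}")
```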
Mar 13 00:46:44.629833 containerd[1555]: 2026-03-13 00:46:44.540 [INFO][4637] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" HandleID="k8s-pod-network.5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" Workload="localhost-k8s-coredns--7d764666f9--8blmg-eth0" Mar 13 00:46:44.629948 containerd[1555]: 2026-03-13 00:46:44.559 [INFO][4622] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" Namespace="kube-system" Pod="coredns-7d764666f9-8blmg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--8blmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--8blmg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"4f6b8b07-c9e5-4a7b-9450-10601e17c08a", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-8blmg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56c39f2b45c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:46:44.629948 containerd[1555]: 2026-03-13 00:46:44.560 [INFO][4622] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" Namespace="kube-system" Pod="coredns-7d764666f9-8blmg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--8blmg-eth0" Mar 13 00:46:44.629948 containerd[1555]: 2026-03-13 00:46:44.560 [INFO][4622] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56c39f2b45c ContainerID="5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" Namespace="kube-system" Pod="coredns-7d764666f9-8blmg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--8blmg-eth0" Mar 13 00:46:44.629948 containerd[1555]: 2026-03-13 00:46:44.596 
[INFO][4622] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" Namespace="kube-system" Pod="coredns-7d764666f9-8blmg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--8blmg-eth0" Mar 13 00:46:44.629948 containerd[1555]: 2026-03-13 00:46:44.600 [INFO][4622] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" Namespace="kube-system" Pod="coredns-7d764666f9-8blmg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--8blmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--8blmg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"4f6b8b07-c9e5-4a7b-9450-10601e17c08a", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11", Pod:"coredns-7d764666f9-8blmg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56c39f2b45c", MAC:"fa:d7:12:22:63:93", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:46:44.629948 containerd[1555]: 2026-03-13 00:46:44.624 [INFO][4622] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" Namespace="kube-system" Pod="coredns-7d764666f9-8blmg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--8blmg-eth0" Mar 13 00:46:44.675442 containerd[1555]: time="2026-03-13T00:46:44.675378671Z" level=info msg="connecting to shim 5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11" address="unix:///run/containerd/s/e1876db6bf45e0c818c27b7a7eef366e9b4f39e678b81c7e0140c7a560b7e96f" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:46:44.754176 systemd[1]: Started 
cri-containerd-5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11.scope - libcontainer container 5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11. Mar 13 00:46:44.780518 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:46:44.873052 containerd[1555]: time="2026-03-13T00:46:44.869536080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8blmg,Uid:4f6b8b07-c9e5-4a7b-9450-10601e17c08a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11\"" Mar 13 00:46:44.875779 kubelet[2813]: E0313 00:46:44.875573 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:44.886765 containerd[1555]: time="2026-03-13T00:46:44.886581524Z" level=info msg="CreateContainer within sandbox \"5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:46:44.930854 containerd[1555]: time="2026-03-13T00:46:44.930566859Z" level=info msg="Container 254087e4d62158163f2c8abba576c5d1fc86db6682540082b69ddac2db06b857: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:44.942568 containerd[1555]: time="2026-03-13T00:46:44.942463510Z" level=info msg="CreateContainer within sandbox \"5164184f8fc5b3959f78bf953fd74b0df7a3988f1130aa046528806bef339b11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"254087e4d62158163f2c8abba576c5d1fc86db6682540082b69ddac2db06b857\"" Mar 13 00:46:44.943967 containerd[1555]: time="2026-03-13T00:46:44.943609164Z" level=info msg="StartContainer for \"254087e4d62158163f2c8abba576c5d1fc86db6682540082b69ddac2db06b857\"" Mar 13 00:46:44.946834 containerd[1555]: time="2026-03-13T00:46:44.946638818Z" level=info msg="connecting to shim 254087e4d62158163f2c8abba576c5d1fc86db6682540082b69ddac2db06b857" address="unix:///run/containerd/s/e1876db6bf45e0c818c27b7a7eef366e9b4f39e678b81c7e0140c7a560b7e96f" protocol=ttrpc version=3 Mar 13 00:46:45.000956 systemd[1]: Started cri-containerd-254087e4d62158163f2c8abba576c5d1fc86db6682540082b69ddac2db06b857.scope - libcontainer container 254087e4d62158163f2c8abba576c5d1fc86db6682540082b69ddac2db06b857. Mar 13 00:46:45.099435 containerd[1555]: time="2026-03-13T00:46:45.099347717Z" level=info msg="StartContainer for \"254087e4d62158163f2c8abba576c5d1fc86db6682540082b69ddac2db06b857\" returns successfully" Mar 13 00:46:45.318314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1248673293.mount: Deactivated successfully. 
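[Editorial note] The coredns WorkloadEndpoint dump above prints its port list twice: once human-readable ({dns UDP 53 ...}) and once as Go struct literals with hex Port values (0x35, 0x23c1, 0x1f90, 0x1ff5). Decoding the hex confirms the two forms agree:

```python
# Decode the hex Port values from the coredns WorkloadEndpoint dump above.
ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1,
         "liveness-probe": 0x1f90, "readiness-probe": 0x1ff5}
for name, p in ports.items():
    print(f"{name}: {p}")
# dns: 53, dns-tcp: 53, metrics: 9153,
# liveness-probe: 8080, readiness-probe: 8181
```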
Mar 13 00:46:45.333071 containerd[1555]: time="2026-03-13T00:46:45.332968074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565bc9487f-2bzm6,Uid:d6a606df-05c4-4e52-9fcc-d5ce0cedadc2,Namespace:calico-system,Attempt:0,}" Mar 13 00:46:45.392792 systemd-networkd[1471]: caliee84807cc31: Gained IPv6LL Mar 13 00:46:45.568960 kubelet[2813]: E0313 00:46:45.565349 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:45.589248 kubelet[2813]: I0313 00:46:45.589188 2813 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-8blmg" podStartSLOduration=71.589164129 podStartE2EDuration="1m11.589164129s" podCreationTimestamp="2026-03-13 00:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:46:45.584965988 +0000 UTC m=+78.615750678" watchObservedRunningTime="2026-03-13 00:46:45.589164129 +0000 UTC m=+78.619948809" Mar 13 00:46:45.643847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount304386394.mount: Deactivated successfully. Mar 13 00:46:45.671562 systemd-networkd[1471]: cali3b3e2c8b0cb: Link UP Mar 13 00:46:45.671969 systemd-networkd[1471]: cali3b3e2c8b0cb: Gained carrier Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.502 [INFO][4746] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--565bc9487f--2bzm6-eth0 calico-apiserver-565bc9487f- calico-system d6a606df-05c4-4e52-9fcc-d5ce0cedadc2 903 0 2026-03-13 00:46:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:565bc9487f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-565bc9487f-2bzm6 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali3b3e2c8b0cb [] [] }} ContainerID="92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" Namespace="calico-system" Pod="calico-apiserver-565bc9487f-2bzm6" WorkloadEndpoint="localhost-k8s-calico--apiserver--565bc9487f--2bzm6-" Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.502 [INFO][4746] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" Namespace="calico-system" Pod="calico-apiserver-565bc9487f-2bzm6" WorkloadEndpoint="localhost-k8s-calico--apiserver--565bc9487f--2bzm6-eth0" Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.560 [INFO][4771] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" HandleID="k8s-pod-network.92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" Workload="localhost-k8s-calico--apiserver--565bc9487f--2bzm6-eth0" Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.575 [INFO][4771] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" HandleID="k8s-pod-network.92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" Workload="localhost-k8s-calico--apiserver--565bc9487f--2bzm6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00004fbb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-565bc9487f-2bzm6", "timestamp":"2026-03-13 00:46:45.560427875 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001906e0)} Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.575 [INFO][4771] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.575 [INFO][4771] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.575 [INFO][4771] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.581 [INFO][4771] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" host="localhost" Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.600 [INFO][4771] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.618 [INFO][4771] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.626 [INFO][4771] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.634 [INFO][4771] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.634 [INFO][4771] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" host="localhost" Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.641 [INFO][4771] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.650 [INFO][4771] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" host="localhost" Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.659 [INFO][4771] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" host="localhost" Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.659 [INFO][4771] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" host="localhost" Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.659 [INFO][4771] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:46:45.696511 containerd[1555]: 2026-03-13 00:46:45.659 [INFO][4771] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" HandleID="k8s-pod-network.92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" Workload="localhost-k8s-calico--apiserver--565bc9487f--2bzm6-eth0" Mar 13 00:46:45.697279 containerd[1555]: 2026-03-13 00:46:45.666 [INFO][4746] cni-plugin/k8s.go 418: Populated endpoint ContainerID="92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" Namespace="calico-system" Pod="calico-apiserver-565bc9487f-2bzm6" WorkloadEndpoint="localhost-k8s-calico--apiserver--565bc9487f--2bzm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--565bc9487f--2bzm6-eth0", GenerateName:"calico-apiserver-565bc9487f-", Namespace:"calico-system", SelfLink:"", UID:"d6a606df-05c4-4e52-9fcc-d5ce0cedadc2", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565bc9487f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-565bc9487f-2bzm6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3b3e2c8b0cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:46:45.697279 containerd[1555]: 2026-03-13 00:46:45.667 [INFO][4746] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" Namespace="calico-system" Pod="calico-apiserver-565bc9487f-2bzm6" WorkloadEndpoint="localhost-k8s-calico--apiserver--565bc9487f--2bzm6-eth0" Mar 13 00:46:45.697279 containerd[1555]: 2026-03-13 00:46:45.667 [INFO][4746] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b3e2c8b0cb ContainerID="92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" Namespace="calico-system" Pod="calico-apiserver-565bc9487f-2bzm6" WorkloadEndpoint="localhost-k8s-calico--apiserver--565bc9487f--2bzm6-eth0" Mar 13 00:46:45.697279 containerd[1555]: 2026-03-13 00:46:45.672 [INFO][4746] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" Namespace="calico-system" Pod="calico-apiserver-565bc9487f-2bzm6" WorkloadEndpoint="localhost-k8s-calico--apiserver--565bc9487f--2bzm6-eth0" Mar 13 00:46:45.697279 containerd[1555]: 2026-03-13 00:46:45.674 [INFO][4746] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" Namespace="calico-system" Pod="calico-apiserver-565bc9487f-2bzm6" WorkloadEndpoint="localhost-k8s-calico--apiserver--565bc9487f--2bzm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--565bc9487f--2bzm6-eth0", GenerateName:"calico-apiserver-565bc9487f-", Namespace:"calico-system", SelfLink:"", UID:"d6a606df-05c4-4e52-9fcc-d5ce0cedadc2", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565bc9487f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e", Pod:"calico-apiserver-565bc9487f-2bzm6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3b3e2c8b0cb", MAC:"3e:d4:b1:fb:ad:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:46:45.697279 containerd[1555]: 2026-03-13 00:46:45.689 [INFO][4746] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" Namespace="calico-system" Pod="calico-apiserver-565bc9487f-2bzm6" WorkloadEndpoint="localhost-k8s-calico--apiserver--565bc9487f--2bzm6-eth0" Mar 13 00:46:45.747933 containerd[1555]: time="2026-03-13T00:46:45.747877155Z" level=info msg="connecting to shim 92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e" address="unix:///run/containerd/s/c7d03b2a30cec2b0b46d82bc49fb11968b93e0216595814660a0f8f1a0c07c8d" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:46:45.830139 systemd[1]: Started cri-containerd-92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e.scope - libcontainer container 92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e. 
Mar 13 00:46:45.868858 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:46:45.952971 containerd[1555]: time="2026-03-13T00:46:45.952822260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565bc9487f-2bzm6,Uid:d6a606df-05c4-4e52-9fcc-d5ce0cedadc2,Namespace:calico-system,Attempt:0,} returns sandbox id \"92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e\"" Mar 13 00:46:46.223160 systemd-networkd[1471]: cali56c39f2b45c: Gained IPv6LL Mar 13 00:46:46.471052 containerd[1555]: time="2026-03-13T00:46:46.470829659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:46.473143 containerd[1555]: time="2026-03-13T00:46:46.473022058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 13 00:46:46.475363 containerd[1555]: time="2026-03-13T00:46:46.475149186Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:46.478822 containerd[1555]: time="2026-03-13T00:46:46.478566684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:46.479386 containerd[1555]: time="2026-03-13T00:46:46.479356750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.758786825s" Mar 13 00:46:46.480168 containerd[1555]: time="2026-03-13T00:46:46.479387246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 13 00:46:46.485078 containerd[1555]: time="2026-03-13T00:46:46.484858727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:46:46.493796 containerd[1555]: time="2026-03-13T00:46:46.493589586Z" level=info msg="CreateContainer within sandbox \"8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 13 00:46:46.508806 containerd[1555]: time="2026-03-13T00:46:46.508548693Z" level=info msg="Container 28ecf6ac86e89b94eab02ee4b466149cc1612e4e80ae5836915b5042b28e76fe: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:46.521365 containerd[1555]: time="2026-03-13T00:46:46.521246009Z" level=info msg="CreateContainer within sandbox \"8d8f6747114801abdc79887449f353732c5058fa772d486b3c144b713908d312\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"28ecf6ac86e89b94eab02ee4b466149cc1612e4e80ae5836915b5042b28e76fe\"" Mar 13 00:46:46.524041 containerd[1555]: time="2026-03-13T00:46:46.523962763Z" level=info msg="StartContainer for \"28ecf6ac86e89b94eab02ee4b466149cc1612e4e80ae5836915b5042b28e76fe\"" Mar 13 00:46:46.525149 containerd[1555]: time="2026-03-13T00:46:46.525109789Z" level=info msg="connecting to shim 28ecf6ac86e89b94eab02ee4b466149cc1612e4e80ae5836915b5042b28e76fe" 
address="unix:///run/containerd/s/7cf8e4498edd251f0945bfada779584ec9bc3547a2accc10c31b49ec68d57e64" protocol=ttrpc version=3 Mar 13 00:46:46.576174 systemd[1]: Started cri-containerd-28ecf6ac86e89b94eab02ee4b466149cc1612e4e80ae5836915b5042b28e76fe.scope - libcontainer container 28ecf6ac86e89b94eab02ee4b466149cc1612e4e80ae5836915b5042b28e76fe. Mar 13 00:46:46.578338 kubelet[2813]: E0313 00:46:46.578238 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:46.688085 containerd[1555]: time="2026-03-13T00:46:46.687804020Z" level=info msg="StartContainer for \"28ecf6ac86e89b94eab02ee4b466149cc1612e4e80ae5836915b5042b28e76fe\" returns successfully" Mar 13 00:46:46.863114 systemd-networkd[1471]: cali3b3e2c8b0cb: Gained IPv6LL Mar 13 00:46:47.586612 kubelet[2813]: E0313 00:46:47.586509 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:47.602556 kubelet[2813]: I0313 00:46:47.602435 2813 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-tdkkp" podStartSLOduration=40.837797271 podStartE2EDuration="43.602417834s" podCreationTimestamp="2026-03-13 00:46:04 +0000 UTC" firstStartedPulling="2026-03-13 00:46:43.719544355 +0000 UTC m=+76.750329035" lastFinishedPulling="2026-03-13 00:46:46.484164918 +0000 UTC m=+79.514949598" observedRunningTime="2026-03-13 00:46:47.60137026 +0000 UTC m=+80.632154950" watchObservedRunningTime="2026-03-13 00:46:47.602417834 +0000 UTC m=+80.633202514" Mar 13 00:46:48.291021 containerd[1555]: time="2026-03-13T00:46:48.290858438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7df5f8b75f-hdlkc,Uid:ed18a379-1928-4da8-b306-c7990ac89b7b,Namespace:calico-system,Attempt:0,}" Mar 13 00:46:48.294061 containerd[1555]: time="2026-03-13T00:46:48.294003736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565bc9487f-vs2f7,Uid:878d8c46-17fa-41ad-876f-3483a84be9ce,Namespace:calico-system,Attempt:0,}" Mar 13 00:46:48.296371 kubelet[2813]: E0313 00:46:48.296245 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:48.298333 containerd[1555]: time="2026-03-13T00:46:48.298120122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-9zfs9,Uid:9747dbd6-53d4-4d32-a6d8-4dc1ff7f2068,Namespace:kube-system,Attempt:0,}" Mar 13 00:46:48.616197 systemd-networkd[1471]: cali63a8cbadc09: Link UP Mar 13 00:46:48.618901 systemd-networkd[1471]: cali63a8cbadc09: Gained carrier Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.394 [INFO][4936] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7df5f8b75f--hdlkc-eth0 calico-kube-controllers-7df5f8b75f- calico-system ed18a379-1928-4da8-b306-c7990ac89b7b 896 0 2026-03-13 00:46:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7df5f8b75f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7df5f8b75f-hdlkc eth0 
calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali63a8cbadc09 [] [] }} ContainerID="3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" Namespace="calico-system" Pod="calico-kube-controllers-7df5f8b75f-hdlkc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7df5f8b75f--hdlkc-" Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.394 [INFO][4936] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" Namespace="calico-system" Pod="calico-kube-controllers-7df5f8b75f-hdlkc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7df5f8b75f--hdlkc-eth0" Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.487 [INFO][4979] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" HandleID="k8s-pod-network.3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" Workload="localhost-k8s-calico--kube--controllers--7df5f8b75f--hdlkc-eth0" Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.520 [INFO][4979] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" HandleID="k8s-pod-network.3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" Workload="localhost-k8s-calico--kube--controllers--7df5f8b75f--hdlkc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00059fd80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7df5f8b75f-hdlkc", "timestamp":"2026-03-13 00:46:48.487974075 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00073c580)} Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.520 [INFO][4979] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.520 [INFO][4979] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.520 [INFO][4979] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.532 [INFO][4979] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" host="localhost" Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.544 [INFO][4979] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.554 [INFO][4979] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.560 [INFO][4979] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.567 [INFO][4979] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.567 [INFO][4979] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" host="localhost" Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.573 [INFO][4979] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0 Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.587 [INFO][4979] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" host="localhost" Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.602 [INFO][4979] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" host="localhost" Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.602 [INFO][4979] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" host="localhost" Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.602 [INFO][4979] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:46:48.664796 containerd[1555]: 2026-03-13 00:46:48.602 [INFO][4979] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" HandleID="k8s-pod-network.3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" Workload="localhost-k8s-calico--kube--controllers--7df5f8b75f--hdlkc-eth0" Mar 13 00:46:48.666000 containerd[1555]: 2026-03-13 00:46:48.609 [INFO][4936] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" Namespace="calico-system" Pod="calico-kube-controllers-7df5f8b75f-hdlkc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7df5f8b75f--hdlkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7df5f8b75f--hdlkc-eth0", GenerateName:"calico-kube-controllers-7df5f8b75f-", Namespace:"calico-system", SelfLink:"", UID:"ed18a379-1928-4da8-b306-c7990ac89b7b", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 46, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7df5f8b75f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7df5f8b75f-hdlkc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali63a8cbadc09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:46:48.666000 containerd[1555]: 2026-03-13 00:46:48.609 [INFO][4936] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" Namespace="calico-system" Pod="calico-kube-controllers-7df5f8b75f-hdlkc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7df5f8b75f--hdlkc-eth0" Mar 13 00:46:48.666000 containerd[1555]: 2026-03-13 00:46:48.609 [INFO][4936] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63a8cbadc09 ContainerID="3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" Namespace="calico-system" Pod="calico-kube-controllers-7df5f8b75f-hdlkc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7df5f8b75f--hdlkc-eth0" Mar 13 00:46:48.666000 containerd[1555]: 2026-03-13 00:46:48.620 [INFO][4936] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" Namespace="calico-system" Pod="calico-kube-controllers-7df5f8b75f-hdlkc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7df5f8b75f--hdlkc-eth0" Mar 13 00:46:48.666000 containerd[1555]: 2026-03-13 00:46:48.622 [INFO][4936] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" Namespace="calico-system" Pod="calico-kube-controllers-7df5f8b75f-hdlkc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7df5f8b75f--hdlkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7df5f8b75f--hdlkc-eth0", GenerateName:"calico-kube-controllers-7df5f8b75f-", Namespace:"calico-system", SelfLink:"", UID:"ed18a379-1928-4da8-b306-c7990ac89b7b", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 46, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7df5f8b75f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0", Pod:"calico-kube-controllers-7df5f8b75f-hdlkc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali63a8cbadc09", MAC:"ca:eb:86:f0:43:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:46:48.666000 containerd[1555]: 2026-03-13 00:46:48.647 [INFO][4936] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" Namespace="calico-system" Pod="calico-kube-controllers-7df5f8b75f-hdlkc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7df5f8b75f--hdlkc-eth0" Mar 13 00:46:48.730551 systemd-networkd[1471]: calib2c7d126386: Link UP Mar 13 00:46:48.733315 systemd-networkd[1471]: calib2c7d126386: Gained carrier Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.421 [INFO][4951] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--9zfs9-eth0 coredns-7d764666f9- kube-system 9747dbd6-53d4-4d32-a6d8-4dc1ff7f2068 893 0 2026-03-13 00:45:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-9zfs9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib2c7d126386 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" Namespace="kube-system" Pod="coredns-7d764666f9-9zfs9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--9zfs9-" Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.421 [INFO][4951] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" Namespace="kube-system" Pod="coredns-7d764666f9-9zfs9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--9zfs9-eth0" Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.522 [INFO][4982] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" HandleID="k8s-pod-network.655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" Workload="localhost-k8s-coredns--7d764666f9--9zfs9-eth0" Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.536 [INFO][4982] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" HandleID="k8s-pod-network.655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" Workload="localhost-k8s-coredns--7d764666f9--9zfs9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004de140), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-9zfs9", "timestamp":"2026-03-13 00:46:48.522197889 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fa160)} Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.536 [INFO][4982] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.603 [INFO][4982] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.603 [INFO][4982] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.635 [INFO][4982] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" host="localhost" Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.649 [INFO][4982] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.658 [INFO][4982] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.663 [INFO][4982] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.672 [INFO][4982] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.672 [INFO][4982] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" host="localhost" Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.680 [INFO][4982] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9 Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.699 [INFO][4982] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" host="localhost" Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.714 
[INFO][4982] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" host="localhost" Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.714 [INFO][4982] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" host="localhost" Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.714 [INFO][4982] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:46:48.771624 containerd[1555]: 2026-03-13 00:46:48.714 [INFO][4982] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" HandleID="k8s-pod-network.655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" Workload="localhost-k8s-coredns--7d764666f9--9zfs9-eth0" Mar 13 00:46:48.773429 containerd[1555]: 2026-03-13 00:46:48.724 [INFO][4951] cni-plugin/k8s.go 418: Populated endpoint ContainerID="655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" Namespace="kube-system" Pod="coredns-7d764666f9-9zfs9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--9zfs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--9zfs9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9747dbd6-53d4-4d32-a6d8-4dc1ff7f2068", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-9zfs9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2c7d126386", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:46:48.773429 containerd[1555]: 2026-03-13 00:46:48.724 [INFO][4951] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] 
ContainerID="655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" Namespace="kube-system" Pod="coredns-7d764666f9-9zfs9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--9zfs9-eth0" Mar 13 00:46:48.773429 containerd[1555]: 2026-03-13 00:46:48.724 [INFO][4951] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2c7d126386 ContainerID="655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" Namespace="kube-system" Pod="coredns-7d764666f9-9zfs9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--9zfs9-eth0" Mar 13 00:46:48.773429 containerd[1555]: 2026-03-13 00:46:48.737 [INFO][4951] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" Namespace="kube-system" Pod="coredns-7d764666f9-9zfs9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--9zfs9-eth0" Mar 13 00:46:48.773429 containerd[1555]: 2026-03-13 00:46:48.739 [INFO][4951] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" Namespace="kube-system" Pod="coredns-7d764666f9-9zfs9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--9zfs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--9zfs9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9747dbd6-53d4-4d32-a6d8-4dc1ff7f2068", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9", Pod:"coredns-7d764666f9-9zfs9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2c7d126386", MAC:"b2:81:1a:c8:12:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:46:48.773429 containerd[1555]: 2026-03-13 00:46:48.755 [INFO][4951] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" Namespace="kube-system" Pod="coredns-7d764666f9-9zfs9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--9zfs9-eth0" Mar 13 00:46:48.808068 containerd[1555]: time="2026-03-13T00:46:48.807930859Z" level=info msg="connecting to shim 3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0" address="unix:///run/containerd/s/1930a802f55eaae27a92c05527db3861220750559f7f5fcd1406115eb7ac7ab0" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:46:48.828951 systemd-networkd[1471]: caliddd560b7770: Link UP Mar 13 00:46:48.837426 systemd-networkd[1471]: caliddd560b7770: Gained carrier Mar 13 00:46:48.860223 containerd[1555]: time="2026-03-13T00:46:48.859998309Z" level=info msg="connecting to shim 655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9" address="unix:///run/containerd/s/7fc3531397d28b9b5d2f95898c8b73522b3133908e74afb3388d26043d2d2aeb" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.454 [INFO][4948] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--565bc9487f--vs2f7-eth0 calico-apiserver-565bc9487f- calico-system 878d8c46-17fa-41ad-876f-3483a84be9ce 904 0 2026-03-13 00:46:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:565bc9487f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-565bc9487f-vs2f7 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] caliddd560b7770 [] [] }} ContainerID="e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" Namespace="calico-system" Pod="calico-apiserver-565bc9487f-vs2f7" WorkloadEndpoint="localhost-k8s-calico--apiserver--565bc9487f--vs2f7-" Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.454 [INFO][4948] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" Namespace="calico-system" Pod="calico-apiserver-565bc9487f-vs2f7" WorkloadEndpoint="localhost-k8s-calico--apiserver--565bc9487f--vs2f7-eth0" Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.565 [INFO][4994] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" HandleID="k8s-pod-network.e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" Workload="localhost-k8s-calico--apiserver--565bc9487f--vs2f7-eth0" Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.578 [INFO][4994] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" HandleID="k8s-pod-network.e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" Workload="localhost-k8s-calico--apiserver--565bc9487f--vs2f7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f690), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-565bc9487f-vs2f7", "timestamp":"2026-03-13 00:46:48.565101157 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", 
Namespace:(*v1.Namespace)(0xc0002069a0)} Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.579 [INFO][4994] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.714 [INFO][4994] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.714 [INFO][4994] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.734 [INFO][4994] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" host="localhost" Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.750 [INFO][4994] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.768 [INFO][4994] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.776 [INFO][4994] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.783 [INFO][4994] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.784 [INFO][4994] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" host="localhost" Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.792 [INFO][4994] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.800 [INFO][4994] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" host="localhost" Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.813 [INFO][4994] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" host="localhost" Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.813 [INFO][4994] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" host="localhost" Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.813 [INFO][4994] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
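Worth noting across the three concurrent CNI ADDs: the host-wide IPAM lock serializes them. Handler [4979] holds the lock from 00:46:48.520 until 00:46:48.602, [4982] asked at 00:46:48.536 but acquires only at 00:46:48.603, and [4994] asked at 00:46:48.579 and acquires at 00:46:48.714, which is why the three pods receive consecutive addresses (.134, .135, .136) from the same 192.168.88.128/26 block.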
Mar 13 00:46:48.899868 containerd[1555]: 2026-03-13 00:46:48.813 [INFO][4994] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" HandleID="k8s-pod-network.e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" Workload="localhost-k8s-calico--apiserver--565bc9487f--vs2f7-eth0" Mar 13 00:46:48.900950 containerd[1555]: 2026-03-13 00:46:48.819 [INFO][4948] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" Namespace="calico-system" Pod="calico-apiserver-565bc9487f-vs2f7" WorkloadEndpoint="localhost-k8s-calico--apiserver--565bc9487f--vs2f7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--565bc9487f--vs2f7-eth0", GenerateName:"calico-apiserver-565bc9487f-", Namespace:"calico-system", SelfLink:"", UID:"878d8c46-17fa-41ad-876f-3483a84be9ce", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565bc9487f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-565bc9487f-vs2f7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliddd560b7770", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:46:48.900950 containerd[1555]: 2026-03-13 00:46:48.819 [INFO][4948] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" Namespace="calico-system" Pod="calico-apiserver-565bc9487f-vs2f7" WorkloadEndpoint="localhost-k8s-calico--apiserver--565bc9487f--vs2f7-eth0" Mar 13 00:46:48.900950 containerd[1555]: 2026-03-13 00:46:48.819 [INFO][4948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddd560b7770 ContainerID="e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" Namespace="calico-system" Pod="calico-apiserver-565bc9487f-vs2f7" WorkloadEndpoint="localhost-k8s-calico--apiserver--565bc9487f--vs2f7-eth0" Mar 13 00:46:48.900950 containerd[1555]: 2026-03-13 00:46:48.849 [INFO][4948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" Namespace="calico-system" Pod="calico-apiserver-565bc9487f-vs2f7" WorkloadEndpoint="localhost-k8s-calico--apiserver--565bc9487f--vs2f7-eth0" Mar 13 00:46:48.900950 containerd[1555]: 2026-03-13 00:46:48.853 [INFO][4948] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" Namespace="calico-system" Pod="calico-apiserver-565bc9487f-vs2f7" WorkloadEndpoint="localhost-k8s-calico--apiserver--565bc9487f--vs2f7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--565bc9487f--vs2f7-eth0", GenerateName:"calico-apiserver-565bc9487f-", Namespace:"calico-system", SelfLink:"", UID:"878d8c46-17fa-41ad-876f-3483a84be9ce", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565bc9487f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d", Pod:"calico-apiserver-565bc9487f-vs2f7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliddd560b7770", MAC:"a6:e4:b8:6d:0b:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:46:48.900950 containerd[1555]: 2026-03-13 00:46:48.877 [INFO][4948] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" Namespace="calico-system" Pod="calico-apiserver-565bc9487f-vs2f7" WorkloadEndpoint="localhost-k8s-calico--apiserver--565bc9487f--vs2f7-eth0" Mar 13 00:46:48.961005 systemd[1]: Started cri-containerd-655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9.scope - libcontainer container 655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9. Mar 13 00:46:48.978070 systemd[1]: Started cri-containerd-3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0.scope - libcontainer container 3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0. Mar 13 00:46:48.996930 containerd[1555]: time="2026-03-13T00:46:48.996541381Z" level=info msg="connecting to shim e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d" address="unix:///run/containerd/s/db92f6318d8115222d7efa993ebec0a0da8bbd9ee3264674514da057c7542af5" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:46:49.030989 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:46:49.051639 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:46:49.071128 systemd[1]: Started cri-containerd-e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d.scope - libcontainer container e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d. 
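Each "connecting to shim" entry above names a per-sandbox unix socket under /run/containerd/s/, and the cri-containerd-*.scope units show the corresponding shims being placed into their own cgroups. A minimal sketch of opening such an endpoint with the Go standard library; containerd itself layers the ttrpc protocol over this socket (protocol=ttrpc version=3 in the log), which the sketch omits:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path copied from the log; it exists only on the logging host.
	const addr = "/run/containerd/s/1930a802f55eaae27a92c05527db3861220750559f7f5fcd1406115eb7ac7ab0"

	conn, err := net.DialTimeout("unix", addr, 2*time.Second)
	if err != nil {
		fmt.Println("shim not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim at", addr)
}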
Mar 13 00:46:49.135058 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:46:49.142187 containerd[1555]: time="2026-03-13T00:46:49.141924355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-9zfs9,Uid:9747dbd6-53d4-4d32-a6d8-4dc1ff7f2068,Namespace:kube-system,Attempt:0,} returns sandbox id \"655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9\"" Mar 13 00:46:49.150818 kubelet[2813]: E0313 00:46:49.150506 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:49.167239 containerd[1555]: time="2026-03-13T00:46:49.166922113Z" level=info msg="CreateContainer within sandbox \"655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:46:49.173224 containerd[1555]: time="2026-03-13T00:46:49.172796182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7df5f8b75f-hdlkc,Uid:ed18a379-1928-4da8-b306-c7990ac89b7b,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0\"" Mar 13 00:46:49.195766 containerd[1555]: time="2026-03-13T00:46:49.195504568Z" level=info msg="Container 4e2439997a9b6cf4812481bc6db87f006dbc9e0302d1e06c4a97d4175db4dddf: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:49.196575 containerd[1555]: time="2026-03-13T00:46:49.196459478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565bc9487f-vs2f7,Uid:878d8c46-17fa-41ad-876f-3483a84be9ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d\"" Mar 13 00:46:49.209063 containerd[1555]: time="2026-03-13T00:46:49.208949571Z" level=info msg="CreateContainer within sandbox \"655f0e3a3344e91efe2a955d19973e0b746f30c70f0d86d4c4058a42130406a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4e2439997a9b6cf4812481bc6db87f006dbc9e0302d1e06c4a97d4175db4dddf\"" Mar 13 00:46:49.210270 containerd[1555]: time="2026-03-13T00:46:49.210179939Z" level=info msg="StartContainer for \"4e2439997a9b6cf4812481bc6db87f006dbc9e0302d1e06c4a97d4175db4dddf\"" Mar 13 00:46:49.211560 containerd[1555]: time="2026-03-13T00:46:49.211299035Z" level=info msg="connecting to shim 4e2439997a9b6cf4812481bc6db87f006dbc9e0302d1e06c4a97d4175db4dddf" address="unix:///run/containerd/s/7fc3531397d28b9b5d2f95898c8b73522b3133908e74afb3388d26043d2d2aeb" protocol=ttrpc version=3 Mar 13 00:46:49.261121 systemd[1]: Started cri-containerd-4e2439997a9b6cf4812481bc6db87f006dbc9e0302d1e06c4a97d4175db4dddf.scope - libcontainer container 4e2439997a9b6cf4812481bc6db87f006dbc9e0302d1e06c4a97d4175db4dddf. 
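The sandbox and container entries above trace the CRI call order kubelet drives for every pod: RunPodSandbox returns a sandbox id, CreateContainer places a named container inside that sandbox and returns a container id, and StartContainer runs it. A stub sketch of that sequencing; the types below are illustrative placeholders, not the actual CRI protobuf surface:

package main

import "fmt"

// fakeRuntime stands in for a CRI runtime; ids are placeholders.
type fakeRuntime struct{ n int }

func (r *fakeRuntime) RunPodSandbox(pod string) string {
	r.n++
	return fmt.Sprintf("sandbox-%d", r.n)
}

func (r *fakeRuntime) CreateContainer(sandboxID, name string) string {
	return sandboxID + "/" + name
}

func (r *fakeRuntime) StartContainer(id string) {
	fmt.Println("StartContainer for", id, "returns successfully")
}

func main() {
	rt := &fakeRuntime{}
	sb := rt.RunPodSandbox("coredns-7d764666f9-9zfs9")
	id := rt.CreateContainer(sb, "coredns")
	rt.StartContainer(id)
}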
Mar 13 00:46:49.347023 containerd[1555]: time="2026-03-13T00:46:49.346972599Z" level=info msg="StartContainer for \"4e2439997a9b6cf4812481bc6db87f006dbc9e0302d1e06c4a97d4175db4dddf\" returns successfully" Mar 13 00:46:49.602074 kubelet[2813]: E0313 00:46:49.601943 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:49.622380 kubelet[2813]: I0313 00:46:49.621787 2813 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-9zfs9" podStartSLOduration=75.621770298 podStartE2EDuration="1m15.621770298s" podCreationTimestamp="2026-03-13 00:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:46:49.618352166 +0000 UTC m=+82.649136876" watchObservedRunningTime="2026-03-13 00:46:49.621770298 +0000 UTC m=+82.652554978" Mar 13 00:46:50.063146 systemd-networkd[1471]: cali63a8cbadc09: Gained IPv6LL Mar 13 00:46:50.256253 systemd-networkd[1471]: calib2c7d126386: Gained IPv6LL Mar 13 00:46:50.511137 systemd-networkd[1471]: caliddd560b7770: Gained IPv6LL Mar 13 00:46:50.620507 kubelet[2813]: E0313 00:46:50.620479 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:51.190461 containerd[1555]: time="2026-03-13T00:46:51.190323156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:51.192994 containerd[1555]: time="2026-03-13T00:46:51.192537231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 13 00:46:51.195440 containerd[1555]: time="2026-03-13T00:46:51.194444766Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:51.199036 containerd[1555]: time="2026-03-13T00:46:51.198827031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:51.199908 containerd[1555]: time="2026-03-13T00:46:51.199824263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 4.714882661s" Mar 13 00:46:51.199908 containerd[1555]: time="2026-03-13T00:46:51.199853348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 13 00:46:51.201574 containerd[1555]: time="2026-03-13T00:46:51.201475973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 13 00:46:51.206559 containerd[1555]: time="2026-03-13T00:46:51.206466786Z" level=info msg="CreateContainer within sandbox \"92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:46:51.221969 containerd[1555]: time="2026-03-13T00:46:51.221782405Z" level=info msg="Container 6b8779cac3aafefb77e837e73024fd5df8df2c7f0a66962ef335ab33ab18e628: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:51.249763 containerd[1555]: time="2026-03-13T00:46:51.248985791Z" level=info msg="CreateContainer within sandbox \"92eb82e5f0cb8914548d680eb5e29517e2b15e3bf45616f7e6f75352f11a1e2e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6b8779cac3aafefb77e837e73024fd5df8df2c7f0a66962ef335ab33ab18e628\"" Mar 13 00:46:51.250775 containerd[1555]: time="2026-03-13T00:46:51.250561192Z" level=info msg="StartContainer for \"6b8779cac3aafefb77e837e73024fd5df8df2c7f0a66962ef335ab33ab18e628\"" Mar 13 00:46:51.251852 containerd[1555]: time="2026-03-13T00:46:51.251826440Z" level=info msg="connecting to shim 6b8779cac3aafefb77e837e73024fd5df8df2c7f0a66962ef335ab33ab18e628" address="unix:///run/containerd/s/c7d03b2a30cec2b0b46d82bc49fb11968b93e0216595814660a0f8f1a0c07c8d" protocol=ttrpc version=3 Mar 13 00:46:51.286895 kubelet[2813]: E0313 00:46:51.286554 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:51.288500 kubelet[2813]: E0313 00:46:51.288139 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:51.315134 systemd[1]: Started cri-containerd-6b8779cac3aafefb77e837e73024fd5df8df2c7f0a66962ef335ab33ab18e628.scope - libcontainer container 6b8779cac3aafefb77e837e73024fd5df8df2c7f0a66962ef335ab33ab18e628. 
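The pod_startup_latency_tracker entries in this stretch report two durations per pod: podStartE2EDuration (observed running time minus pod creation time) and podStartSLOduration, which additionally excludes the image-pull window. The numbers reconcile exactly when the pull window is taken from the monotonic m=+ offsets; a quick check against the calico-apiserver-565bc9487f-2bzm6 entry that follows:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the log entry for calico-apiserver-565bc9487f-2bzm6.
	e2e := 47645020864 * time.Nanosecond       // podStartE2EDuration=47.645020864s
	pullStart := 78985668861 * time.Nanosecond // firstStartedPulling m=+78.985668861
	pullEnd := 84231983139 * time.Nanosecond   // lastFinishedPulling m=+84.231983139

	slo := e2e - (pullEnd - pullStart) // exclude time spent pulling images
	fmt.Println(slo)                   // 42.398706586s, the logged podStartSLOduration
}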
Mar 13 00:46:51.415037 containerd[1555]: time="2026-03-13T00:46:51.414955219Z" level=info msg="StartContainer for \"6b8779cac3aafefb77e837e73024fd5df8df2c7f0a66962ef335ab33ab18e628\" returns successfully" Mar 13 00:46:51.627251 kubelet[2813]: E0313 00:46:51.627119 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:51.645210 kubelet[2813]: I0313 00:46:51.645036 2813 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-565bc9487f-2bzm6" podStartSLOduration=42.398706586 podStartE2EDuration="47.645020864s" podCreationTimestamp="2026-03-13 00:46:04 +0000 UTC" firstStartedPulling="2026-03-13 00:46:45.954884182 +0000 UTC m=+78.985668861" lastFinishedPulling="2026-03-13 00:46:51.201198459 +0000 UTC m=+84.231983139" observedRunningTime="2026-03-13 00:46:51.644339885 +0000 UTC m=+84.675124564" watchObservedRunningTime="2026-03-13 00:46:51.645020864 +0000 UTC m=+84.675805554" Mar 13 00:46:54.146451 containerd[1555]: time="2026-03-13T00:46:54.146319023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:54.147966 containerd[1555]: time="2026-03-13T00:46:54.147856838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 13 00:46:54.150085 containerd[1555]: time="2026-03-13T00:46:54.149997588Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:54.154076 containerd[1555]: time="2026-03-13T00:46:54.153946122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:54.168835 containerd[1555]: time="2026-03-13T00:46:54.168497209Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.966922331s" Mar 13 00:46:54.168835 containerd[1555]: time="2026-03-13T00:46:54.168620249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 13 00:46:54.170837 containerd[1555]: time="2026-03-13T00:46:54.170542119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:46:54.211773 containerd[1555]: time="2026-03-13T00:46:54.211525476Z" level=info msg="CreateContainer within sandbox \"3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 13 00:46:54.227896 containerd[1555]: time="2026-03-13T00:46:54.226386399Z" level=info msg="Container 571c2ec03621461fbd1005d502041a14cc5119297b1737630254f9169569dbc8: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:54.240389 containerd[1555]: time="2026-03-13T00:46:54.240173144Z" level=info msg="CreateContainer within sandbox 
\"3b321bf604379315af077906982affe4971d9cb0fd40d6b4e623d559b0a7d4a0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"571c2ec03621461fbd1005d502041a14cc5119297b1737630254f9169569dbc8\"" Mar 13 00:46:54.242998 containerd[1555]: time="2026-03-13T00:46:54.241968149Z" level=info msg="StartContainer for \"571c2ec03621461fbd1005d502041a14cc5119297b1737630254f9169569dbc8\"" Mar 13 00:46:54.243972 containerd[1555]: time="2026-03-13T00:46:54.243635046Z" level=info msg="connecting to shim 571c2ec03621461fbd1005d502041a14cc5119297b1737630254f9169569dbc8" address="unix:///run/containerd/s/1930a802f55eaae27a92c05527db3861220750559f7f5fcd1406115eb7ac7ab0" protocol=ttrpc version=3 Mar 13 00:46:54.308141 containerd[1555]: time="2026-03-13T00:46:54.307906407Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:54.310280 containerd[1555]: time="2026-03-13T00:46:54.310243063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 13 00:46:54.319798 containerd[1555]: time="2026-03-13T00:46:54.319588428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 148.962212ms" Mar 13 00:46:54.319798 containerd[1555]: time="2026-03-13T00:46:54.319618835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 13 00:46:54.338554 systemd[1]: Started cri-containerd-571c2ec03621461fbd1005d502041a14cc5119297b1737630254f9169569dbc8.scope - libcontainer container 571c2ec03621461fbd1005d502041a14cc5119297b1737630254f9169569dbc8. 
Mar 13 00:46:54.342034 containerd[1555]: time="2026-03-13T00:46:54.341844606Z" level=info msg="CreateContainer within sandbox \"e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:46:54.362622 containerd[1555]: time="2026-03-13T00:46:54.362321276Z" level=info msg="Container 0b2e32f5b4f11f6b031135a14f68bd3069837000b57d2d695dc965e15d91e04d: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:54.381515 containerd[1555]: time="2026-03-13T00:46:54.381330440Z" level=info msg="CreateContainer within sandbox \"e773ae6c3a35b918a1f8e437d374108cc447634865d5c57fd37f1308ff6bcd7d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0b2e32f5b4f11f6b031135a14f68bd3069837000b57d2d695dc965e15d91e04d\"" Mar 13 00:46:54.383052 containerd[1555]: time="2026-03-13T00:46:54.382909453Z" level=info msg="StartContainer for \"0b2e32f5b4f11f6b031135a14f68bd3069837000b57d2d695dc965e15d91e04d\"" Mar 13 00:46:54.390165 containerd[1555]: time="2026-03-13T00:46:54.389903256Z" level=info msg="connecting to shim 0b2e32f5b4f11f6b031135a14f68bd3069837000b57d2d695dc965e15d91e04d" address="unix:///run/containerd/s/db92f6318d8115222d7efa993ebec0a0da8bbd9ee3264674514da057c7542af5" protocol=ttrpc version=3 Mar 13 00:46:54.440874 systemd[1]: Started cri-containerd-0b2e32f5b4f11f6b031135a14f68bd3069837000b57d2d695dc965e15d91e04d.scope - libcontainer container 0b2e32f5b4f11f6b031135a14f68bd3069837000b57d2d695dc965e15d91e04d. Mar 13 00:46:54.507586 containerd[1555]: time="2026-03-13T00:46:54.507334196Z" level=info msg="StartContainer for \"571c2ec03621461fbd1005d502041a14cc5119297b1737630254f9169569dbc8\" returns successfully" Mar 13 00:46:54.628086 containerd[1555]: time="2026-03-13T00:46:54.628011026Z" level=info msg="StartContainer for \"0b2e32f5b4f11f6b031135a14f68bd3069837000b57d2d695dc965e15d91e04d\" returns successfully" Mar 13 00:46:54.694641 kubelet[2813]: I0313 00:46:54.691938 2813 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-565bc9487f-vs2f7" podStartSLOduration=45.562596554 podStartE2EDuration="50.691921004s" podCreationTimestamp="2026-03-13 00:46:04 +0000 UTC" firstStartedPulling="2026-03-13 00:46:49.20226237 +0000 UTC m=+82.233047050" lastFinishedPulling="2026-03-13 00:46:54.33158682 +0000 UTC m=+87.362371500" observedRunningTime="2026-03-13 00:46:54.691107427 +0000 UTC m=+87.721892108" watchObservedRunningTime="2026-03-13 00:46:54.691921004 +0000 UTC m=+87.722705694" Mar 13 00:46:54.726817 kubelet[2813]: I0313 00:46:54.725832 2813 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7df5f8b75f-hdlkc" podStartSLOduration=44.731623347 podStartE2EDuration="49.725819111s" podCreationTimestamp="2026-03-13 00:46:05 +0000 UTC" firstStartedPulling="2026-03-13 00:46:49.176012072 +0000 UTC m=+82.206796751" lastFinishedPulling="2026-03-13 00:46:54.170207834 +0000 UTC m=+87.200992515" observedRunningTime="2026-03-13 00:46:54.724089777 +0000 UTC m=+87.754874477" watchObservedRunningTime="2026-03-13 00:46:54.725819111 +0000 UTC m=+87.756603791" Mar 13 00:47:01.291130 kubelet[2813]: E0313 00:47:01.290980 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:28.289219 kubelet[2813]: E0313 00:47:28.287554 2813 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:40.154046 systemd[1]: Started sshd@9-10.0.0.89:22-10.0.0.1:54418.service - OpenSSH per-connection server daemon (10.0.0.1:54418). Mar 13 00:47:40.467986 sshd[5670]: Accepted publickey for core from 10.0.0.1 port 54418 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:47:40.474415 sshd-session[5670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:47:40.513858 systemd-logind[1540]: New session 10 of user core. Mar 13 00:47:40.515970 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 13 00:47:41.139478 sshd[5681]: Connection closed by 10.0.0.1 port 54418 Mar 13 00:47:41.141090 sshd-session[5670]: pam_unix(sshd:session): session closed for user core Mar 13 00:47:41.152137 systemd[1]: sshd@9-10.0.0.89:22-10.0.0.1:54418.service: Deactivated successfully. Mar 13 00:47:41.157650 systemd[1]: session-10.scope: Deactivated successfully. Mar 13 00:47:41.160075 systemd-logind[1540]: Session 10 logged out. Waiting for processes to exit. Mar 13 00:47:41.164943 systemd-logind[1540]: Removed session 10. Mar 13 00:47:46.162420 systemd[1]: Started sshd@10-10.0.0.89:22-10.0.0.1:54426.service - OpenSSH per-connection server daemon (10.0.0.1:54426). Mar 13 00:47:46.265188 sshd[5703]: Accepted publickey for core from 10.0.0.1 port 54426 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:47:46.267878 sshd-session[5703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:47:46.280093 systemd-logind[1540]: New session 11 of user core. Mar 13 00:47:46.288284 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 13 00:47:46.484553 sshd[5706]: Connection closed by 10.0.0.1 port 54426 Mar 13 00:47:46.485249 sshd-session[5703]: pam_unix(sshd:session): session closed for user core Mar 13 00:47:46.493029 systemd-logind[1540]: Session 11 logged out. Waiting for processes to exit. Mar 13 00:47:46.493331 systemd[1]: sshd@10-10.0.0.89:22-10.0.0.1:54426.service: Deactivated successfully. Mar 13 00:47:46.497968 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 00:47:46.502393 systemd-logind[1540]: Removed session 11. Mar 13 00:47:51.507498 systemd[1]: Started sshd@11-10.0.0.89:22-10.0.0.1:52448.service - OpenSSH per-connection server daemon (10.0.0.1:52448). Mar 13 00:47:51.591537 sshd[5745]: Accepted publickey for core from 10.0.0.1 port 52448 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:47:51.593562 sshd-session[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:47:51.605465 systemd-logind[1540]: New session 12 of user core. Mar 13 00:47:51.621221 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 13 00:47:51.813185 sshd[5748]: Connection closed by 10.0.0.1 port 52448 Mar 13 00:47:51.813968 sshd-session[5745]: pam_unix(sshd:session): session closed for user core Mar 13 00:47:51.823396 systemd[1]: sshd@11-10.0.0.89:22-10.0.0.1:52448.service: Deactivated successfully. Mar 13 00:47:51.828586 systemd[1]: session-12.scope: Deactivated successfully. Mar 13 00:47:51.831377 systemd-logind[1540]: Session 12 logged out. Waiting for processes to exit. Mar 13 00:47:51.834164 systemd-logind[1540]: Removed session 12. 
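From here to the end of the capture, the journal settles into a fixed SSH lifecycle: publickey accepted, pam_unix opens the session, systemd-logind registers session N, the client disconnects, and session-N.scope is deactivated and removed. A sketch of pairing those open/close events out of such a journal with the standard library (the two embedded lines are copied from the log above):

package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

var (
	opened = regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
	closed = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	journal := `Mar 13 00:47:40.513858 systemd-logind[1540]: New session 10 of user core.
Mar 13 00:47:41.164943 systemd-logind[1540]: Removed session 10.`

	open := map[string]string{} // session id -> user
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		if m := opened.FindStringSubmatch(line); m != nil {
			open[m[1]] = m[2]
		} else if m := closed.FindStringSubmatch(line); m != nil {
			fmt.Printf("session %s (user %s) closed\n", m[1], open[m[1]])
			delete(open, m[1])
		}
	}
}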
Mar 13 00:47:55.286336 kubelet[2813]: E0313 00:47:55.286238 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:56.830488 systemd[1]: Started sshd@12-10.0.0.89:22-10.0.0.1:52450.service - OpenSSH per-connection server daemon (10.0.0.1:52450). Mar 13 00:47:56.948444 sshd[5784]: Accepted publickey for core from 10.0.0.1 port 52450 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:47:56.951351 sshd-session[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:47:56.960531 systemd-logind[1540]: New session 13 of user core. Mar 13 00:47:56.970216 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 13 00:47:57.181392 sshd[5797]: Connection closed by 10.0.0.1 port 52450 Mar 13 00:47:57.182128 sshd-session[5784]: pam_unix(sshd:session): session closed for user core Mar 13 00:47:57.188593 systemd[1]: sshd@12-10.0.0.89:22-10.0.0.1:52450.service: Deactivated successfully. Mar 13 00:47:57.192976 systemd[1]: session-13.scope: Deactivated successfully. Mar 13 00:47:57.197428 systemd-logind[1540]: Session 13 logged out. Waiting for processes to exit. Mar 13 00:47:57.201420 systemd-logind[1540]: Removed session 13. Mar 13 00:47:58.286590 kubelet[2813]: E0313 00:47:58.286362 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:48:02.201347 systemd[1]: Started sshd@13-10.0.0.89:22-10.0.0.1:46420.service - OpenSSH per-connection server daemon (10.0.0.1:46420). Mar 13 00:48:02.286534 sshd[5811]: Accepted publickey for core from 10.0.0.1 port 46420 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:48:02.288154 kubelet[2813]: E0313 00:48:02.288127 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:48:02.289007 sshd-session[5811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:48:02.299488 systemd-logind[1540]: New session 14 of user core. Mar 13 00:48:02.312283 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 13 00:48:02.536145 sshd[5814]: Connection closed by 10.0.0.1 port 46420 Mar 13 00:48:02.536559 sshd-session[5811]: pam_unix(sshd:session): session closed for user core Mar 13 00:48:02.543299 systemd[1]: sshd@13-10.0.0.89:22-10.0.0.1:46420.service: Deactivated successfully. Mar 13 00:48:02.547631 systemd[1]: session-14.scope: Deactivated successfully. Mar 13 00:48:02.554120 systemd-logind[1540]: Session 14 logged out. Waiting for processes to exit. Mar 13 00:48:02.559195 systemd-logind[1540]: Removed session 14. 
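The kubelet dns.go warning that recurs through this stretch fires because the node's resolv.conf lists more nameservers than a resolver will honor; kubelet keeps the first three and reports the trimmed line. A sketch of that trim, assuming the host list seen in the log plus one hypothetical fourth entry:

package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // classic resolv.conf limit kubelet enforces

// trim keeps at most maxNameservers entries and says whether any were dropped.
func trim(ns []string) ([]string, bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// First three are from the log; the fourth is a hypothetical extra
	// that would trigger the omission.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	if kept, truncated := trim(host); truncated {
		fmt.Printf("Nameserver limits were exceeded, the applied nameserver line is: %s\n",
			strings.Join(kept, " "))
	}
}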
Mar 13 00:48:04.288038 kubelet[2813]: E0313 00:48:04.286441 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:48:04.288038 kubelet[2813]: E0313 00:48:04.287409 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:48:07.288460 kubelet[2813]: E0313 00:48:07.288235 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:48:07.558183 systemd[1]: Started sshd@14-10.0.0.89:22-10.0.0.1:46424.service - OpenSSH per-connection server daemon (10.0.0.1:46424). Mar 13 00:48:07.674119 sshd[5859]: Accepted publickey for core from 10.0.0.1 port 46424 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:48:07.676391 sshd-session[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:48:07.688424 systemd-logind[1540]: New session 15 of user core. Mar 13 00:48:07.705113 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 13 00:48:07.900440 sshd[5862]: Connection closed by 10.0.0.1 port 46424 Mar 13 00:48:07.902040 sshd-session[5859]: pam_unix(sshd:session): session closed for user core Mar 13 00:48:07.911505 systemd[1]: sshd@14-10.0.0.89:22-10.0.0.1:46424.service: Deactivated successfully. Mar 13 00:48:07.916958 systemd[1]: session-15.scope: Deactivated successfully. Mar 13 00:48:07.919023 systemd-logind[1540]: Session 15 logged out. Waiting for processes to exit. Mar 13 00:48:07.928644 systemd-logind[1540]: Removed session 15. Mar 13 00:48:12.922162 systemd[1]: Started sshd@15-10.0.0.89:22-10.0.0.1:37446.service - OpenSSH per-connection server daemon (10.0.0.1:37446). Mar 13 00:48:13.018468 sshd[5916]: Accepted publickey for core from 10.0.0.1 port 37446 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:48:13.021189 sshd-session[5916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:48:13.034204 systemd-logind[1540]: New session 16 of user core. Mar 13 00:48:13.044281 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 13 00:48:13.293053 sshd[5919]: Connection closed by 10.0.0.1 port 37446 Mar 13 00:48:13.294180 sshd-session[5916]: pam_unix(sshd:session): session closed for user core Mar 13 00:48:13.303368 systemd[1]: sshd@15-10.0.0.89:22-10.0.0.1:37446.service: Deactivated successfully. Mar 13 00:48:13.308585 systemd[1]: session-16.scope: Deactivated successfully. Mar 13 00:48:13.314290 systemd-logind[1540]: Session 16 logged out. Waiting for processes to exit. Mar 13 00:48:13.317105 systemd-logind[1540]: Removed session 16. Mar 13 00:48:18.312625 systemd[1]: Started sshd@16-10.0.0.89:22-10.0.0.1:37456.service - OpenSSH per-connection server daemon (10.0.0.1:37456). Mar 13 00:48:18.413322 sshd[5934]: Accepted publickey for core from 10.0.0.1 port 37456 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:48:18.415507 sshd-session[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:48:18.428478 systemd-logind[1540]: New session 17 of user core. Mar 13 00:48:18.438458 systemd[1]: Started session-17.scope - Session 17 of User core. 
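As the unit descriptions themselves say, each accepted connection runs under its own per-connection service instance named for the connection tuple (for example sshd@14-10.0.0.89:22-10.0.0.1:46424.service), so the "Deactivated successfully" lines refer to that single connection's unit, not to the SSH listener as a whole.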
Mar 13 00:48:18.637939 sshd[5937]: Connection closed by 10.0.0.1 port 37456 Mar 13 00:48:18.640354 sshd-session[5934]: pam_unix(sshd:session): session closed for user core Mar 13 00:48:18.647124 systemd[1]: sshd@16-10.0.0.89:22-10.0.0.1:37456.service: Deactivated successfully. Mar 13 00:48:18.651075 systemd[1]: session-17.scope: Deactivated successfully. Mar 13 00:48:18.658465 systemd-logind[1540]: Session 17 logged out. Waiting for processes to exit. Mar 13 00:48:18.662508 systemd-logind[1540]: Removed session 17. Mar 13 00:48:23.657006 systemd[1]: Started sshd@17-10.0.0.89:22-10.0.0.1:42502.service - OpenSSH per-connection server daemon (10.0.0.1:42502). Mar 13 00:48:23.734063 sshd[5974]: Accepted publickey for core from 10.0.0.1 port 42502 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:48:23.736413 sshd-session[5974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:48:23.746178 systemd-logind[1540]: New session 18 of user core. Mar 13 00:48:23.761111 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 13 00:48:23.894857 sshd[5977]: Connection closed by 10.0.0.1 port 42502 Mar 13 00:48:23.895494 sshd-session[5974]: pam_unix(sshd:session): session closed for user core Mar 13 00:48:23.901645 systemd[1]: sshd@17-10.0.0.89:22-10.0.0.1:42502.service: Deactivated successfully. Mar 13 00:48:23.904946 systemd[1]: session-18.scope: Deactivated successfully. Mar 13 00:48:23.906847 systemd-logind[1540]: Session 18 logged out. Waiting for processes to exit. Mar 13 00:48:23.909553 systemd-logind[1540]: Removed session 18. Mar 13 00:48:28.911348 systemd[1]: Started sshd@18-10.0.0.89:22-10.0.0.1:42508.service - OpenSSH per-connection server daemon (10.0.0.1:42508). Mar 13 00:48:29.002570 sshd[6016]: Accepted publickey for core from 10.0.0.1 port 42508 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:48:29.005829 sshd-session[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:48:29.015967 systemd-logind[1540]: New session 19 of user core. Mar 13 00:48:29.035245 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 13 00:48:29.189761 sshd[6019]: Connection closed by 10.0.0.1 port 42508 Mar 13 00:48:29.190185 sshd-session[6016]: pam_unix(sshd:session): session closed for user core Mar 13 00:48:29.197113 systemd[1]: sshd@18-10.0.0.89:22-10.0.0.1:42508.service: Deactivated successfully. Mar 13 00:48:29.201532 systemd[1]: session-19.scope: Deactivated successfully. Mar 13 00:48:29.205497 systemd-logind[1540]: Session 19 logged out. Waiting for processes to exit. Mar 13 00:48:29.210084 systemd-logind[1540]: Removed session 19. Mar 13 00:48:34.206526 systemd[1]: Started sshd@19-10.0.0.89:22-10.0.0.1:34792.service - OpenSSH per-connection server daemon (10.0.0.1:34792). Mar 13 00:48:34.300469 sshd[6083]: Accepted publickey for core from 10.0.0.1 port 34792 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:48:34.303208 sshd-session[6083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:48:34.311985 systemd-logind[1540]: New session 20 of user core. Mar 13 00:48:34.321151 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 13 00:48:34.510174 sshd[6086]: Connection closed by 10.0.0.1 port 34792 Mar 13 00:48:34.511409 sshd-session[6083]: pam_unix(sshd:session): session closed for user core Mar 13 00:48:34.522579 systemd-logind[1540]: Session 20 logged out. Waiting for processes to exit.
Mar 13 00:48:34.523071 systemd[1]: sshd@19-10.0.0.89:22-10.0.0.1:34792.service: Deactivated successfully. Mar 13 00:48:34.530094 systemd[1]: session-20.scope: Deactivated successfully. Mar 13 00:48:34.534314 systemd-logind[1540]: Removed session 20. Mar 13 00:48:39.526270 systemd[1]: Started sshd@20-10.0.0.89:22-10.0.0.1:34806.service - OpenSSH per-connection server daemon (10.0.0.1:34806). Mar 13 00:48:39.630809 sshd[6128]: Accepted publickey for core from 10.0.0.1 port 34806 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:48:39.633184 sshd-session[6128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:48:39.643263 systemd-logind[1540]: New session 21 of user core. Mar 13 00:48:39.653262 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 13 00:48:39.856978 sshd[6131]: Connection closed by 10.0.0.1 port 34806 Mar 13 00:48:39.857246 sshd-session[6128]: pam_unix(sshd:session): session closed for user core Mar 13 00:48:39.865075 systemd[1]: sshd@20-10.0.0.89:22-10.0.0.1:34806.service: Deactivated successfully. Mar 13 00:48:39.869147 systemd[1]: session-21.scope: Deactivated successfully. Mar 13 00:48:39.872405 systemd-logind[1540]: Session 21 logged out. Waiting for processes to exit. Mar 13 00:48:39.876566 systemd-logind[1540]: Removed session 21. Mar 13 00:48:42.286516 kubelet[2813]: E0313 00:48:42.286381 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:48:44.887890 systemd[1]: Started sshd@21-10.0.0.89:22-10.0.0.1:59224.service - OpenSSH per-connection server daemon (10.0.0.1:59224). Mar 13 00:48:44.984830 sshd[6146]: Accepted publickey for core from 10.0.0.1 port 59224 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:48:44.988013 sshd-session[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:48:44.997016 systemd-logind[1540]: New session 22 of user core. Mar 13 00:48:45.012062 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 13 00:48:45.140413 sshd[6149]: Connection closed by 10.0.0.1 port 59224 Mar 13 00:48:45.140853 sshd-session[6146]: pam_unix(sshd:session): session closed for user core Mar 13 00:48:45.145889 systemd[1]: sshd@21-10.0.0.89:22-10.0.0.1:59224.service: Deactivated successfully. Mar 13 00:48:45.149254 systemd[1]: session-22.scope: Deactivated successfully. Mar 13 00:48:45.150867 systemd-logind[1540]: Session 22 logged out. Waiting for processes to exit. Mar 13 00:48:45.154773 systemd-logind[1540]: Removed session 22. Mar 13 00:48:50.156359 systemd[1]: Started sshd@22-10.0.0.89:22-10.0.0.1:48542.service - OpenSSH per-connection server daemon (10.0.0.1:48542). Mar 13 00:48:50.227573 sshd[6190]: Accepted publickey for core from 10.0.0.1 port 48542 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:48:50.230419 sshd-session[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:48:50.241386 systemd-logind[1540]: New session 23 of user core. Mar 13 00:48:50.255111 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 13 00:48:50.402369 sshd[6193]: Connection closed by 10.0.0.1 port 48542 Mar 13 00:48:50.402813 sshd-session[6190]: pam_unix(sshd:session): session closed for user core Mar 13 00:48:50.409560 systemd[1]: sshd@22-10.0.0.89:22-10.0.0.1:48542.service: Deactivated successfully. Mar 13 00:48:50.413229 systemd[1]: session-23.scope: Deactivated successfully. Mar 13 00:48:50.415775 systemd-logind[1540]: Session 23 logged out. Waiting for processes to exit. Mar 13 00:48:50.418822 systemd-logind[1540]: Removed session 23. Mar 13 00:48:55.427442 systemd[1]: Started sshd@23-10.0.0.89:22-10.0.0.1:48552.service - OpenSSH per-connection server daemon (10.0.0.1:48552). Mar 13 00:48:55.511254 sshd[6230]: Accepted publickey for core from 10.0.0.1 port 48552 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:48:55.513061 sshd-session[6230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:48:55.521933 systemd-logind[1540]: New session 24 of user core. Mar 13 00:48:55.532065 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 13 00:48:55.694819 sshd[6233]: Connection closed by 10.0.0.1 port 48552 Mar 13 00:48:55.695267 sshd-session[6230]: pam_unix(sshd:session): session closed for user core Mar 13 00:48:55.701618 systemd[1]: sshd@23-10.0.0.89:22-10.0.0.1:48552.service: Deactivated successfully. Mar 13 00:48:55.704927 systemd[1]: session-24.scope: Deactivated successfully. Mar 13 00:48:55.707270 systemd-logind[1540]: Session 24 logged out. Waiting for processes to exit. Mar 13 00:48:55.709446 systemd-logind[1540]: Removed session 24. Mar 13 00:49:00.714229 systemd[1]: Started sshd@24-10.0.0.89:22-10.0.0.1:40690.service - OpenSSH per-connection server daemon (10.0.0.1:40690). Mar 13 00:49:00.807894 sshd[6247]: Accepted publickey for core from 10.0.0.1 port 40690 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:49:00.810310 sshd-session[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:00.821582 systemd-logind[1540]: New session 25 of user core. Mar 13 00:49:00.827300 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 13 00:49:00.993642 sshd[6250]: Connection closed by 10.0.0.1 port 40690 Mar 13 00:49:00.994217 sshd-session[6247]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:01.001574 systemd[1]: sshd@24-10.0.0.89:22-10.0.0.1:40690.service: Deactivated successfully. Mar 13 00:49:01.006444 systemd[1]: session-25.scope: Deactivated successfully. Mar 13 00:49:01.008931 systemd-logind[1540]: Session 25 logged out. Waiting for processes to exit. Mar 13 00:49:01.012403 systemd-logind[1540]: Removed session 25. Mar 13 00:49:01.285288 kubelet[2813]: E0313 00:49:01.285047 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:49:06.015542 systemd[1]: Started sshd@25-10.0.0.89:22-10.0.0.1:40696.service - OpenSSH per-connection server daemon (10.0.0.1:40696). Mar 13 00:49:06.138796 sshd[6292]: Accepted publickey for core from 10.0.0.1 port 40696 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:49:06.141246 sshd-session[6292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:06.151362 systemd-logind[1540]: New session 26 of user core. Mar 13 00:49:06.161058 systemd[1]: Started session-26.scope - Session 26 of User core. 
Mar 13 00:49:06.323080 sshd[6296]: Connection closed by 10.0.0.1 port 40696 Mar 13 00:49:06.323874 sshd-session[6292]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:06.331521 systemd[1]: sshd@25-10.0.0.89:22-10.0.0.1:40696.service: Deactivated successfully. Mar 13 00:49:06.334587 systemd[1]: session-26.scope: Deactivated successfully. Mar 13 00:49:06.337157 systemd-logind[1540]: Session 26 logged out. Waiting for processes to exit. Mar 13 00:49:06.340357 systemd-logind[1540]: Removed session 26. Mar 13 00:49:11.338439 systemd[1]: Started sshd@26-10.0.0.89:22-10.0.0.1:52172.service - OpenSSH per-connection server daemon (10.0.0.1:52172). Mar 13 00:49:11.419916 sshd[6310]: Accepted publickey for core from 10.0.0.1 port 52172 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:49:11.421823 sshd-session[6310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:11.431310 systemd-logind[1540]: New session 27 of user core. Mar 13 00:49:11.440523 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 13 00:49:11.591585 sshd[6314]: Connection closed by 10.0.0.1 port 52172 Mar 13 00:49:11.592463 sshd-session[6310]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:11.599264 systemd[1]: sshd@26-10.0.0.89:22-10.0.0.1:52172.service: Deactivated successfully. Mar 13 00:49:11.602631 systemd[1]: session-27.scope: Deactivated successfully. Mar 13 00:49:11.605955 systemd-logind[1540]: Session 27 logged out. Waiting for processes to exit. Mar 13 00:49:11.609211 systemd-logind[1540]: Removed session 27. Mar 13 00:49:15.285843 kubelet[2813]: E0313 00:49:15.285208 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:49:16.613245 systemd[1]: Started sshd@27-10.0.0.89:22-10.0.0.1:52184.service - OpenSSH per-connection server daemon (10.0.0.1:52184). Mar 13 00:49:16.713989 sshd[6328]: Accepted publickey for core from 10.0.0.1 port 52184 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:49:16.716319 sshd-session[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:16.725507 systemd-logind[1540]: New session 28 of user core. Mar 13 00:49:16.737409 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 13 00:49:16.937517 sshd[6331]: Connection closed by 10.0.0.1 port 52184 Mar 13 00:49:16.937907 sshd-session[6328]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:16.945261 systemd[1]: sshd@27-10.0.0.89:22-10.0.0.1:52184.service: Deactivated successfully. Mar 13 00:49:16.949590 systemd[1]: session-28.scope: Deactivated successfully. Mar 13 00:49:16.951303 systemd-logind[1540]: Session 28 logged out. Waiting for processes to exit. Mar 13 00:49:16.953992 systemd-logind[1540]: Removed session 28. Mar 13 00:49:21.971441 systemd[1]: Started sshd@28-10.0.0.89:22-10.0.0.1:36080.service - OpenSSH per-connection server daemon (10.0.0.1:36080). Mar 13 00:49:22.067834 sshd[6384]: Accepted publickey for core from 10.0.0.1 port 36080 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:49:22.072223 sshd-session[6384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:22.101285 systemd-logind[1540]: New session 29 of user core. Mar 13 00:49:22.115132 systemd[1]: Started session-29.scope - Session 29 of User core. 
Mar 13 00:49:22.295385 sshd[6387]: Connection closed by 10.0.0.1 port 36080 Mar 13 00:49:22.296492 sshd-session[6384]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:22.303179 systemd[1]: sshd@28-10.0.0.89:22-10.0.0.1:36080.service: Deactivated successfully. Mar 13 00:49:22.307553 systemd[1]: session-29.scope: Deactivated successfully. Mar 13 00:49:22.309573 systemd-logind[1540]: Session 29 logged out. Waiting for processes to exit. Mar 13 00:49:22.315372 systemd-logind[1540]: Removed session 29. Mar 13 00:49:24.285772 kubelet[2813]: E0313 00:49:24.285595 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:49:25.300876 kubelet[2813]: E0313 00:49:25.299861 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:49:27.312295 systemd[1]: Started sshd@29-10.0.0.89:22-10.0.0.1:36094.service - OpenSSH per-connection server daemon (10.0.0.1:36094). Mar 13 00:49:27.415002 sshd[6426]: Accepted publickey for core from 10.0.0.1 port 36094 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:49:27.418180 sshd-session[6426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:27.429400 systemd-logind[1540]: New session 30 of user core. Mar 13 00:49:27.438236 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 13 00:49:27.620411 sshd[6431]: Connection closed by 10.0.0.1 port 36094 Mar 13 00:49:27.620939 sshd-session[6426]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:27.635623 systemd[1]: sshd@29-10.0.0.89:22-10.0.0.1:36094.service: Deactivated successfully. Mar 13 00:49:27.640784 systemd[1]: session-30.scope: Deactivated successfully. Mar 13 00:49:27.643307 systemd-logind[1540]: Session 30 logged out. Waiting for processes to exit. Mar 13 00:49:27.648974 systemd[1]: Started sshd@30-10.0.0.89:22-10.0.0.1:36098.service - OpenSSH per-connection server daemon (10.0.0.1:36098). Mar 13 00:49:27.650992 systemd-logind[1540]: Removed session 30. Mar 13 00:49:27.759332 sshd[6446]: Accepted publickey for core from 10.0.0.1 port 36098 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:49:27.764043 sshd-session[6446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:27.800584 systemd-logind[1540]: New session 31 of user core. Mar 13 00:49:27.822178 systemd[1]: Started session-31.scope - Session 31 of User core. Mar 13 00:49:28.179500 sshd[6449]: Connection closed by 10.0.0.1 port 36098 Mar 13 00:49:28.180338 sshd-session[6446]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:28.194031 systemd[1]: sshd@30-10.0.0.89:22-10.0.0.1:36098.service: Deactivated successfully. Mar 13 00:49:28.199789 systemd[1]: session-31.scope: Deactivated successfully. Mar 13 00:49:28.203248 systemd-logind[1540]: Session 31 logged out. Waiting for processes to exit. Mar 13 00:49:28.206444 systemd[1]: Started sshd@31-10.0.0.89:22-10.0.0.1:36114.service - OpenSSH per-connection server daemon (10.0.0.1:36114). Mar 13 00:49:28.211000 systemd-logind[1540]: Removed session 31. 
Mar 13 00:49:28.287058 kubelet[2813]: E0313 00:49:28.286813 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:49:28.313323 sshd[6468]: Accepted publickey for core from 10.0.0.1 port 36114 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:49:28.316456 sshd-session[6468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:28.328815 systemd-logind[1540]: New session 32 of user core. Mar 13 00:49:28.338910 systemd[1]: Started session-32.scope - Session 32 of User core. Mar 13 00:49:28.563930 sshd[6471]: Connection closed by 10.0.0.1 port 36114 Mar 13 00:49:28.562916 sshd-session[6468]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:28.574259 systemd-logind[1540]: Session 32 logged out. Waiting for processes to exit. Mar 13 00:49:28.580044 systemd[1]: sshd@31-10.0.0.89:22-10.0.0.1:36114.service: Deactivated successfully. Mar 13 00:49:28.596471 systemd[1]: session-32.scope: Deactivated successfully. Mar 13 00:49:28.604057 systemd-logind[1540]: Removed session 32. Mar 13 00:49:33.596058 systemd[1]: Started sshd@32-10.0.0.89:22-10.0.0.1:47262.service - OpenSSH per-connection server daemon (10.0.0.1:47262). Mar 13 00:49:33.676372 sshd[6552]: Accepted publickey for core from 10.0.0.1 port 47262 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:49:33.679657 sshd-session[6552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:33.699790 systemd-logind[1540]: New session 33 of user core. Mar 13 00:49:33.712256 systemd[1]: Started session-33.scope - Session 33 of User core. Mar 13 00:49:33.874250 sshd[6555]: Connection closed by 10.0.0.1 port 47262 Mar 13 00:49:33.873973 sshd-session[6552]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:33.887361 systemd[1]: sshd@32-10.0.0.89:22-10.0.0.1:47262.service: Deactivated successfully. Mar 13 00:49:33.891276 systemd[1]: session-33.scope: Deactivated successfully. Mar 13 00:49:33.896374 systemd-logind[1540]: Session 33 logged out. Waiting for processes to exit. Mar 13 00:49:33.898973 systemd-logind[1540]: Removed session 33. Mar 13 00:49:34.290386 kubelet[2813]: E0313 00:49:34.289595 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:49:38.892581 systemd[1]: Started sshd@33-10.0.0.89:22-10.0.0.1:47270.service - OpenSSH per-connection server daemon (10.0.0.1:47270). Mar 13 00:49:38.981271 sshd[6597]: Accepted publickey for core from 10.0.0.1 port 47270 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:49:38.984216 sshd-session[6597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:38.993321 systemd-logind[1540]: New session 34 of user core. Mar 13 00:49:39.002264 systemd[1]: Started session-34.scope - Session 34 of User core. Mar 13 00:49:39.145505 sshd[6600]: Connection closed by 10.0.0.1 port 47270 Mar 13 00:49:39.146486 sshd-session[6597]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:39.152617 systemd[1]: sshd@33-10.0.0.89:22-10.0.0.1:47270.service: Deactivated successfully. Mar 13 00:49:39.156300 systemd[1]: session-34.scope: Deactivated successfully. Mar 13 00:49:39.161964 systemd-logind[1540]: Session 34 logged out. Waiting for processes to exit. 
Mar 13 00:49:39.164551 systemd-logind[1540]: Removed session 34. Mar 13 00:49:44.186970 systemd[1]: Started sshd@34-10.0.0.89:22-10.0.0.1:35870.service - OpenSSH per-connection server daemon (10.0.0.1:35870). Mar 13 00:49:44.296472 sshd[6613]: Accepted publickey for core from 10.0.0.1 port 35870 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:49:44.299622 sshd-session[6613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:44.309900 systemd-logind[1540]: New session 35 of user core. Mar 13 00:49:44.326304 systemd[1]: Started session-35.scope - Session 35 of User core. Mar 13 00:49:44.629781 sshd[6616]: Connection closed by 10.0.0.1 port 35870 Mar 13 00:49:44.631335 sshd-session[6613]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:44.639444 systemd[1]: sshd@34-10.0.0.89:22-10.0.0.1:35870.service: Deactivated successfully. Mar 13 00:49:44.645105 systemd[1]: session-35.scope: Deactivated successfully. Mar 13 00:49:44.651916 systemd-logind[1540]: Session 35 logged out. Waiting for processes to exit. Mar 13 00:49:44.656057 systemd-logind[1540]: Removed session 35. Mar 13 00:49:45.288536 kubelet[2813]: E0313 00:49:45.288127 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:49:49.649850 systemd[1]: Started sshd@35-10.0.0.89:22-10.0.0.1:35878.service - OpenSSH per-connection server daemon (10.0.0.1:35878). Mar 13 00:49:49.757117 sshd[6684]: Accepted publickey for core from 10.0.0.1 port 35878 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:49:49.758858 sshd-session[6684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:49.767155 systemd-logind[1540]: New session 36 of user core. Mar 13 00:49:49.773094 systemd[1]: Started session-36.scope - Session 36 of User core. Mar 13 00:49:49.963457 sshd[6693]: Connection closed by 10.0.0.1 port 35878 Mar 13 00:49:49.963791 sshd-session[6684]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:49.970114 systemd[1]: sshd@35-10.0.0.89:22-10.0.0.1:35878.service: Deactivated successfully. Mar 13 00:49:49.973029 systemd[1]: session-36.scope: Deactivated successfully. Mar 13 00:49:49.974898 systemd-logind[1540]: Session 36 logged out. Waiting for processes to exit. Mar 13 00:49:49.977980 systemd-logind[1540]: Removed session 36. Mar 13 00:49:54.983605 systemd[1]: Started sshd@36-10.0.0.89:22-10.0.0.1:38392.service - OpenSSH per-connection server daemon (10.0.0.1:38392). Mar 13 00:49:55.069610 sshd[6728]: Accepted publickey for core from 10.0.0.1 port 38392 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:49:55.071601 sshd-session[6728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:55.081331 systemd-logind[1540]: New session 37 of user core. Mar 13 00:49:55.089313 systemd[1]: Started session-37.scope - Session 37 of User core. Mar 13 00:49:55.266827 sshd[6731]: Connection closed by 10.0.0.1 port 38392 Mar 13 00:49:55.266897 sshd-session[6728]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:55.284608 systemd[1]: sshd@36-10.0.0.89:22-10.0.0.1:38392.service: Deactivated successfully. Mar 13 00:49:55.288296 systemd[1]: session-37.scope: Deactivated successfully. Mar 13 00:49:55.290940 systemd-logind[1540]: Session 37 logged out. Waiting for processes to exit. 
Mar 13 00:49:55.295063 systemd[1]: Started sshd@37-10.0.0.89:22-10.0.0.1:38396.service - OpenSSH per-connection server daemon (10.0.0.1:38396). Mar 13 00:49:55.299382 systemd-logind[1540]: Removed session 37. Mar 13 00:49:55.382284 sshd[6744]: Accepted publickey for core from 10.0.0.1 port 38396 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:49:55.385136 sshd-session[6744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:55.399446 systemd-logind[1540]: New session 38 of user core. Mar 13 00:49:55.408279 systemd[1]: Started session-38.scope - Session 38 of User core. Mar 13 00:49:56.009786 sshd[6747]: Connection closed by 10.0.0.1 port 38396 Mar 13 00:49:56.010320 sshd-session[6744]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:56.021936 systemd[1]: sshd@37-10.0.0.89:22-10.0.0.1:38396.service: Deactivated successfully. Mar 13 00:49:56.025081 systemd[1]: session-38.scope: Deactivated successfully. Mar 13 00:49:56.027543 systemd-logind[1540]: Session 38 logged out. Waiting for processes to exit. Mar 13 00:49:56.033131 systemd[1]: Started sshd@38-10.0.0.89:22-10.0.0.1:38404.service - OpenSSH per-connection server daemon (10.0.0.1:38404). Mar 13 00:49:56.036402 systemd-logind[1540]: Removed session 38. Mar 13 00:49:56.226611 sshd[6759]: Accepted publickey for core from 10.0.0.1 port 38404 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:49:56.231553 sshd-session[6759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:56.242781 systemd-logind[1540]: New session 39 of user core. Mar 13 00:49:56.258109 systemd[1]: Started session-39.scope - Session 39 of User core. Mar 13 00:49:57.149867 sshd[6762]: Connection closed by 10.0.0.1 port 38404 Mar 13 00:49:57.150079 sshd-session[6759]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:57.166326 systemd[1]: sshd@38-10.0.0.89:22-10.0.0.1:38404.service: Deactivated successfully. Mar 13 00:49:57.176604 systemd[1]: session-39.scope: Deactivated successfully. Mar 13 00:49:57.183614 systemd-logind[1540]: Session 39 logged out. Waiting for processes to exit. Mar 13 00:49:57.187967 systemd[1]: Started sshd@39-10.0.0.89:22-10.0.0.1:38412.service - OpenSSH per-connection server daemon (10.0.0.1:38412). Mar 13 00:49:57.204409 systemd-logind[1540]: Removed session 39. Mar 13 00:49:57.302585 sshd[6788]: Accepted publickey for core from 10.0.0.1 port 38412 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:49:57.304882 sshd-session[6788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:57.315449 systemd-logind[1540]: New session 40 of user core. Mar 13 00:49:57.326915 systemd[1]: Started session-40.scope - Session 40 of User core. Mar 13 00:49:57.773944 sshd[6792]: Connection closed by 10.0.0.1 port 38412 Mar 13 00:49:57.775189 sshd-session[6788]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:57.795504 systemd[1]: sshd@39-10.0.0.89:22-10.0.0.1:38412.service: Deactivated successfully. Mar 13 00:49:57.803180 systemd[1]: session-40.scope: Deactivated successfully. Mar 13 00:49:57.805436 systemd-logind[1540]: Session 40 logged out. Waiting for processes to exit. Mar 13 00:49:57.817046 systemd[1]: Started sshd@40-10.0.0.89:22-10.0.0.1:38420.service - OpenSSH per-connection server daemon (10.0.0.1:38420). Mar 13 00:49:57.821954 systemd-logind[1540]: Removed session 40. 
Mar 13 00:49:57.899525 sshd[6803]: Accepted publickey for core from 10.0.0.1 port 38420 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:49:57.902110 sshd-session[6803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:57.911879 systemd-logind[1540]: New session 41 of user core. Mar 13 00:49:57.918928 systemd[1]: Started session-41.scope - Session 41 of User core. Mar 13 00:49:58.105652 sshd[6806]: Connection closed by 10.0.0.1 port 38420 Mar 13 00:49:58.106326 sshd-session[6803]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:58.114014 systemd[1]: sshd@40-10.0.0.89:22-10.0.0.1:38420.service: Deactivated successfully. Mar 13 00:49:58.117541 systemd[1]: session-41.scope: Deactivated successfully. Mar 13 00:49:58.120537 systemd-logind[1540]: Session 41 logged out. Waiting for processes to exit. Mar 13 00:49:58.124044 systemd-logind[1540]: Removed session 41. Mar 13 00:50:03.126072 systemd[1]: Started sshd@41-10.0.0.89:22-10.0.0.1:60144.service - OpenSSH per-connection server daemon (10.0.0.1:60144). Mar 13 00:50:03.205094 sshd[6821]: Accepted publickey for core from 10.0.0.1 port 60144 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:50:03.207980 sshd-session[6821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:03.218537 systemd-logind[1540]: New session 42 of user core. Mar 13 00:50:03.230979 systemd[1]: Started session-42.scope - Session 42 of User core. Mar 13 00:50:03.404322 sshd[6824]: Connection closed by 10.0.0.1 port 60144 Mar 13 00:50:03.404806 sshd-session[6821]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:03.409884 systemd[1]: sshd@41-10.0.0.89:22-10.0.0.1:60144.service: Deactivated successfully. Mar 13 00:50:03.414476 systemd[1]: session-42.scope: Deactivated successfully. Mar 13 00:50:03.419454 systemd-logind[1540]: Session 42 logged out. Waiting for processes to exit. Mar 13 00:50:03.422404 systemd-logind[1540]: Removed session 42. Mar 13 00:50:08.420611 systemd[1]: Started sshd@42-10.0.0.89:22-10.0.0.1:60152.service - OpenSSH per-connection server daemon (10.0.0.1:60152). Mar 13 00:50:08.504110 sshd[6865]: Accepted publickey for core from 10.0.0.1 port 60152 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:50:08.507119 sshd-session[6865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:08.517375 systemd-logind[1540]: New session 43 of user core. Mar 13 00:50:08.529080 systemd[1]: Started session-43.scope - Session 43 of User core. Mar 13 00:50:08.691848 sshd[6868]: Connection closed by 10.0.0.1 port 60152 Mar 13 00:50:08.693166 sshd-session[6865]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:08.700022 systemd[1]: sshd@42-10.0.0.89:22-10.0.0.1:60152.service: Deactivated successfully. Mar 13 00:50:08.703187 systemd[1]: session-43.scope: Deactivated successfully. Mar 13 00:50:08.706030 systemd-logind[1540]: Session 43 logged out. Waiting for processes to exit. Mar 13 00:50:08.709882 systemd-logind[1540]: Removed session 43. Mar 13 00:50:12.285551 kubelet[2813]: E0313 00:50:12.285203 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:50:13.711079 systemd[1]: Started sshd@43-10.0.0.89:22-10.0.0.1:49484.service - OpenSSH per-connection server daemon (10.0.0.1:49484). 
Mar 13 00:50:13.802828 sshd[6884]: Accepted publickey for core from 10.0.0.1 port 49484 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:50:13.805012 sshd-session[6884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:13.816040 systemd-logind[1540]: New session 44 of user core. Mar 13 00:50:13.825090 systemd[1]: Started session-44.scope - Session 44 of User core. Mar 13 00:50:14.031027 sshd[6887]: Connection closed by 10.0.0.1 port 49484 Mar 13 00:50:14.031466 sshd-session[6884]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:14.038583 systemd[1]: sshd@43-10.0.0.89:22-10.0.0.1:49484.service: Deactivated successfully. Mar 13 00:50:14.042130 systemd[1]: session-44.scope: Deactivated successfully. Mar 13 00:50:14.045514 systemd-logind[1540]: Session 44 logged out. Waiting for processes to exit. Mar 13 00:50:14.048921 systemd-logind[1540]: Removed session 44. Mar 13 00:50:19.046600 systemd[1]: Started sshd@44-10.0.0.89:22-10.0.0.1:49488.service - OpenSSH per-connection server daemon (10.0.0.1:49488). Mar 13 00:50:19.153031 sshd[6900]: Accepted publickey for core from 10.0.0.1 port 49488 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:50:19.157264 sshd-session[6900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:19.168589 systemd-logind[1540]: New session 45 of user core. Mar 13 00:50:19.188588 systemd[1]: Started session-45.scope - Session 45 of User core. Mar 13 00:50:19.386138 sshd[6903]: Connection closed by 10.0.0.1 port 49488 Mar 13 00:50:19.386851 sshd-session[6900]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:19.393574 systemd[1]: sshd@44-10.0.0.89:22-10.0.0.1:49488.service: Deactivated successfully. Mar 13 00:50:19.398389 systemd[1]: session-45.scope: Deactivated successfully. Mar 13 00:50:19.403635 systemd-logind[1540]: Session 45 logged out. Waiting for processes to exit. Mar 13 00:50:19.408153 systemd-logind[1540]: Removed session 45. 
Mar 13 00:50:21.657423 containerd[1555]: time="2026-03-13T00:50:21.649613099Z" level=warning msg="container event discarded" container=0d5f747741e3eb765ecab5dbba30cf0d0ab6a7645e9f7eb2e259ef041f5387cd type=CONTAINER_CREATED_EVENT Mar 13 00:50:21.685255 containerd[1555]: time="2026-03-13T00:50:21.684987918Z" level=warning msg="container event discarded" container=0d5f747741e3eb765ecab5dbba30cf0d0ab6a7645e9f7eb2e259ef041f5387cd type=CONTAINER_STARTED_EVENT Mar 13 00:50:21.685255 containerd[1555]: time="2026-03-13T00:50:21.685075292Z" level=warning msg="container event discarded" container=e7968a11eb7ac8b6a0115996ab6ad193f8d05421da4785dd4e1502d7f6080408 type=CONTAINER_CREATED_EVENT Mar 13 00:50:21.685255 containerd[1555]: time="2026-03-13T00:50:21.685091452Z" level=warning msg="container event discarded" container=e7968a11eb7ac8b6a0115996ab6ad193f8d05421da4785dd4e1502d7f6080408 type=CONTAINER_STARTED_EVENT Mar 13 00:50:21.685255 containerd[1555]: time="2026-03-13T00:50:21.685100910Z" level=warning msg="container event discarded" container=72721676f0c224b053ac0b588fbd2a2b066d7f301d131a8f8f6c07d9edfa745f type=CONTAINER_CREATED_EVENT Mar 13 00:50:21.685255 containerd[1555]: time="2026-03-13T00:50:21.685112451Z" level=warning msg="container event discarded" container=72721676f0c224b053ac0b588fbd2a2b066d7f301d131a8f8f6c07d9edfa745f type=CONTAINER_STARTED_EVENT Mar 13 00:50:21.731214 containerd[1555]: time="2026-03-13T00:50:21.731056803Z" level=warning msg="container event discarded" container=52abf87e10d21c982984de846c2a950fbc78213ae08fc9fa65eeec203c5b2cb7 type=CONTAINER_CREATED_EVENT Mar 13 00:50:21.741776 containerd[1555]: time="2026-03-13T00:50:21.741489042Z" level=warning msg="container event discarded" container=66f151dc5507df24b0d2ac5d3a4fa2467b02554559f1146bd950dd7547786943 type=CONTAINER_CREATED_EVENT Mar 13 00:50:21.757048 containerd[1555]: time="2026-03-13T00:50:21.756998306Z" level=warning msg="container event discarded" container=3acfaffcae2328605b94c1d6ef8a2103b71119c354580c4896c8168204a8e6ac type=CONTAINER_CREATED_EVENT Mar 13 00:50:21.967415 containerd[1555]: time="2026-03-13T00:50:21.967143509Z" level=warning msg="container event discarded" container=52abf87e10d21c982984de846c2a950fbc78213ae08fc9fa65eeec203c5b2cb7 type=CONTAINER_STARTED_EVENT Mar 13 00:50:21.967415 containerd[1555]: time="2026-03-13T00:50:21.967240609Z" level=warning msg="container event discarded" container=3acfaffcae2328605b94c1d6ef8a2103b71119c354580c4896c8168204a8e6ac type=CONTAINER_STARTED_EVENT Mar 13 00:50:21.967415 containerd[1555]: time="2026-03-13T00:50:21.967256909Z" level=warning msg="container event discarded" container=66f151dc5507df24b0d2ac5d3a4fa2467b02554559f1146bd950dd7547786943 type=CONTAINER_STARTED_EVENT Mar 13 00:50:24.406533 systemd[1]: Started sshd@45-10.0.0.89:22-10.0.0.1:56464.service - OpenSSH per-connection server daemon (10.0.0.1:56464). Mar 13 00:50:24.483722 sshd[6940]: Accepted publickey for core from 10.0.0.1 port 56464 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:50:24.485500 sshd-session[6940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:24.491868 systemd-logind[1540]: New session 46 of user core. Mar 13 00:50:24.500840 systemd[1]: Started session-46.scope - Session 46 of User core. 
Mar 13 00:50:24.604506 sshd[6943]: Connection closed by 10.0.0.1 port 56464 Mar 13 00:50:24.605075 sshd-session[6940]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:24.609741 systemd[1]: sshd@45-10.0.0.89:22-10.0.0.1:56464.service: Deactivated successfully. Mar 13 00:50:24.612921 systemd[1]: session-46.scope: Deactivated successfully. Mar 13 00:50:24.616157 systemd-logind[1540]: Session 46 logged out. Waiting for processes to exit. Mar 13 00:50:24.618868 systemd-logind[1540]: Removed session 46. Mar 13 00:50:25.270300 update_engine[1542]: I20260313 00:50:25.269515 1542 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 13 00:50:25.270300 update_engine[1542]: I20260313 00:50:25.270232 1542 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 13 00:50:25.273410 update_engine[1542]: I20260313 00:50:25.273293 1542 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 13 00:50:25.275577 update_engine[1542]: I20260313 00:50:25.275489 1542 omaha_request_params.cc:62] Current group set to stable Mar 13 00:50:25.276041 update_engine[1542]: I20260313 00:50:25.275961 1542 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 13 00:50:25.276041 update_engine[1542]: I20260313 00:50:25.276026 1542 update_attempter.cc:643] Scheduling an action processor start. Mar 13 00:50:25.276103 update_engine[1542]: I20260313 00:50:25.276047 1542 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 13 00:50:25.276391 update_engine[1542]: I20260313 00:50:25.276294 1542 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 13 00:50:25.276609 update_engine[1542]: I20260313 00:50:25.276527 1542 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 13 00:50:25.276653 update_engine[1542]: I20260313 00:50:25.276607 1542 omaha_request_action.cc:272] Request: Mar 13 00:50:25.276653 update_engine[1542]: Mar 13 00:50:25.276653 update_engine[1542]: Mar 13 00:50:25.276653 update_engine[1542]: Mar 13 00:50:25.276653 update_engine[1542]: Mar 13 00:50:25.276653 update_engine[1542]: Mar 13 00:50:25.276653 update_engine[1542]: Mar 13 00:50:25.276653 update_engine[1542]: Mar 13 00:50:25.276653 update_engine[1542]: Mar 13 00:50:25.276653 update_engine[1542]: I20260313 00:50:25.276626 1542 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 13 00:50:25.286851 update_engine[1542]: I20260313 00:50:25.285728 1542 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 13 00:50:25.286851 update_engine[1542]: I20260313 00:50:25.286593 1542 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 13 00:50:25.290141 locksmithd[1606]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 13 00:50:25.307685 update_engine[1542]: E20260313 00:50:25.307551 1542 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 13 00:50:25.307917 update_engine[1542]: I20260313 00:50:25.307816 1542 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 13 00:50:26.291560 kubelet[2813]: E0313 00:50:26.291482 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"