Mar 10 02:08:40.383632 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 9 23:01:22 -00 2026
Mar 10 02:08:40.383668 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bcd0808bf4ec60436f0ff2e8373a873eb88ae42d4ac26e6e6d81129499700895
Mar 10 02:08:40.383680 kernel: BIOS-provided physical RAM map:
Mar 10 02:08:40.383691 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 10 02:08:40.383699 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 10 02:08:40.383707 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 10 02:08:40.383716 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 10 02:08:40.383724 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 10 02:08:40.383731 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 10 02:08:40.383739 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 10 02:08:40.383747 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Mar 10 02:08:40.383755 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 10 02:08:40.383769 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 10 02:08:40.383780 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 10 02:08:40.383790 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 10 02:08:40.383798 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 10 02:08:40.383806 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 10 02:08:40.383818 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 10 02:08:40.383826 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 10 02:08:40.383834 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 10 02:08:40.383842 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 10 02:08:40.383850 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 10 02:08:40.383859 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 10 02:08:40.383867 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 10 02:08:40.383878 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 10 02:08:40.383889 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 10 02:08:40.383897 kernel: NX (Execute Disable) protection: active
Mar 10 02:08:40.383905 kernel: APIC: Static calls initialized
Mar 10 02:08:40.383917 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Mar 10 02:08:40.383926 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Mar 10 02:08:40.383933 kernel: extended physical RAM map:
Mar 10 02:08:40.383942 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 10 02:08:40.383950 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 10 02:08:40.383958 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 10 02:08:40.383966 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 10 02:08:40.384013 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 10 02:08:40.384022 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 10 02:08:40.384030 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 10 02:08:40.384038 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Mar 10 02:08:40.384050 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Mar 10 02:08:40.384063 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Mar 10 02:08:40.384071 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Mar 10 02:08:40.384080 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Mar 10 02:08:40.384090 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 10 02:08:40.384105 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 10 02:08:40.384114 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 10 02:08:40.384122 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 10 02:08:40.384131 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 10 02:08:40.384139 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 10 02:08:40.384148 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 10 02:08:40.384157 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 10 02:08:40.384165 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 10 02:08:40.384174 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 10 02:08:40.384182 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 10 02:08:40.384193 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 10 02:08:40.384206 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 10 02:08:40.384215 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 10 02:08:40.384223 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 10 02:08:40.384232 kernel: efi: EFI v2.7 by EDK II
Mar 10 02:08:40.384241 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Mar 10 02:08:40.384249 kernel: random: crng init done
Mar 10 02:08:40.384258 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 10 02:08:40.384267 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 10 02:08:40.384472 kernel: secureboot: Secure boot disabled
Mar 10 02:08:40.384486 kernel: SMBIOS 2.8 present.
Mar 10 02:08:40.384495 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 10 02:08:40.384509 kernel: DMI: Memory slots populated: 1/1
Mar 10 02:08:40.384519 kernel: Hypervisor detected: KVM
Mar 10 02:08:40.384528 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 10 02:08:40.384537 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 10 02:08:40.384547 kernel: kvm-clock: using sched offset of 12365614365 cycles
Mar 10 02:08:40.384557 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 10 02:08:40.384567 kernel: tsc: Detected 2445.426 MHz processor
Mar 10 02:08:40.384577 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 10 02:08:40.384587 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 10 02:08:40.384596 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 10 02:08:40.384606 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 10 02:08:40.384619 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 10 02:08:40.384628 kernel: Using GB pages for direct mapping
Mar 10 02:08:40.384638 kernel: ACPI: Early table checksum verification disabled
Mar 10 02:08:40.384648 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 10 02:08:40.384657 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 10 02:08:40.384667 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 02:08:40.384677 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 02:08:40.384686 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 10 02:08:40.384700 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 02:08:40.384713 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 02:08:40.384722 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 02:08:40.384731 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 02:08:40.384740 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 10 02:08:40.384749 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 10 02:08:40.384758 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 10 02:08:40.384767 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 10 02:08:40.384776 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 10 02:08:40.384788 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 10 02:08:40.384797 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 10 02:08:40.384809 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 10 02:08:40.384819 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 10 02:08:40.384827 kernel: No NUMA configuration found
Mar 10 02:08:40.384836 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Mar 10 02:08:40.384846 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Mar 10 02:08:40.384855 kernel: Zone ranges:
Mar 10 02:08:40.384863 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 10 02:08:40.384875 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Mar 10 02:08:40.384884 kernel: Normal empty
Mar 10 02:08:40.384893 kernel: Device empty
Mar 10 02:08:40.384904 kernel: Movable zone start for each node
Mar 10 02:08:40.384916 kernel: Early memory node ranges
Mar 10 02:08:40.384925 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 10 02:08:40.384934 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 10 02:08:40.384943 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 10 02:08:40.384952 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 10 02:08:40.384960 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Mar 10 02:08:40.385010 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Mar 10 02:08:40.385024 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Mar 10 02:08:40.385033 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Mar 10 02:08:40.385043 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Mar 10 02:08:40.385052 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 10 02:08:40.385070 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 10 02:08:40.385082 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 10 02:08:40.385092 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 10 02:08:40.385101 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 10 02:08:40.385110 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 10 02:08:40.385123 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 10 02:08:40.385133 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 10 02:08:40.385146 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Mar 10 02:08:40.385155 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 10 02:08:40.385164 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 10 02:08:40.385173 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 10 02:08:40.385183 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 10 02:08:40.385195 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 10 02:08:40.385205 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 10 02:08:40.385216 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 10 02:08:40.385229 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 10 02:08:40.385238 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 10 02:08:40.385247 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 10 02:08:40.385257 kernel: TSC deadline timer available
Mar 10 02:08:40.385266 kernel: CPU topo: Max. logical packages: 1
Mar 10 02:08:40.385331 kernel: CPU topo: Max. logical dies: 1
Mar 10 02:08:40.385348 kernel: CPU topo: Max. dies per package: 1
Mar 10 02:08:40.385358 kernel: CPU topo: Max. threads per core: 1
Mar 10 02:08:40.385367 kernel: CPU topo: Num. cores per package: 4
Mar 10 02:08:40.385376 kernel: CPU topo: Num. threads per package: 4
Mar 10 02:08:40.385385 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 10 02:08:40.385395 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 10 02:08:40.385404 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 10 02:08:40.385413 kernel: kvm-guest: setup PV sched yield
Mar 10 02:08:40.385422 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 10 02:08:40.385438 kernel: Booting paravirtualized kernel on KVM
Mar 10 02:08:40.385450 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 10 02:08:40.385460 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 10 02:08:40.385469 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 10 02:08:40.385479 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 10 02:08:40.385488 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 10 02:08:40.385497 kernel: kvm-guest: PV spinlocks enabled
Mar 10 02:08:40.385507 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 10 02:08:40.385517 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bcd0808bf4ec60436f0ff2e8373a873eb88ae42d4ac26e6e6d81129499700895
Mar 10 02:08:40.385531 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 10 02:08:40.385544 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 10 02:08:40.385553 kernel: Fallback order for Node 0: 0
Mar 10 02:08:40.385563 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Mar 10 02:08:40.385572 kernel: Policy zone: DMA32
Mar 10 02:08:40.385581 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 10 02:08:40.385591 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 10 02:08:40.385600 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 10 02:08:40.385612 kernel: ftrace: allocated 157 pages with 5 groups
Mar 10 02:08:40.385622 kernel: Dynamic Preempt: voluntary
Mar 10 02:08:40.385632 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 10 02:08:40.385650 kernel: rcu: RCU event tracing is enabled.
Mar 10 02:08:40.385663 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 10 02:08:40.385673 kernel: Trampoline variant of Tasks RCU enabled.
Mar 10 02:08:40.385682 kernel: Rude variant of Tasks RCU enabled.
Mar 10 02:08:40.385692 kernel: Tracing variant of Tasks RCU enabled.
Mar 10 02:08:40.385702 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 10 02:08:40.385715 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 10 02:08:40.385724 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 02:08:40.385734 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 02:08:40.385744 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 02:08:40.385756 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 10 02:08:40.385768 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 10 02:08:40.385779 kernel: Console: colour dummy device 80x25
Mar 10 02:08:40.385789 kernel: printk: legacy console [ttyS0] enabled
Mar 10 02:08:40.385798 kernel: ACPI: Core revision 20240827
Mar 10 02:08:40.385811 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 10 02:08:40.385820 kernel: APIC: Switch to symmetric I/O mode setup
Mar 10 02:08:40.385830 kernel: x2apic enabled
Mar 10 02:08:40.385839 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 10 02:08:40.385849 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 10 02:08:40.385858 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 10 02:08:40.385870 kernel: kvm-guest: setup PV IPIs
Mar 10 02:08:40.385882 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 10 02:08:40.385892 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 10 02:08:40.385905 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 10 02:08:40.385914 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 10 02:08:40.385924 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 10 02:08:40.385933 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 10 02:08:40.385943 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 10 02:08:40.385952 kernel: Spectre V2 : Mitigation: Retpolines
Mar 10 02:08:40.385962 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 10 02:08:40.386011 kernel: Speculative Store Bypass: Vulnerable
Mar 10 02:08:40.386022 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 10 02:08:40.386036 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 10 02:08:40.386046 kernel: active return thunk: srso_alias_return_thunk
Mar 10 02:08:40.386055 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 10 02:08:40.386065 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 10 02:08:40.386074 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 10 02:08:40.386083 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 10 02:08:40.386094 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 10 02:08:40.386106 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 10 02:08:40.386123 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 10 02:08:40.386133 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 10 02:08:40.386142 kernel: Freeing SMP alternatives memory: 32K
Mar 10 02:08:40.386152 kernel: pid_max: default: 32768 minimum: 301
Mar 10 02:08:40.386161 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 10 02:08:40.386170 kernel: landlock: Up and running.
Mar 10 02:08:40.386180 kernel: SELinux: Initializing.
Mar 10 02:08:40.386189 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 02:08:40.386198 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 02:08:40.386212 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 10 02:08:40.386225 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 10 02:08:40.386236 kernel: signal: max sigframe size: 1776
Mar 10 02:08:40.386246 kernel: rcu: Hierarchical SRCU implementation.
Mar 10 02:08:40.386255 kernel: rcu: Max phase no-delay instances is 400.
Mar 10 02:08:40.386265 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 10 02:08:40.386347 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 10 02:08:40.386358 kernel: smp: Bringing up secondary CPUs ...
Mar 10 02:08:40.386368 kernel: smpboot: x86: Booting SMP configuration:
Mar 10 02:08:40.386381 kernel: .... node #0, CPUs: #1 #2 #3
Mar 10 02:08:40.386390 kernel: smp: Brought up 1 node, 4 CPUs
Mar 10 02:08:40.386400 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 10 02:08:40.386410 kernel: Memory: 2414476K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46204K init, 2556K bss, 145388K reserved, 0K cma-reserved)
Mar 10 02:08:40.386419 kernel: devtmpfs: initialized
Mar 10 02:08:40.386431 kernel: x86/mm: Memory block size: 128MB
Mar 10 02:08:40.386443 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 10 02:08:40.386456 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 10 02:08:40.386466 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 10 02:08:40.386479 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 10 02:08:40.386488 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Mar 10 02:08:40.386498 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 10 02:08:40.386507 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 10 02:08:40.386517 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 10 02:08:40.386526 kernel: pinctrl core: initialized pinctrl subsystem
Mar 10 02:08:40.386535 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 10 02:08:40.386546 kernel: audit: initializing netlink subsys (disabled)
Mar 10 02:08:40.386558 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 10 02:08:40.386572 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 10 02:08:40.386582 kernel: audit: type=2000 audit(1773108512.280:1): state=initialized audit_enabled=0 res=1
Mar 10 02:08:40.386591 kernel: cpuidle: using governor menu
Mar 10 02:08:40.386601 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 10 02:08:40.386610 kernel: dca service started, version 1.12.1
Mar 10 02:08:40.386619 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Mar 10 02:08:40.386629 kernel: PCI: Using configuration type 1 for base access
Mar 10 02:08:40.386638 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 10 02:08:40.386648 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 10 02:08:40.386663 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 10 02:08:40.386674 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 10 02:08:40.386687 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 10 02:08:40.386697 kernel: ACPI: Added _OSI(Module Device)
Mar 10 02:08:40.386706 kernel: ACPI: Added _OSI(Processor Device)
Mar 10 02:08:40.386716 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 10 02:08:40.386725 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 10 02:08:40.386735 kernel: ACPI: Interpreter enabled
Mar 10 02:08:40.386744 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 10 02:08:40.386757 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 10 02:08:40.386766 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 10 02:08:40.386777 kernel: PCI: Using E820 reservations for host bridge windows
Mar 10 02:08:40.386789 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 10 02:08:40.386800 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 10 02:08:40.387075 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 10 02:08:40.387244 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 10 02:08:40.387482 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 10 02:08:40.387499 kernel: PCI host bridge to bus 0000:00
Mar 10 02:08:40.387661 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 10 02:08:40.387808 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 10 02:08:40.387953 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 10 02:08:40.388132 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 10 02:08:40.388351 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 10 02:08:40.388657 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 10 02:08:40.388814 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 10 02:08:40.389042 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 10 02:08:40.389211 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 10 02:08:40.389451 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Mar 10 02:08:40.389611 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Mar 10 02:08:40.389777 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 10 02:08:40.389939 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 10 02:08:40.390157 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 10 02:08:40.390463 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Mar 10 02:08:40.390625 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Mar 10 02:08:40.390786 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 10 02:08:40.390956 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 10 02:08:40.391166 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Mar 10 02:08:40.391400 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Mar 10 02:08:40.391557 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 10 02:08:40.391737 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 10 02:08:40.391894 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Mar 10 02:08:40.392098 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Mar 10 02:08:40.392254 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 10 02:08:40.392639 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Mar 10 02:08:40.392809 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 10 02:08:40.392968 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 10 02:08:40.393180 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 10 02:08:40.393402 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Mar 10 02:08:40.393565 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Mar 10 02:08:40.393730 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 10 02:08:40.393903 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Mar 10 02:08:40.393920 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 10 02:08:40.393930 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 10 02:08:40.393940 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 10 02:08:40.393950 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 10 02:08:40.393959 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 10 02:08:40.393969 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 10 02:08:40.394021 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 10 02:08:40.394036 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 10 02:08:40.394046 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 10 02:08:40.394055 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 10 02:08:40.394064 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 10 02:08:40.394074 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 10 02:08:40.394084 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 10 02:08:40.394093 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 10 02:08:40.394102 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 10 02:08:40.394114 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 10 02:08:40.394129 kernel: iommu: Default domain type: Translated
Mar 10 02:08:40.394139 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 10 02:08:40.394148 kernel: efivars: Registered efivars operations
Mar 10 02:08:40.394157 kernel: PCI: Using ACPI for IRQ routing
Mar 10 02:08:40.394167 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 10 02:08:40.394176 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 10 02:08:40.394185 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Mar 10 02:08:40.394194 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Mar 10 02:08:40.394204 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Mar 10 02:08:40.394219 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Mar 10 02:08:40.394230 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Mar 10 02:08:40.394239 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Mar 10 02:08:40.394250 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Mar 10 02:08:40.394463 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 10 02:08:40.394620 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 10 02:08:40.394927 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 10 02:08:40.394944 kernel: vgaarb: loaded
Mar 10 02:08:40.394958 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 10 02:08:40.394970 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 10 02:08:40.395023 kernel: clocksource: Switched to clocksource kvm-clock
Mar 10 02:08:40.395033 kernel: VFS: Disk quotas dquot_6.6.0
Mar 10 02:08:40.395042 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 10 02:08:40.395051 kernel: pnp: PnP ACPI init
Mar 10 02:08:40.395232 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 10 02:08:40.395250 kernel: pnp: PnP ACPI: found 6 devices
Mar 10 02:08:40.395265 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 10 02:08:40.395334 kernel: NET: Registered PF_INET protocol family
Mar 10 02:08:40.395349 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 10 02:08:40.395359 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 10 02:08:40.395368 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 10 02:08:40.395378 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 10 02:08:40.395408 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 10 02:08:40.395421 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 10 02:08:40.395431 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 02:08:40.395447 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 02:08:40.395461 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 10 02:08:40.395470 kernel: NET: Registered PF_XDP protocol family
Mar 10 02:08:40.395634 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Mar 10 02:08:40.395795 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Mar 10 02:08:40.395949 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 10 02:08:40.396139 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 10 02:08:40.396344 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 10 02:08:40.396499 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 10 02:08:40.396648 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 10 02:08:40.396789 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 10 02:08:40.396804 kernel: PCI: CLS 0 bytes, default 64
Mar 10 02:08:40.396814 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 10 02:08:40.396828 kernel: Initialise system trusted keyrings
Mar 10 02:08:40.396840 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 10 02:08:40.396854 kernel: Key type asymmetric registered
Mar 10 02:08:40.396863 kernel: Asymmetric key parser 'x509' registered
Mar 10 02:08:40.396876 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 10 02:08:40.396886 kernel: io scheduler mq-deadline registered
Mar 10 02:08:40.396896 kernel: io scheduler kyber registered
Mar 10 02:08:40.396905 kernel: io scheduler bfq registered
Mar 10 02:08:40.396915 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 10 02:08:40.396927 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 10 02:08:40.396940 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 10 02:08:40.396951 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 10 02:08:40.396964 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 10 02:08:40.397015 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 10 02:08:40.397026 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 10 02:08:40.397037 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 10 02:08:40.397051 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 10 02:08:40.397250 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 10 02:08:40.397272 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 10 02:08:40.397504 kernel: rtc_cmos 00:04: registered as rtc0
Mar 10 02:08:40.397658 kernel: rtc_cmos 00:04: setting system clock to 2026-03-10T02:08:39 UTC (1773108519)
Mar 10 02:08:40.397818 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 10 02:08:40.397833 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 10 02:08:40.397844 kernel: efifb: probing for efifb
Mar 10 02:08:40.397854 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 10 02:08:40.397864 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 10 02:08:40.397879 kernel: efifb: scrolling: redraw
Mar 10 02:08:40.397892 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 10 02:08:40.397905 kernel: Console: switching to colour frame buffer device 160x50
Mar 10 02:08:40.397915 kernel: fb0: EFI VGA frame buffer device
Mar 10 02:08:40.397925 kernel: pstore: Using crash dump compression: deflate
Mar 10 02:08:40.397935 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 10 02:08:40.397944 kernel: NET: Registered PF_INET6 protocol family
Mar 10 02:08:40.397954 kernel: Segment Routing with IPv6
Mar 10 02:08:40.397964 kernel: In-situ OAM (IOAM) with IPv6
Mar 10 02:08:40.398025 kernel: NET: Registered PF_PACKET protocol family
Mar 10 02:08:40.398035 kernel: Key type dns_resolver registered
Mar 10 02:08:40.398045 kernel: IPI shorthand broadcast: enabled
Mar 10 02:08:40.398055 kernel: sched_clock: Marking stable (4679035334, 3568482003)->(9540517308,
-1292999971) Mar 10 02:08:40.398064 kernel: registered taskstats version 1 Mar 10 02:08:40.398074 kernel: Loading compiled-in X.509 certificates Mar 10 02:08:40.398084 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 64a6e3ad023f02465a8c66e81554b4b2e64fb972' Mar 10 02:08:40.398093 kernel: Demotion targets for Node 0: null Mar 10 02:08:40.398105 kernel: Key type .fscrypt registered Mar 10 02:08:40.398120 kernel: Key type fscrypt-provisioning registered Mar 10 02:08:40.398130 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 10 02:08:40.398140 kernel: ima: Allocated hash algorithm: sha1 Mar 10 02:08:40.398150 kernel: ima: No architecture policies found Mar 10 02:08:40.398159 kernel: clk: Disabling unused clocks Mar 10 02:08:40.398169 kernel: Warning: unable to open an initial console. Mar 10 02:08:40.398180 kernel: Freeing unused kernel image (initmem) memory: 46204K Mar 10 02:08:40.398190 kernel: Write protecting the kernel read-only data: 40960k Mar 10 02:08:40.398199 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Mar 10 02:08:40.398217 kernel: Run /init as init process Mar 10 02:08:40.398227 kernel: with arguments: Mar 10 02:08:40.398236 kernel: /init Mar 10 02:08:40.398246 kernel: with environment: Mar 10 02:08:40.398255 kernel: HOME=/ Mar 10 02:08:40.398265 kernel: TERM=linux Mar 10 02:08:40.398332 systemd[1]: Successfully made /usr/ read-only. Mar 10 02:08:40.398349 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 10 02:08:40.398364 systemd[1]: Detected virtualization kvm. Mar 10 02:08:40.398375 systemd[1]: Detected architecture x86-64. Mar 10 02:08:40.398385 systemd[1]: Running in initrd. 
Mar 10 02:08:40.398395 systemd[1]: No hostname configured, using default hostname. Mar 10 02:08:40.398406 systemd[1]: Hostname set to . Mar 10 02:08:40.398418 systemd[1]: Initializing machine ID from VM UUID. Mar 10 02:08:40.398431 systemd[1]: Queued start job for default target initrd.target. Mar 10 02:08:40.398442 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 10 02:08:40.398456 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 10 02:08:40.398467 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 10 02:08:40.398478 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 10 02:08:40.398488 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 10 02:08:40.398500 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 10 02:08:40.398512 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 10 02:08:40.398524 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 10 02:08:40.398539 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 10 02:08:40.398550 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 10 02:08:40.398562 systemd[1]: Reached target paths.target - Path Units. Mar 10 02:08:40.398573 systemd[1]: Reached target slices.target - Slice Units. Mar 10 02:08:40.398584 systemd[1]: Reached target swap.target - Swaps. Mar 10 02:08:40.398595 systemd[1]: Reached target timers.target - Timer Units. Mar 10 02:08:40.398608 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Mar 10 02:08:40.398767 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 10 02:08:40.398782 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 10 02:08:40.398793 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 10 02:08:40.398804 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 10 02:08:40.398814 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 10 02:08:40.398826 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 10 02:08:40.398840 systemd[1]: Reached target sockets.target - Socket Units. Mar 10 02:08:40.398856 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 10 02:08:40.398867 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 10 02:08:40.398877 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 10 02:08:40.398894 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 10 02:08:40.398905 systemd[1]: Starting systemd-fsck-usr.service... Mar 10 02:08:40.398915 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 10 02:08:40.398925 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 10 02:08:40.398936 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 02:08:40.399018 systemd-journald[203]: Collecting audit messages is disabled. Mar 10 02:08:40.399049 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 10 02:08:40.399065 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Mar 10 02:08:40.399083 systemd-journald[203]: Journal started Mar 10 02:08:40.399105 systemd-journald[203]: Runtime Journal (/run/log/journal/aab7e496b3c64b909e52602b9d56b825) is 6M, max 48.1M, 42.1M free. Mar 10 02:08:40.408334 systemd[1]: Started systemd-journald.service - Journal Service. Mar 10 02:08:40.407932 systemd[1]: Finished systemd-fsck-usr.service. Mar 10 02:08:40.423570 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 10 02:08:40.437731 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 10 02:08:40.446765 systemd-modules-load[204]: Inserted module 'overlay' Mar 10 02:08:40.459600 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 10 02:08:40.471646 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 02:08:40.484122 systemd-tmpfiles[213]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 10 02:08:40.487387 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 10 02:08:40.503498 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 10 02:08:40.509356 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 10 02:08:40.546434 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 10 02:08:40.553952 kernel: Bridge firewalling registered Mar 10 02:08:40.554188 systemd-modules-load[204]: Inserted module 'br_netfilter' Mar 10 02:08:40.558738 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 10 02:08:40.569538 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Mar 10 02:08:40.583537 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 10 02:08:40.604875 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 10 02:08:40.615155 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 10 02:08:40.626425 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 10 02:08:40.634176 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 10 02:08:40.685102 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bcd0808bf4ec60436f0ff2e8373a873eb88ae42d4ac26e6e6d81129499700895 Mar 10 02:08:40.710585 systemd-resolved[244]: Positive Trust Anchors: Mar 10 02:08:40.710624 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 10 02:08:40.710662 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 10 02:08:40.714132 systemd-resolved[244]: Defaulting to hostname 'linux'. Mar 10 02:08:40.715723 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Mar 10 02:08:40.750026 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 10 02:08:40.912699 kernel: SCSI subsystem initialized Mar 10 02:08:40.928496 kernel: Loading iSCSI transport class v2.0-870. Mar 10 02:08:40.956063 kernel: iscsi: registered transport (tcp) Mar 10 02:08:40.993941 kernel: iscsi: registered transport (qla4xxx) Mar 10 02:08:40.994052 kernel: QLogic iSCSI HBA Driver Mar 10 02:08:41.051373 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 10 02:08:41.097620 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 10 02:08:41.102347 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 10 02:08:41.227012 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 10 02:08:41.238272 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 10 02:08:41.334606 kernel: raid6: avx2x4 gen() 25476 MB/s Mar 10 02:08:41.354358 kernel: raid6: avx2x2 gen() 28661 MB/s Mar 10 02:08:41.374853 kernel: raid6: avx2x1 gen() 19417 MB/s Mar 10 02:08:41.374945 kernel: raid6: using algorithm avx2x2 gen() 28661 MB/s Mar 10 02:08:41.396010 kernel: raid6: .... xor() 17536 MB/s, rmw enabled Mar 10 02:08:41.396049 kernel: raid6: using avx2x2 recovery algorithm Mar 10 02:08:41.425381 kernel: xor: automatically using best checksumming function avx Mar 10 02:08:41.696936 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 10 02:08:41.712132 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 10 02:08:41.717717 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 02:08:41.775561 systemd-udevd[454]: Using default interface naming scheme 'v255'. Mar 10 02:08:41.784927 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Mar 10 02:08:41.800653 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 10 02:08:41.850759 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Mar 10 02:08:41.912374 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 10 02:08:41.927136 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 10 02:08:42.065253 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 10 02:08:42.102380 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 10 02:08:42.186348 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 10 02:08:42.205388 kernel: cryptd: max_cpu_qlen set to 1000 Mar 10 02:08:42.205441 kernel: libata version 3.00 loaded. Mar 10 02:08:42.230593 kernel: ahci 0000:00:1f.2: version 3.0 Mar 10 02:08:42.235609 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 10 02:08:42.244938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 10 02:08:42.273575 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 10 02:08:42.273847 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 10 02:08:42.274108 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 10 02:08:42.274893 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Mar 10 02:08:42.274921 kernel: scsi host0: ahci Mar 10 02:08:42.275251 kernel: scsi host1: ahci Mar 10 02:08:42.275344 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 10 02:08:42.245372 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 02:08:42.320027 kernel: scsi host2: ahci Mar 10 02:08:42.320367 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Mar 10 02:08:42.320389 kernel: scsi host3: ahci Mar 10 02:08:42.320603 kernel: GPT:9289727 != 19775487 Mar 10 02:08:42.320620 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 10 02:08:42.320636 kernel: GPT:9289727 != 19775487 Mar 10 02:08:42.320650 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 10 02:08:42.320664 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 10 02:08:42.320680 kernel: scsi host4: ahci Mar 10 02:08:42.320881 kernel: scsi host5: ahci Mar 10 02:08:42.287457 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 02:08:42.405743 kernel: AES CTR mode by8 optimization enabled Mar 10 02:08:42.405771 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Mar 10 02:08:42.405787 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Mar 10 02:08:42.405802 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Mar 10 02:08:42.405824 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Mar 10 02:08:42.405838 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Mar 10 02:08:42.405855 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Mar 10 02:08:42.300182 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 02:08:42.406858 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 10 02:08:42.459862 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 10 02:08:42.475911 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 10 02:08:42.487211 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Mar 10 02:08:42.491713 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 10 02:08:42.524661 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 10 02:08:42.537381 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 10 02:08:42.539354 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 10 02:08:42.539432 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 02:08:42.561193 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 02:08:42.580678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 02:08:42.584233 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 10 02:08:42.601945 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 10 02:08:42.601974 disk-uuid[620]: Primary Header is updated. Mar 10 02:08:42.601974 disk-uuid[620]: Secondary Entries is updated. Mar 10 02:08:42.601974 disk-uuid[620]: Secondary Header is updated. Mar 10 02:08:42.653779 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 10 02:08:42.701465 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 10 02:08:42.701500 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 10 02:08:42.701516 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 10 02:08:42.701531 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 10 02:08:42.701545 kernel: ata3.00: LPM support broken, forcing max_power Mar 10 02:08:42.701559 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 10 02:08:42.701573 kernel: ata3.00: applying bridge limits Mar 10 02:08:42.701588 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 10 02:08:42.701602 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 10 02:08:42.711614 kernel: ata3.00: LPM support broken, forcing max_power Mar 10 02:08:42.711830 kernel: ata3.00: configured for UDMA/100 Mar 10 02:08:42.719338 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 10 02:08:42.792690 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 10 02:08:42.793590 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 10 02:08:42.822427 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 10 02:08:43.325615 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 10 02:08:43.338381 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 10 02:08:43.352109 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 10 02:08:43.359051 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 10 02:08:43.378505 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 10 02:08:43.433430 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 10 02:08:43.634349 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 10 02:08:43.637054 disk-uuid[622]: The operation has completed successfully. Mar 10 02:08:43.710056 systemd[1]: disk-uuid.service: Deactivated successfully. 
Mar 10 02:08:43.710340 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 10 02:08:43.765152 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 10 02:08:43.803258 sh[663]: Success Mar 10 02:08:43.851255 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 10 02:08:43.851410 kernel: device-mapper: uevent: version 1.0.3 Mar 10 02:08:43.857336 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 10 02:08:43.890359 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 10 02:08:43.954620 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 10 02:08:43.958393 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 10 02:08:43.985384 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 10 02:08:44.007611 kernel: BTRFS: device fsid 91a17919-8e0b-4e39-b5e3-1547b6175986 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (675) Mar 10 02:08:44.014346 kernel: BTRFS info (device dm-0): first mount of filesystem 91a17919-8e0b-4e39-b5e3-1547b6175986 Mar 10 02:08:44.014393 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 10 02:08:44.059494 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 10 02:08:44.059575 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 10 02:08:44.064703 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 10 02:08:44.069551 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 10 02:08:44.076377 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 10 02:08:44.077762 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Mar 10 02:08:44.085756 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 10 02:08:44.171587 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710) Mar 10 02:08:44.184087 kernel: BTRFS info (device vda6): first mount of filesystem ee81d5fa-b10d-48ad-a53f-95a2476266f6 Mar 10 02:08:44.184168 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 10 02:08:44.210049 kernel: BTRFS info (device vda6): turning on async discard Mar 10 02:08:44.210126 kernel: BTRFS info (device vda6): enabling free space tree Mar 10 02:08:44.228480 kernel: BTRFS info (device vda6): last unmount of filesystem ee81d5fa-b10d-48ad-a53f-95a2476266f6 Mar 10 02:08:44.241949 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 10 02:08:44.249121 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 10 02:08:44.425061 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 10 02:08:44.432478 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 10 02:08:44.442398 ignition[776]: Ignition 2.22.0 Mar 10 02:08:44.442409 ignition[776]: Stage: fetch-offline Mar 10 02:08:44.442462 ignition[776]: no configs at "/usr/lib/ignition/base.d" Mar 10 02:08:44.442482 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 02:08:44.442671 ignition[776]: parsed url from cmdline: "" Mar 10 02:08:44.442678 ignition[776]: no config URL provided Mar 10 02:08:44.442687 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Mar 10 02:08:44.442700 ignition[776]: no config at "/usr/lib/ignition/user.ign" Mar 10 02:08:44.442751 ignition[776]: op(1): [started] loading QEMU firmware config module Mar 10 02:08:44.442816 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 10 02:08:44.476269 ignition[776]: op(1): [finished] loading QEMU firmware config module Mar 10 02:08:44.562631 systemd-networkd[850]: lo: Link UP Mar 10 02:08:44.562675 systemd-networkd[850]: lo: Gained carrier Mar 10 02:08:44.570074 systemd-networkd[850]: Enumeration completed Mar 10 02:08:44.570402 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 10 02:08:44.575646 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 02:08:44.575654 systemd-networkd[850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 10 02:08:44.578581 systemd-networkd[850]: eth0: Link UP Mar 10 02:08:44.579072 systemd-networkd[850]: eth0: Gained carrier Mar 10 02:08:44.579085 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 02:08:44.586702 systemd[1]: Reached target network.target - Network. 
Mar 10 02:08:44.664505 systemd-networkd[850]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 10 02:08:44.888774 ignition[776]: parsing config with SHA512: baf17ef839707cb96bfab075f21c353105b74926f695c7b9233b067b3bba02d674a1d42da0703636a249690e39476af7b957eb4df3cbe502a7b62fde0bf4c60e Mar 10 02:08:44.900897 unknown[776]: fetched base config from "system" Mar 10 02:08:44.900913 unknown[776]: fetched user config from "qemu" Mar 10 02:08:44.910505 ignition[776]: fetch-offline: fetch-offline passed Mar 10 02:08:44.912616 systemd-resolved[244]: Detected conflict on linux IN A 10.0.0.112 Mar 10 02:08:44.910599 ignition[776]: Ignition finished successfully Mar 10 02:08:44.912626 systemd-resolved[244]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Mar 10 02:08:44.922644 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 10 02:08:44.949466 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 10 02:08:44.953470 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 10 02:08:45.037678 ignition[857]: Ignition 2.22.0 Mar 10 02:08:45.038044 ignition[857]: Stage: kargs Mar 10 02:08:45.039782 ignition[857]: no configs at "/usr/lib/ignition/base.d" Mar 10 02:08:45.039796 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 02:08:45.043908 ignition[857]: kargs: kargs passed Mar 10 02:08:45.043972 ignition[857]: Ignition finished successfully Mar 10 02:08:45.077480 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 10 02:08:45.098847 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 10 02:08:45.150362 ignition[864]: Ignition 2.22.0
Mar 10 02:08:45.150403 ignition[864]: Stage: disks
Mar 10 02:08:45.150656 ignition[864]: no configs at "/usr/lib/ignition/base.d"
Mar 10 02:08:45.150676 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 02:08:45.151890 ignition[864]: disks: disks passed
Mar 10 02:08:45.165911 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 10 02:08:45.151958 ignition[864]: Ignition finished successfully
Mar 10 02:08:45.173968 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 10 02:08:45.191186 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 10 02:08:45.197608 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 10 02:08:45.198182 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 10 02:08:45.199933 systemd[1]: Reached target basic.target - Basic System.
Mar 10 02:08:45.212260 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 10 02:08:45.280500 systemd-fsck[874]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 10 02:08:45.297528 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 10 02:08:45.310365 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 10 02:08:45.735387 kernel: EXT4-fs (vda9): mounted filesystem 494bf987-03e9-4980-9fc3-4af435e63ebe r/w with ordered data mode. Quota mode: none.
Mar 10 02:08:45.737397 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 10 02:08:45.746729 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 10 02:08:45.760330 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 02:08:45.797546 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 10 02:08:45.809794 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 10 02:08:45.809899 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 10 02:08:45.809939 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 02:08:45.817036 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 10 02:08:45.840344 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 10 02:08:45.849834 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (882)
Mar 10 02:08:45.867842 kernel: BTRFS info (device vda6): first mount of filesystem ee81d5fa-b10d-48ad-a53f-95a2476266f6
Mar 10 02:08:45.867893 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 02:08:45.881503 kernel: BTRFS info (device vda6): turning on async discard
Mar 10 02:08:45.881547 kernel: BTRFS info (device vda6): enabling free space tree
Mar 10 02:08:45.883392 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 02:08:45.948553 initrd-setup-root[906]: cut: /sysroot/etc/passwd: No such file or directory
Mar 10 02:08:45.960649 initrd-setup-root[913]: cut: /sysroot/etc/group: No such file or directory
Mar 10 02:08:45.965464 systemd-networkd[850]: eth0: Gained IPv6LL
Mar 10 02:08:45.971960 initrd-setup-root[920]: cut: /sysroot/etc/shadow: No such file or directory
Mar 10 02:08:45.982487 initrd-setup-root[927]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 10 02:08:46.197517 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 10 02:08:46.205946 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 10 02:08:46.212405 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 10 02:08:46.233853 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 10 02:08:46.241851 kernel: BTRFS info (device vda6): last unmount of filesystem ee81d5fa-b10d-48ad-a53f-95a2476266f6
Mar 10 02:08:46.269265 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 10 02:08:46.291518 ignition[996]: INFO : Ignition 2.22.0
Mar 10 02:08:46.291518 ignition[996]: INFO : Stage: mount
Mar 10 02:08:46.297568 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 02:08:46.297568 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 02:08:46.297568 ignition[996]: INFO : mount: mount passed
Mar 10 02:08:46.297568 ignition[996]: INFO : Ignition finished successfully
Mar 10 02:08:46.314705 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 10 02:08:46.318854 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 10 02:08:46.740342 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 02:08:46.766414 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1009)
Mar 10 02:08:46.766454 kernel: BTRFS info (device vda6): first mount of filesystem ee81d5fa-b10d-48ad-a53f-95a2476266f6
Mar 10 02:08:46.772968 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 02:08:46.781322 kernel: BTRFS info (device vda6): turning on async discard
Mar 10 02:08:46.781354 kernel: BTRFS info (device vda6): enabling free space tree
Mar 10 02:08:46.783469 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 02:08:46.830828 ignition[1026]: INFO : Ignition 2.22.0
Mar 10 02:08:46.830828 ignition[1026]: INFO : Stage: files
Mar 10 02:08:46.836761 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 02:08:46.836761 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 02:08:46.836761 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping
Mar 10 02:08:46.836761 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 10 02:08:46.836761 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 10 02:08:46.863257 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 10 02:08:46.868780 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 10 02:08:46.868780 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 10 02:08:46.865914 unknown[1026]: wrote ssh authorized keys file for user: core
Mar 10 02:08:46.883834 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 10 02:08:46.883834 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 10 02:08:46.952548 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 10 02:08:47.063752 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 10 02:08:47.063752 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 10 02:08:47.079508 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 10 02:08:47.079508 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 10 02:08:47.079508 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 10 02:08:47.079508 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 02:08:47.079508 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 02:08:47.079508 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 02:08:47.079508 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 02:08:47.079508 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 02:08:47.079508 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 02:08:47.079508 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 10 02:08:47.146911 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 10 02:08:47.146911 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 10 02:08:47.146911 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 10 02:08:47.446082 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 10 02:08:48.107575 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 10 02:08:48.107575 ignition[1026]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 10 02:08:48.132686 ignition[1026]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 02:08:48.132686 ignition[1026]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 02:08:48.132686 ignition[1026]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 10 02:08:48.132686 ignition[1026]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 10 02:08:48.132686 ignition[1026]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 10 02:08:48.132686 ignition[1026]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 10 02:08:48.132686 ignition[1026]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 10 02:08:48.132686 ignition[1026]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 10 02:08:48.243531 ignition[1026]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 10 02:08:48.259640 ignition[1026]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 10 02:08:48.259640 ignition[1026]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 10 02:08:48.286559 ignition[1026]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 10 02:08:48.286559 ignition[1026]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 10 02:08:48.286559 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 10 02:08:48.286559 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 10 02:08:48.286559 ignition[1026]: INFO : files: files passed
Mar 10 02:08:48.286559 ignition[1026]: INFO : Ignition finished successfully
Mar 10 02:08:48.287420 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 10 02:08:48.311949 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 10 02:08:48.329156 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 10 02:08:48.370566 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 10 02:08:48.370739 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 10 02:08:48.380254 initrd-setup-root-after-ignition[1054]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 10 02:08:48.392988 initrd-setup-root-after-ignition[1056]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 02:08:48.392988 initrd-setup-root-after-ignition[1056]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 02:08:48.387816 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 02:08:48.425206 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 02:08:48.396220 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
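The files-stage operations logged above (adding ssh keys to "core", fetching the Helm tarball, writing the kubernetes sysext link, and flipping unit presets) are the kind of output a Butane/Ignition config along these lines would produce. This is an illustrative sketch only, not the actual config used for this boot; the ssh key is a placeholder and file contents are elided:

```yaml
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... # placeholder key
storage:
  files:
    - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true
    - name: coreos-metadata.service
      enabled: false
```

Each `storage.files` entry maps to one `createFiles: op(…)` pair in the log, and each `systemd.units` entry to a `processing unit`/`setting preset` pair.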
Mar 10 02:08:48.414966 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 10 02:08:48.531505 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 10 02:08:48.531646 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 10 02:08:48.539720 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 10 02:08:48.555114 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 10 02:08:48.558858 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 10 02:08:48.589511 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 10 02:08:48.664856 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 02:08:48.677228 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 10 02:08:48.718815 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 10 02:08:48.728397 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 02:08:48.755416 systemd[1]: Stopped target timers.target - Timer Units.
Mar 10 02:08:48.762560 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 10 02:08:48.762807 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 02:08:48.775053 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 10 02:08:48.781646 systemd[1]: Stopped target basic.target - Basic System.
Mar 10 02:08:48.787343 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 10 02:08:48.798811 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 02:08:48.808773 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 10 02:08:48.809919 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 10 02:08:48.825757 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 10 02:08:48.831242 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 10 02:08:48.833225 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 10 02:08:48.841155 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 10 02:08:48.843940 systemd[1]: Stopped target swap.target - Swaps.
Mar 10 02:08:48.879240 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 10 02:08:48.879489 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 10 02:08:48.890331 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 10 02:08:48.897796 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 02:08:48.906557 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 10 02:08:48.906927 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 02:08:48.914560 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 10 02:08:48.914777 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 10 02:08:48.941521 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 10 02:08:48.941770 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 10 02:08:48.943886 systemd[1]: Stopped target paths.target - Path Units.
Mar 10 02:08:48.961966 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 10 02:08:48.965268 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 02:08:48.968476 systemd[1]: Stopped target slices.target - Slice Units.
Mar 10 02:08:48.978785 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 10 02:08:48.986422 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 10 02:08:48.986649 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 02:08:48.989360 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 10 02:08:48.989477 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 02:08:49.013460 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 10 02:08:49.014800 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 02:08:49.024062 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 10 02:08:49.024206 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 10 02:08:49.042352 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 10 02:08:49.047165 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 10 02:08:49.048732 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 02:08:49.073717 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 10 02:08:49.086860 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 10 02:08:49.087136 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 02:08:49.105437 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 10 02:08:49.110073 ignition[1081]: INFO : Ignition 2.22.0
Mar 10 02:08:49.110073 ignition[1081]: INFO : Stage: umount
Mar 10 02:08:49.110073 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 02:08:49.110073 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 02:08:49.110073 ignition[1081]: INFO : umount: umount passed
Mar 10 02:08:49.110073 ignition[1081]: INFO : Ignition finished successfully
Mar 10 02:08:49.105631 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 10 02:08:49.124092 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 10 02:08:49.124260 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 10 02:08:49.137768 systemd[1]: Stopped target network.target - Network.
Mar 10 02:08:49.143221 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 10 02:08:49.143438 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 10 02:08:49.153543 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 10 02:08:49.153609 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 10 02:08:49.163158 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 10 02:08:49.163261 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 10 02:08:49.170156 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 10 02:08:49.170227 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 10 02:08:49.176750 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 10 02:08:49.194516 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 10 02:08:49.199802 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 10 02:08:49.201055 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 10 02:08:49.201187 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 10 02:08:49.214591 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 10 02:08:49.214824 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 10 02:08:49.229428 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 10 02:08:49.230525 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 10 02:08:49.230633 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 02:08:49.239555 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 10 02:08:49.241126 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 10 02:08:49.241351 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 10 02:08:49.257874 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 10 02:08:49.258689 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 10 02:08:49.268479 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 10 02:08:49.268558 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 02:08:49.279412 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 10 02:08:49.296453 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 10 02:08:49.296517 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 10 02:08:49.303533 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 10 02:08:49.303629 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 10 02:08:49.320141 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 10 02:08:49.320232 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 10 02:08:49.330186 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 02:08:49.337539 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 10 02:08:49.338151 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 10 02:08:49.338336 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 10 02:08:49.356040 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 10 02:08:49.356176 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 10 02:08:49.436716 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 10 02:08:49.436930 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 10 02:08:49.464535 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 10 02:08:49.465728 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 02:08:49.479158 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 10 02:08:49.479238 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 10 02:08:49.484237 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 10 02:08:49.484457 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 02:08:49.486120 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 10 02:08:49.486214 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 10 02:08:49.504607 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 10 02:08:49.504721 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 10 02:08:49.524468 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 10 02:08:49.524591 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 02:08:49.550367 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 10 02:08:49.561172 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 10 02:08:49.561361 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 10 02:08:49.575869 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 10 02:08:49.575974 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 02:08:49.590725 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 02:08:49.590843 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 02:08:49.628710 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 10 02:08:49.628911 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 10 02:08:49.634942 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 10 02:08:49.638783 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 10 02:08:49.693372 systemd[1]: Switching root.
Mar 10 02:08:49.740925 systemd-journald[203]: Journal stopped
Mar 10 02:08:52.224883 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Mar 10 02:08:52.224954 kernel: SELinux: policy capability network_peer_controls=1
Mar 10 02:08:52.224967 kernel: SELinux: policy capability open_perms=1
Mar 10 02:08:52.224978 kernel: SELinux: policy capability extended_socket_class=1
Mar 10 02:08:52.224988 kernel: SELinux: policy capability always_check_network=0
Mar 10 02:08:52.225037 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 10 02:08:52.225054 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 10 02:08:52.225065 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 10 02:08:52.225075 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 10 02:08:52.225086 kernel: SELinux: policy capability userspace_initial_context=0
Mar 10 02:08:52.225096 kernel: audit: type=1403 audit(1773108530.033:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 10 02:08:52.225108 systemd[1]: Successfully loaded SELinux policy in 112.084ms.
Mar 10 02:08:52.225131 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.159ms.
Mar 10 02:08:52.225145 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 10 02:08:52.225159 systemd[1]: Detected virtualization kvm.
Mar 10 02:08:52.225170 systemd[1]: Detected architecture x86-64.
Mar 10 02:08:52.225180 systemd[1]: Detected first boot.
Mar 10 02:08:52.225191 systemd[1]: Initializing machine ID from VM UUID.
Mar 10 02:08:52.225202 zram_generator::config[1126]: No configuration found.
Mar 10 02:08:52.225218 kernel: Guest personality initialized and is inactive
Mar 10 02:08:52.225229 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 10 02:08:52.225240 kernel: Initialized host personality
Mar 10 02:08:52.225252 kernel: NET: Registered PF_VSOCK protocol family
Mar 10 02:08:52.225262 systemd[1]: Populated /etc with preset unit settings.
Mar 10 02:08:52.225320 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 10 02:08:52.225334 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 10 02:08:52.225346 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 10 02:08:52.225357 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 10 02:08:52.225368 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 10 02:08:52.225379 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 10 02:08:52.225390 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 10 02:08:52.225403 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 10 02:08:52.225415 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 10 02:08:52.225426 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 10 02:08:52.225438 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 10 02:08:52.225449 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 10 02:08:52.225460 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 02:08:52.225471 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 02:08:52.225482 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 10 02:08:52.225493 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 10 02:08:52.225507 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 10 02:08:52.225517 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 02:08:52.225528 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 10 02:08:52.225539 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 02:08:52.225550 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 02:08:52.225561 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 10 02:08:52.225572 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 10 02:08:52.225585 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 10 02:08:52.225595 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 10 02:08:52.225606 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 02:08:52.225617 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 10 02:08:52.225634 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 02:08:52.225645 systemd[1]: Reached target swap.target - Swaps.
Mar 10 02:08:52.225656 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 10 02:08:52.225667 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 10 02:08:52.225679 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 10 02:08:52.225696 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 02:08:52.225707 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 02:08:52.225719 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 02:08:52.225730 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 10 02:08:52.225740 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 10 02:08:52.225751 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 10 02:08:52.225762 systemd[1]: Mounting media.mount - External Media Directory...
Mar 10 02:08:52.225773 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 02:08:52.225784 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 10 02:08:52.225797 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 10 02:08:52.225808 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 10 02:08:52.225819 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 10 02:08:52.225830 systemd[1]: Reached target machines.target - Containers.
Mar 10 02:08:52.225841 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 10 02:08:52.225852 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 02:08:52.225863 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 02:08:52.225874 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 10 02:08:52.225885 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 02:08:52.225898 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 10 02:08:52.225909 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 02:08:52.225920 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 10 02:08:52.225931 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 02:08:52.225942 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 10 02:08:52.225953 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 10 02:08:52.225964 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 10 02:08:52.225975 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 10 02:08:52.225987 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 10 02:08:52.225999 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 10 02:08:52.226055 kernel: fuse: init (API version 7.41)
Mar 10 02:08:52.226074 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 02:08:52.226090 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 02:08:52.226105 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 10 02:08:52.226120 kernel: ACPI: bus type drm_connector registered
Mar 10 02:08:52.226134 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 10 02:08:52.226180 systemd-journald[1211]: Collecting audit messages is disabled.
Mar 10 02:08:52.226216 systemd-journald[1211]: Journal started
Mar 10 02:08:52.226250 systemd-journald[1211]: Runtime Journal (/run/log/journal/aab7e496b3c64b909e52602b9d56b825) is 6M, max 48.1M, 42.1M free.
Mar 10 02:08:51.333855 systemd[1]: Queued start job for default target multi-user.target.
Mar 10 02:08:51.352269 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 10 02:08:51.353195 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 10 02:08:51.353835 systemd[1]: systemd-journald.service: Consumed 1.396s CPU time.
Mar 10 02:08:52.233356 kernel: loop: module loaded
Mar 10 02:08:52.248829 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 10 02:08:52.267802 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 10 02:08:52.276515 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 10 02:08:52.276566 systemd[1]: Stopped verity-setup.service.
Mar 10 02:08:52.298405 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 02:08:52.306113 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 02:08:52.311617 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 10 02:08:52.318401 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 10 02:08:52.324714 systemd[1]: Mounted media.mount - External Media Directory.
Mar 10 02:08:52.330809 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 10 02:08:52.336052 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 10 02:08:52.342430 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 10 02:08:52.348986 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 10 02:08:52.356775 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 02:08:52.363647 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 10 02:08:52.364071 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 10 02:08:52.373642 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 02:08:52.374053 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 02:08:52.379868 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 10 02:08:52.380530 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 10 02:08:52.388257 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 02:08:52.388627 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 02:08:52.397854 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 10 02:08:52.398453 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 10 02:08:52.409744 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 02:08:52.410181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 02:08:52.421719 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 10 02:08:52.434674 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 10 02:08:52.450811 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 10 02:08:52.463066 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 10 02:08:52.470607 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 02:08:52.496426 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 10 02:08:52.509218 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 10 02:08:52.524223 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 10 02:08:52.534979 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 10 02:08:52.536437 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 10 02:08:52.547705 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 10 02:08:52.561835 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 10 02:08:52.577490 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 02:08:52.583506 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 10 02:08:52.594671 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 10 02:08:52.608212 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 10 02:08:52.611385 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 10 02:08:52.627453 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 10 02:08:52.629848 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 02:08:52.642873 systemd-journald[1211]: Time spent on flushing to /var/log/journal/aab7e496b3c64b909e52602b9d56b825 is 42.947ms for 1062 entries.
Mar 10 02:08:52.642873 systemd-journald[1211]: System Journal (/var/log/journal/aab7e496b3c64b909e52602b9d56b825) is 8M, max 195.6M, 187.6M free.
Mar 10 02:08:52.707402 systemd-journald[1211]: Received client request to flush runtime journal.
Mar 10 02:08:52.661387 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 10 02:08:52.678484 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 10 02:08:52.706659 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 10 02:08:52.719755 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 10 02:08:52.737353 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 10 02:08:52.747209 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 10 02:08:52.771103 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 10 02:08:52.777464 kernel: loop0: detected capacity change from 0 to 110984 Mar 10 02:08:52.793370 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 10 02:08:52.803933 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 10 02:08:52.846656 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 10 02:08:52.865647 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 10 02:08:52.890161 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 10 02:08:52.916920 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 10 02:08:52.921901 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 10 02:08:52.933359 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Mar 10 02:08:52.933405 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Mar 10 02:08:52.941515 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 10 02:08:52.947267 kernel: loop1: detected capacity change from 0 to 217752 Mar 10 02:08:53.072814 kernel: loop2: detected capacity change from 0 to 128560 Mar 10 02:08:53.212509 kernel: loop3: detected capacity change from 0 to 110984 Mar 10 02:08:53.296332 kernel: loop4: detected capacity change from 0 to 217752 Mar 10 02:08:53.423416 kernel: loop5: detected capacity change from 0 to 128560 Mar 10 02:08:53.511756 (sd-merge)[1270]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 10 02:08:53.516457 (sd-merge)[1270]: Merged extensions into '/usr'. Mar 10 02:08:53.531954 systemd[1]: Reload requested from client PID 1246 ('systemd-sysext') (unit systemd-sysext.service)... Mar 10 02:08:53.532000 systemd[1]: Reloading... Mar 10 02:08:53.672393 zram_generator::config[1293]: No configuration found. Mar 10 02:08:53.989954 ldconfig[1241]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 10 02:08:54.034059 systemd[1]: Reloading finished in 499 ms. Mar 10 02:08:54.090178 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 10 02:08:54.100561 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 10 02:08:54.109875 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 10 02:08:54.143381 systemd[1]: Starting ensure-sysext.service... Mar 10 02:08:54.161501 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 10 02:08:54.169168 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 02:08:54.194221 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)... Mar 10 02:08:54.194238 systemd[1]: Reloading... Mar 10 02:08:54.231692 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Mar 10 02:08:54.231740 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 10 02:08:54.233342 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 10 02:08:54.233749 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 10 02:08:54.238191 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 10 02:08:54.238646 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Mar 10 02:08:54.238788 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Mar 10 02:08:54.256211 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot. Mar 10 02:08:54.256228 systemd-tmpfiles[1335]: Skipping /boot Mar 10 02:08:54.256984 systemd-udevd[1336]: Using default interface naming scheme 'v255'. Mar 10 02:08:54.276072 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot. Mar 10 02:08:54.276113 systemd-tmpfiles[1335]: Skipping /boot Mar 10 02:08:54.302692 zram_generator::config[1365]: No configuration found. Mar 10 02:08:54.542534 kernel: mousedev: PS/2 mouse device common for all mice Mar 10 02:08:54.591345 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 10 02:08:54.594512 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 10 02:08:54.600673 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 10 02:08:54.600953 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 10 02:08:54.601208 kernel: ACPI: button: Power Button [PWRF] Mar 10 02:08:54.684726 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 10 02:08:54.685069 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Mar 10 02:08:54.691711 systemd[1]: Reloading finished in 496 ms. Mar 10 02:08:54.708080 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 10 02:08:54.733818 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 10 02:08:54.829868 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 02:08:54.832253 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 10 02:08:54.841541 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 10 02:08:54.848113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 02:08:54.858879 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 02:08:54.865948 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 10 02:08:54.873620 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 02:08:54.880409 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 02:08:54.884744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 02:08:54.886627 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 10 02:08:54.891410 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 10 02:08:54.892993 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 10 02:08:54.925805 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 10 02:08:54.941584 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 10 02:08:54.962743 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 10 02:08:54.976852 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 02:08:54.981129 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 02:08:54.986259 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 02:08:54.986864 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 10 02:08:54.992903 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 10 02:08:55.008553 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 10 02:08:55.021685 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 02:08:55.022204 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 02:08:55.042424 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 02:08:55.042723 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 02:08:55.062709 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 10 02:08:55.075118 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 10 02:08:55.086893 augenrules[1486]: No rules Mar 10 02:08:55.087989 systemd[1]: audit-rules.service: Deactivated successfully. Mar 10 02:08:55.088499 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 10 02:08:55.110908 systemd[1]: Finished ensure-sysext.service. Mar 10 02:08:55.134058 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Mar 10 02:08:55.134350 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 10 02:08:55.140782 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 10 02:08:55.164865 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 10 02:08:55.198463 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 10 02:08:55.206576 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 10 02:08:55.338665 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 10 02:08:55.356841 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 02:08:55.366562 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 10 02:08:55.378707 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 10 02:08:55.585507 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 10 02:08:55.989364 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 10 02:08:55.997971 systemd-resolved[1468]: Positive Trust Anchors: Mar 10 02:08:55.998048 systemd-resolved[1468]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 10 02:08:55.998088 systemd-resolved[1468]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 10 02:08:56.006781 systemd-resolved[1468]: Defaulting to hostname 'linux'. Mar 10 02:08:56.009962 systemd[1]: Reached target time-set.target - System Time Set. Mar 10 02:08:56.023517 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 10 02:08:56.024057 systemd-networkd[1464]: lo: Link UP Mar 10 02:08:56.024066 systemd-networkd[1464]: lo: Gained carrier Mar 10 02:08:56.028077 systemd-networkd[1464]: Enumeration completed Mar 10 02:08:56.030136 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 02:08:56.030245 systemd-networkd[1464]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 10 02:08:56.033133 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 10 02:08:56.034634 systemd-networkd[1464]: eth0: Link UP Mar 10 02:08:56.035588 systemd-networkd[1464]: eth0: Gained carrier Mar 10 02:08:56.035645 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 02:08:56.043541 systemd[1]: Reached target network.target - Network. Mar 10 02:08:56.062443 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 10 02:08:56.073401 systemd[1]: Reached target sysinit.target - System Initialization. Mar 10 02:08:56.079711 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 10 02:08:56.097742 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 10 02:08:56.110493 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 10 02:08:56.125663 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 10 02:08:56.135995 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 10 02:08:56.154081 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 10 02:08:56.161126 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 10 02:08:56.161196 systemd[1]: Reached target paths.target - Path Units. Mar 10 02:08:56.165448 systemd[1]: Reached target timers.target - Timer Units. Mar 10 02:08:56.167433 systemd-networkd[1464]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 10 02:08:56.168527 systemd-timesyncd[1500]: Network configuration changed, trying to establish connection. Mar 10 02:08:57.896611 systemd-resolved[1468]: Clock change detected. Flushing caches. Mar 10 02:08:57.896656 systemd-timesyncd[1500]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 10 02:08:57.896765 systemd-timesyncd[1500]: Initial clock synchronization to Tue 2026-03-10 02:08:57.896534 UTC. Mar 10 02:08:57.898445 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 10 02:08:57.911264 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 10 02:08:57.919230 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Mar 10 02:08:57.927529 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 10 02:08:57.938566 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 10 02:08:57.956858 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 10 02:08:57.962544 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 10 02:08:57.987813 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 10 02:08:58.005863 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 10 02:08:58.024923 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 10 02:08:58.045545 systemd[1]: Reached target sockets.target - Socket Units. Mar 10 02:08:58.051130 systemd[1]: Reached target basic.target - Basic System. Mar 10 02:08:58.059015 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 10 02:08:58.059076 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 10 02:08:58.063082 systemd[1]: Starting containerd.service - containerd container runtime... Mar 10 02:08:58.091128 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 10 02:08:58.111039 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 10 02:08:58.121804 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 10 02:08:58.134314 jq[1525]: false Mar 10 02:08:58.136117 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 10 02:08:58.141340 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Mar 10 02:08:58.146368 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 10 02:08:58.158125 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 10 02:08:58.178416 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 10 02:08:58.193141 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 10 02:08:58.204138 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 10 02:08:58.218783 extend-filesystems[1526]: Found /dev/vda6 Mar 10 02:08:58.232435 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing passwd entry cache Mar 10 02:08:58.225566 oslogin_cache_refresh[1527]: Refreshing passwd entry cache Mar 10 02:08:58.236360 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 10 02:08:58.246355 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 10 02:08:58.247187 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 10 02:08:58.250215 systemd[1]: Starting update-engine.service - Update Engine... Mar 10 02:08:58.255783 extend-filesystems[1526]: Found /dev/vda9 Mar 10 02:08:58.264860 extend-filesystems[1526]: Checking size of /dev/vda9 Mar 10 02:08:58.271785 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 10 02:08:58.279750 oslogin_cache_refresh[1527]: Failure getting users, quitting Mar 10 02:08:58.283522 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting users, quitting Mar 10 02:08:58.283522 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Mar 10 02:08:58.283522 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing group entry cache Mar 10 02:08:58.279774 oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 10 02:08:58.279841 oslogin_cache_refresh[1527]: Refreshing group entry cache Mar 10 02:08:58.305304 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting groups, quitting Mar 10 02:08:58.305304 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 10 02:08:58.305108 oslogin_cache_refresh[1527]: Failure getting groups, quitting Mar 10 02:08:58.305124 oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 10 02:08:58.325751 extend-filesystems[1526]: Resized partition /dev/vda9 Mar 10 02:08:58.348811 extend-filesystems[1552]: resize2fs 1.47.3 (8-Jul-2025) Mar 10 02:08:58.385137 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 10 02:08:58.345125 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 10 02:08:58.386575 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 10 02:08:58.403527 jq[1546]: true Mar 10 02:08:58.400829 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 10 02:08:58.401223 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 10 02:08:58.401677 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 10 02:08:58.402085 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 10 02:08:58.414533 systemd[1]: motdgen.service: Deactivated successfully. Mar 10 02:08:58.414886 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Mar 10 02:08:58.439441 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 10 02:08:58.439826 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 10 02:08:58.514892 (ntainerd)[1558]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 10 02:08:58.558586 tar[1556]: linux-amd64/LICENSE Mar 10 02:08:58.558586 tar[1556]: linux-amd64/helm Mar 10 02:08:58.581319 update_engine[1543]: I20260310 02:08:58.580590 1543 main.cc:92] Flatcar Update Engine starting Mar 10 02:08:58.644067 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 10 02:08:58.639250 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 10 02:08:58.638934 dbus-daemon[1523]: [system] SELinux support is enabled Mar 10 02:08:58.693625 update_engine[1543]: I20260310 02:08:58.646415 1543 update_check_scheduler.cc:74] Next update check in 9m4s Mar 10 02:08:58.696893 jq[1557]: true Mar 10 02:08:58.650507 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 10 02:08:58.650545 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 10 02:08:58.660149 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 10 02:08:58.660170 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 10 02:08:58.701067 extend-filesystems[1552]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 10 02:08:58.701067 extend-filesystems[1552]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 10 02:08:58.701067 extend-filesystems[1552]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 10 02:08:58.729397 extend-filesystems[1526]: Resized filesystem in /dev/vda9 Mar 10 02:08:58.704895 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 10 02:08:58.705326 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 10 02:08:58.823189 kernel: kvm_amd: TSC scaling supported Mar 10 02:08:58.823271 kernel: kvm_amd: Nested Virtualization enabled Mar 10 02:08:58.823295 kernel: kvm_amd: Nested Paging enabled Mar 10 02:08:58.820510 systemd[1]: Started update-engine.service - Update Engine. Mar 10 02:08:58.849365 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 10 02:08:58.849467 kernel: kvm_amd: PMU virtualization is disabled Mar 10 02:08:58.846749 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 10 02:08:58.889592 systemd-logind[1539]: Watching system buttons on /dev/input/event2 (Power Button) Mar 10 02:08:58.893839 systemd-logind[1539]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 10 02:08:58.894379 systemd-logind[1539]: New seat seat0. Mar 10 02:08:58.900445 systemd[1]: Started systemd-logind.service - User Login Management. Mar 10 02:08:58.935036 bash[1594]: Updated "/home/core/.ssh/authorized_keys" Mar 10 02:08:58.927866 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 10 02:08:58.948485 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 10 02:08:59.024056 sshd_keygen[1549]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 10 02:08:59.083396 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Mar 10 02:08:59.104527 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 10 02:08:59.166230 containerd[1558]: time="2026-03-10T02:08:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 10 02:08:59.171226 containerd[1558]: time="2026-03-10T02:08:59.169936602Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 10 02:08:59.192388 systemd[1]: issuegen.service: Deactivated successfully. Mar 10 02:08:59.192816 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 10 02:08:59.211657 locksmithd[1586]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 10 02:08:59.218790 containerd[1558]: time="2026-03-10T02:08:59.217321913Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="50.885µs" Mar 10 02:08:59.218790 containerd[1558]: time="2026-03-10T02:08:59.217646108Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 10 02:08:59.218790 containerd[1558]: time="2026-03-10T02:08:59.217674411Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 10 02:08:59.218790 containerd[1558]: time="2026-03-10T02:08:59.218159787Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 10 02:08:59.218790 containerd[1558]: time="2026-03-10T02:08:59.218293677Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 10 02:08:59.218790 containerd[1558]: time="2026-03-10T02:08:59.218328673Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 10 02:08:59.218790 containerd[1558]: 
time="2026-03-10T02:08:59.218620347Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 10 02:08:59.218790 containerd[1558]: time="2026-03-10T02:08:59.218637118Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 10 02:08:59.220513 containerd[1558]: time="2026-03-10T02:08:59.219201703Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 10 02:08:59.220513 containerd[1558]: time="2026-03-10T02:08:59.219341453Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 10 02:08:59.220513 containerd[1558]: time="2026-03-10T02:08:59.219768220Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 10 02:08:59.220513 containerd[1558]: time="2026-03-10T02:08:59.219784110Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 10 02:08:59.220513 containerd[1558]: time="2026-03-10T02:08:59.220073420Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 10 02:08:59.219641 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Mar 10 02:08:59.229521 containerd[1558]: time="2026-03-10T02:08:59.220859598Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 10 02:08:59.229521 containerd[1558]: time="2026-03-10T02:08:59.220940439Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 10 02:08:59.229521 containerd[1558]: time="2026-03-10T02:08:59.221087844Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 10 02:08:59.229521 containerd[1558]: time="2026-03-10T02:08:59.221124362Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 10 02:08:59.229521 containerd[1558]: time="2026-03-10T02:08:59.221592155Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 10 02:08:59.229521 containerd[1558]: time="2026-03-10T02:08:59.221765209Z" level=info msg="metadata content store policy set" policy=shared Mar 10 02:08:59.239001 kernel: EDAC MC: Ver: 3.0.0 Mar 10 02:08:59.256162 containerd[1558]: time="2026-03-10T02:08:59.256057505Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 10 02:08:59.256415 containerd[1558]: time="2026-03-10T02:08:59.256212003Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 10 02:08:59.256415 containerd[1558]: time="2026-03-10T02:08:59.256383754Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 10 02:08:59.256415 containerd[1558]: time="2026-03-10T02:08:59.256405564Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 10 02:08:59.256546 containerd[1558]: time="2026-03-10T02:08:59.256425372Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 10 02:08:59.256546 containerd[1558]: time="2026-03-10T02:08:59.256445759Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 10 02:08:59.256546 containerd[1558]: time="2026-03-10T02:08:59.256463743Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 10 02:08:59.256546 containerd[1558]: time="2026-03-10T02:08:59.256479723Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 10 02:08:59.256546 containerd[1558]: time="2026-03-10T02:08:59.256496013Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 10 02:08:59.256546 containerd[1558]: time="2026-03-10T02:08:59.256508767Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 10 02:08:59.256546 containerd[1558]: time="2026-03-10T02:08:59.256521281Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 10 02:08:59.256546 containerd[1558]: time="2026-03-10T02:08:59.256538452Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 10 02:08:59.256799 containerd[1558]: time="2026-03-10T02:08:59.256763092Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 10 02:08:59.256799 containerd[1558]: time="2026-03-10T02:08:59.256791415Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 10 02:08:59.256852 containerd[1558]: time="2026-03-10T02:08:59.256814909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 10 02:08:59.256852 containerd[1558]: time="2026-03-10T02:08:59.256831199Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 10 02:08:59.256852 containerd[1558]: time="2026-03-10T02:08:59.256846307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 10 02:08:59.256918 containerd[1558]: time="2026-03-10T02:08:59.256860193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 10 02:08:59.256918 containerd[1558]: time="2026-03-10T02:08:59.256875271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 10 02:08:59.256918 containerd[1558]: time="2026-03-10T02:08:59.256888406Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 10 02:08:59.256918 containerd[1558]: time="2026-03-10T02:08:59.256902251Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 10 02:08:59.256918 containerd[1558]: time="2026-03-10T02:08:59.256915587Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 10 02:08:59.257151 containerd[1558]: time="2026-03-10T02:08:59.256929042Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 10 02:08:59.257910 containerd[1558]: time="2026-03-10T02:08:59.257397316Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 10 02:08:59.257910 containerd[1558]: time="2026-03-10T02:08:59.257425989Z" level=info msg="Start snapshots syncer" Mar 10 02:08:59.257910 containerd[1558]: time="2026-03-10T02:08:59.257452890Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 10 02:08:59.259829 containerd[1558]: time="2026-03-10T02:08:59.258895844Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 10 02:08:59.259829 containerd[1558]: time="2026-03-10T02:08:59.259066342Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 10 02:08:59.260163 containerd[1558]: time="2026-03-10T02:08:59.259125963Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 10 02:08:59.260163 containerd[1558]: time="2026-03-10T02:08:59.259276455Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 10 02:08:59.260163 containerd[1558]: time="2026-03-10T02:08:59.259302683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 10 02:08:59.260163 containerd[1558]: time="2026-03-10T02:08:59.259317881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 10 02:08:59.260163 containerd[1558]: time="2026-03-10T02:08:59.259330796Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 10 02:08:59.260163 containerd[1558]: time="2026-03-10T02:08:59.259345212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 10 02:08:59.260163 containerd[1558]: time="2026-03-10T02:08:59.259358788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 10 02:08:59.260163 containerd[1558]: time="2026-03-10T02:08:59.259375600Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 10 02:08:59.260163 containerd[1558]: time="2026-03-10T02:08:59.259406417Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 10 02:08:59.260163 containerd[1558]: time="2026-03-10T02:08:59.259423478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 10 02:08:59.260163 containerd[1558]: time="2026-03-10T02:08:59.259443556Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 10 02:08:59.260163 containerd[1558]: time="2026-03-10T02:08:59.259483541Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 10 02:08:59.260163 containerd[1558]: time="2026-03-10T02:08:59.259504580Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 10 02:08:59.260163 containerd[1558]: time="2026-03-10T02:08:59.259516322Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 10 02:08:59.260492 containerd[1558]: time="2026-03-10T02:08:59.259528405Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 10 02:08:59.260492 containerd[1558]: time="2026-03-10T02:08:59.259544805Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 10 02:08:59.260492 containerd[1558]: time="2026-03-10T02:08:59.259566866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 10 02:08:59.260492 containerd[1558]: time="2026-03-10T02:08:59.259592374Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 10 02:08:59.260492 containerd[1558]: time="2026-03-10T02:08:59.259614896Z" level=info msg="runtime interface created" Mar 10 02:08:59.260492 containerd[1558]: time="2026-03-10T02:08:59.259622129Z" level=info msg="created NRI interface" Mar 10 02:08:59.260492 containerd[1558]: time="2026-03-10T02:08:59.259632388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 10 02:08:59.260492 containerd[1558]: time="2026-03-10T02:08:59.259645683Z" level=info msg="Connect containerd service" Mar 10 02:08:59.260492 containerd[1558]: time="2026-03-10T02:08:59.259667835Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 10 02:08:59.264835 
containerd[1558]: time="2026-03-10T02:08:59.262865275Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 10 02:08:59.270867 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 10 02:08:59.312226 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 10 02:08:59.320390 tar[1556]: linux-amd64/README.md Mar 10 02:08:59.322404 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 10 02:08:59.328346 systemd[1]: Reached target getty.target - Login Prompts. Mar 10 02:08:59.363387 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 10 02:08:59.401287 systemd-networkd[1464]: eth0: Gained IPv6LL Mar 10 02:08:59.415634 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 10 02:08:59.440061 systemd[1]: Reached target network-online.target - Network is Online. Mar 10 02:08:59.462124 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 10 02:08:59.484125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 02:08:59.512623 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 10 02:08:59.544028 containerd[1558]: time="2026-03-10T02:08:59.543925864Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 10 02:08:59.544204 containerd[1558]: time="2026-03-10T02:08:59.544185729Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 10 02:08:59.544304 containerd[1558]: time="2026-03-10T02:08:59.544287740Z" level=info msg="Start subscribing containerd event" Mar 10 02:08:59.544635 containerd[1558]: time="2026-03-10T02:08:59.544398757Z" level=info msg="Start recovering state" Mar 10 02:08:59.545073 containerd[1558]: time="2026-03-10T02:08:59.545053930Z" level=info msg="Start event monitor" Mar 10 02:08:59.545390 containerd[1558]: time="2026-03-10T02:08:59.545169506Z" level=info msg="Start cni network conf syncer for default" Mar 10 02:08:59.545507 containerd[1558]: time="2026-03-10T02:08:59.545489714Z" level=info msg="Start streaming server" Mar 10 02:08:59.546396 containerd[1558]: time="2026-03-10T02:08:59.546374316Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 10 02:08:59.546550 containerd[1558]: time="2026-03-10T02:08:59.546533653Z" level=info msg="runtime interface starting up..." Mar 10 02:08:59.547433 containerd[1558]: time="2026-03-10T02:08:59.547413876Z" level=info msg="starting plugins..." Mar 10 02:08:59.549258 containerd[1558]: time="2026-03-10T02:08:59.549131636Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 10 02:08:59.549520 systemd[1]: Started containerd.service - containerd container runtime. Mar 10 02:08:59.561861 containerd[1558]: time="2026-03-10T02:08:59.560106355Z" level=info msg="containerd successfully booted in 0.394770s" Mar 10 02:08:59.602534 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 10 02:08:59.614662 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 10 02:08:59.615138 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 10 02:08:59.627653 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 10 02:09:00.040797 kernel: hrtimer: interrupt took 3232704 ns Mar 10 02:09:00.956746 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Mar 10 02:09:00.968606 systemd[1]: Started sshd@0-10.0.0.112:22-10.0.0.1:55922.service - OpenSSH per-connection server daemon (10.0.0.1:55922). Mar 10 02:09:01.256412 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 55922 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY Mar 10 02:09:01.259075 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:09:01.280545 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 10 02:09:01.291146 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 10 02:09:01.313210 systemd-logind[1539]: New session 1 of user core. Mar 10 02:09:01.341916 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 10 02:09:01.358641 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 10 02:09:01.396745 (systemd)[1662]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 10 02:09:01.409330 systemd-logind[1539]: New session c1 of user core. Mar 10 02:09:01.482901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 02:09:01.497307 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 10 02:09:01.517827 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 02:09:01.682051 systemd[1662]: Queued start job for default target default.target. Mar 10 02:09:01.693897 systemd[1662]: Created slice app.slice - User Application Slice. Mar 10 02:09:01.694021 systemd[1662]: Reached target paths.target - Paths. Mar 10 02:09:01.694111 systemd[1662]: Reached target timers.target - Timers. Mar 10 02:09:01.696249 systemd[1662]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 10 02:09:01.720226 systemd[1662]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Mar 10 02:09:01.720931 systemd[1662]: Reached target sockets.target - Sockets. Mar 10 02:09:01.721148 systemd[1662]: Reached target basic.target - Basic System. Mar 10 02:09:01.721231 systemd[1662]: Reached target default.target - Main User Target. Mar 10 02:09:01.721329 systemd[1662]: Startup finished in 294ms. Mar 10 02:09:01.722584 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 10 02:09:01.746055 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 10 02:09:01.755207 systemd[1]: Startup finished in 4.821s (kernel) + 10.188s (initrd) + 10.106s (userspace) = 25.117s. Mar 10 02:09:01.821732 systemd[1]: Started sshd@1-10.0.0.112:22-10.0.0.1:55930.service - OpenSSH per-connection server daemon (10.0.0.1:55930). Mar 10 02:09:01.948921 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 55930 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY Mar 10 02:09:01.951231 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:09:01.963009 systemd-logind[1539]: New session 2 of user core. Mar 10 02:09:01.983292 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 10 02:09:02.016235 sshd[1692]: Connection closed by 10.0.0.1 port 55930 Mar 10 02:09:02.014516 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Mar 10 02:09:02.037336 systemd[1]: sshd@1-10.0.0.112:22-10.0.0.1:55930.service: Deactivated successfully. Mar 10 02:09:02.041587 systemd[1]: session-2.scope: Deactivated successfully. Mar 10 02:09:02.046908 systemd-logind[1539]: Session 2 logged out. Waiting for processes to exit. Mar 10 02:09:02.049928 systemd[1]: Started sshd@2-10.0.0.112:22-10.0.0.1:55940.service - OpenSSH per-connection server daemon (10.0.0.1:55940). Mar 10 02:09:02.057528 systemd-logind[1539]: Removed session 2. 
Mar 10 02:09:02.132499 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 55940 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY Mar 10 02:09:02.138084 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:09:02.155499 systemd-logind[1539]: New session 3 of user core. Mar 10 02:09:02.187533 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 10 02:09:02.229230 sshd[1701]: Connection closed by 10.0.0.1 port 55940 Mar 10 02:09:02.230137 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Mar 10 02:09:02.255583 systemd[1]: sshd@2-10.0.0.112:22-10.0.0.1:55940.service: Deactivated successfully. Mar 10 02:09:02.258643 systemd[1]: session-3.scope: Deactivated successfully. Mar 10 02:09:02.261492 systemd-logind[1539]: Session 3 logged out. Waiting for processes to exit. Mar 10 02:09:02.270808 systemd[1]: Started sshd@3-10.0.0.112:22-10.0.0.1:55944.service - OpenSSH per-connection server daemon (10.0.0.1:55944). Mar 10 02:09:02.272485 systemd-logind[1539]: Removed session 3. Mar 10 02:09:02.392019 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 55944 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY Mar 10 02:09:02.397742 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:09:02.407734 systemd-logind[1539]: New session 4 of user core. Mar 10 02:09:02.416214 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 10 02:09:02.464896 sshd[1710]: Connection closed by 10.0.0.1 port 55944 Mar 10 02:09:02.463152 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Mar 10 02:09:02.490540 systemd[1]: sshd@3-10.0.0.112:22-10.0.0.1:55944.service: Deactivated successfully. Mar 10 02:09:02.496228 systemd[1]: session-4.scope: Deactivated successfully. Mar 10 02:09:02.500280 systemd-logind[1539]: Session 4 logged out. Waiting for processes to exit. 
Mar 10 02:09:02.506165 systemd[1]: Started sshd@4-10.0.0.112:22-10.0.0.1:55956.service - OpenSSH per-connection server daemon (10.0.0.1:55956). Mar 10 02:09:02.510486 systemd-logind[1539]: Removed session 4. Mar 10 02:09:02.608630 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 55956 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY Mar 10 02:09:02.609409 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:09:02.625690 systemd-logind[1539]: New session 5 of user core. Mar 10 02:09:02.631080 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 10 02:09:02.676637 kubelet[1673]: E0310 02:09:02.675400 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 02:09:02.684602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 02:09:02.684905 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 02:09:02.685443 systemd[1]: kubelet.service: Consumed 1.234s CPU time, 257.8M memory peak. Mar 10 02:09:02.694645 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 10 02:09:02.695214 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 02:09:02.727305 sudo[1720]: pam_unix(sudo:session): session closed for user root Mar 10 02:09:02.731502 sshd[1719]: Connection closed by 10.0.0.1 port 55956 Mar 10 02:09:02.732071 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Mar 10 02:09:02.753147 systemd[1]: sshd@4-10.0.0.112:22-10.0.0.1:55956.service: Deactivated successfully. Mar 10 02:09:02.760842 systemd[1]: session-5.scope: Deactivated successfully. 
Mar 10 02:09:02.764545 systemd-logind[1539]: Session 5 logged out. Waiting for processes to exit. Mar 10 02:09:02.779769 systemd[1]: Started sshd@5-10.0.0.112:22-10.0.0.1:55972.service - OpenSSH per-connection server daemon (10.0.0.1:55972). Mar 10 02:09:02.784738 systemd-logind[1539]: Removed session 5. Mar 10 02:09:02.870795 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 55972 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY Mar 10 02:09:02.871825 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:09:02.890660 systemd-logind[1539]: New session 6 of user core. Mar 10 02:09:02.905200 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 10 02:09:02.932781 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 10 02:09:02.933288 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 02:09:02.946483 sudo[1732]: pam_unix(sudo:session): session closed for user root Mar 10 02:09:02.956266 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 10 02:09:02.956767 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 02:09:03.018657 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 10 02:09:03.159389 augenrules[1754]: No rules Mar 10 02:09:03.160840 systemd[1]: audit-rules.service: Deactivated successfully. Mar 10 02:09:03.161400 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 10 02:09:03.164912 sudo[1731]: pam_unix(sudo:session): session closed for user root Mar 10 02:09:03.168176 sshd[1730]: Connection closed by 10.0.0.1 port 55972 Mar 10 02:09:03.171830 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Mar 10 02:09:03.190094 systemd[1]: sshd@5-10.0.0.112:22-10.0.0.1:55972.service: Deactivated successfully. 
Mar 10 02:09:03.196475 systemd[1]: session-6.scope: Deactivated successfully. Mar 10 02:09:03.204836 systemd-logind[1539]: Session 6 logged out. Waiting for processes to exit. Mar 10 02:09:03.214187 systemd[1]: Started sshd@6-10.0.0.112:22-10.0.0.1:55982.service - OpenSSH per-connection server daemon (10.0.0.1:55982). Mar 10 02:09:03.218410 systemd-logind[1539]: Removed session 6. Mar 10 02:09:03.300417 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 55982 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY Mar 10 02:09:03.303385 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:09:03.329352 systemd-logind[1539]: New session 7 of user core. Mar 10 02:09:03.343172 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 10 02:09:03.372294 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 10 02:09:03.380407 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 02:09:04.141119 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 10 02:09:04.159844 (dockerd)[1787]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 10 02:09:04.793613 dockerd[1787]: time="2026-03-10T02:09:04.793524025Z" level=info msg="Starting up" Mar 10 02:09:04.796627 dockerd[1787]: time="2026-03-10T02:09:04.795510737Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 10 02:09:04.833556 dockerd[1787]: time="2026-03-10T02:09:04.832442646Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 10 02:09:05.312772 dockerd[1787]: time="2026-03-10T02:09:05.312588970Z" level=info msg="Loading containers: start." 
Mar 10 02:09:05.347345 kernel: Initializing XFRM netlink socket Mar 10 02:09:06.358474 systemd-networkd[1464]: docker0: Link UP Mar 10 02:09:06.383292 dockerd[1787]: time="2026-03-10T02:09:06.383117639Z" level=info msg="Loading containers: done." Mar 10 02:09:06.428593 dockerd[1787]: time="2026-03-10T02:09:06.427903419Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 10 02:09:06.428593 dockerd[1787]: time="2026-03-10T02:09:06.428094044Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 10 02:09:06.428593 dockerd[1787]: time="2026-03-10T02:09:06.428203809Z" level=info msg="Initializing buildkit" Mar 10 02:09:06.519842 dockerd[1787]: time="2026-03-10T02:09:06.518829685Z" level=info msg="Completed buildkit initialization" Mar 10 02:09:06.542809 dockerd[1787]: time="2026-03-10T02:09:06.542572945Z" level=info msg="Daemon has completed initialization" Mar 10 02:09:06.542809 dockerd[1787]: time="2026-03-10T02:09:06.542748001Z" level=info msg="API listen on /run/docker.sock" Mar 10 02:09:06.542908 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 10 02:09:07.639402 containerd[1558]: time="2026-03-10T02:09:07.638443223Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 10 02:09:08.561051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2934642643.mount: Deactivated successfully. 
Mar 10 02:09:12.579833 containerd[1558]: time="2026-03-10T02:09:12.578296210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:12.583642 containerd[1558]: time="2026-03-10T02:09:12.583431102Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 10 02:09:12.587012 containerd[1558]: time="2026-03-10T02:09:12.586808527Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:12.591387 containerd[1558]: time="2026-03-10T02:09:12.590904738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:12.592222 containerd[1558]: time="2026-03-10T02:09:12.592146703Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 4.953624193s" Mar 10 02:09:12.592222 containerd[1558]: time="2026-03-10T02:09:12.592209230Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 10 02:09:12.593126 containerd[1558]: time="2026-03-10T02:09:12.593056662Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 10 02:09:12.807571 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 10 02:09:12.812704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 02:09:13.231373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 02:09:13.257648 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 02:09:13.400416 kubelet[2073]: E0310 02:09:13.398920 2073 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 02:09:13.427238 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 02:09:13.428908 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 02:09:13.433895 systemd[1]: kubelet.service: Consumed 306ms CPU time, 110.6M memory peak. 
Mar 10 02:09:15.618167 containerd[1558]: time="2026-03-10T02:09:15.617900673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:15.621628 containerd[1558]: time="2026-03-10T02:09:15.621431024Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 10 02:09:15.626434 containerd[1558]: time="2026-03-10T02:09:15.625781453Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:15.635414 containerd[1558]: time="2026-03-10T02:09:15.635325419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:15.638134 containerd[1558]: time="2026-03-10T02:09:15.636389366Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 3.043268364s" Mar 10 02:09:15.638134 containerd[1558]: time="2026-03-10T02:09:15.636452043Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 10 02:09:15.639189 containerd[1558]: time="2026-03-10T02:09:15.638822188Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 10 02:09:17.455003 containerd[1558]: time="2026-03-10T02:09:17.453852911Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:17.459644 containerd[1558]: time="2026-03-10T02:09:17.459486318Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 10 02:09:17.465408 containerd[1558]: time="2026-03-10T02:09:17.465277740Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:17.474923 containerd[1558]: time="2026-03-10T02:09:17.474102851Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.835216825s" Mar 10 02:09:17.474923 containerd[1558]: time="2026-03-10T02:09:17.474149689Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 10 02:09:17.474923 containerd[1558]: time="2026-03-10T02:09:17.474177588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:17.478116 containerd[1558]: time="2026-03-10T02:09:17.477795353Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 10 02:09:19.651574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564387775.mount: Deactivated successfully. 
Mar 10 02:09:21.578999 containerd[1558]: time="2026-03-10T02:09:21.578216247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:21.583418 containerd[1558]: time="2026-03-10T02:09:21.582865798Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 10 02:09:21.586222 containerd[1558]: time="2026-03-10T02:09:21.585186311Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:21.591276 containerd[1558]: time="2026-03-10T02:09:21.591162598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:21.592774 containerd[1558]: time="2026-03-10T02:09:21.592626651Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 4.11477342s" Mar 10 02:09:21.592774 containerd[1558]: time="2026-03-10T02:09:21.592699087Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 10 02:09:21.594463 containerd[1558]: time="2026-03-10T02:09:21.594342785Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 10 02:09:22.283267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount829119592.mount: Deactivated successfully. Mar 10 02:09:23.560529 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 10 02:09:23.572517 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 02:09:24.064051 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 02:09:24.099833 (kubelet)[2161]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 02:09:24.753181 kubelet[2161]: E0310 02:09:24.752898 2161 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 02:09:24.758874 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 02:09:24.759614 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 02:09:24.760329 systemd[1]: kubelet.service: Consumed 894ms CPU time, 110M memory peak. 
Mar 10 02:09:26.922394 containerd[1558]: time="2026-03-10T02:09:26.922286574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:26.924926 containerd[1558]: time="2026-03-10T02:09:26.924575608Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 10 02:09:26.928192 containerd[1558]: time="2026-03-10T02:09:26.927828150Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:26.933895 containerd[1558]: time="2026-03-10T02:09:26.933687960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:26.935440 containerd[1558]: time="2026-03-10T02:09:26.935125428Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 5.34072162s" Mar 10 02:09:26.935440 containerd[1558]: time="2026-03-10T02:09:26.935183176Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 10 02:09:26.936003 containerd[1558]: time="2026-03-10T02:09:26.935873382Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 10 02:09:27.554403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount64184970.mount: Deactivated successfully. 
Mar 10 02:09:27.578248 containerd[1558]: time="2026-03-10T02:09:27.578005576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:27.581084 containerd[1558]: time="2026-03-10T02:09:27.580834149Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 10 02:09:27.588190 containerd[1558]: time="2026-03-10T02:09:27.586227958Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:27.597821 containerd[1558]: time="2026-03-10T02:09:27.594807196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:27.597821 containerd[1558]: time="2026-03-10T02:09:27.595703638Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 659.796743ms" Mar 10 02:09:27.597821 containerd[1558]: time="2026-03-10T02:09:27.597479495Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 10 02:09:27.601160 containerd[1558]: time="2026-03-10T02:09:27.601054955Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 10 02:09:28.744913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount428692668.mount: Deactivated successfully. 
Mar 10 02:09:32.751519 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1496764972 wd_nsec: 1496764893 Mar 10 02:09:34.808193 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 10 02:09:34.812368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 02:09:35.445871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 02:09:35.460664 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 02:09:35.553362 containerd[1558]: time="2026-03-10T02:09:35.553234279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:35.559329 containerd[1558]: time="2026-03-10T02:09:35.559281175Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 10 02:09:35.563177 containerd[1558]: time="2026-03-10T02:09:35.562839901Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:35.571213 containerd[1558]: time="2026-03-10T02:09:35.571140573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:09:35.572563 containerd[1558]: time="2026-03-10T02:09:35.572441261Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 7.971314263s" Mar 10 02:09:35.572563 containerd[1558]: 
time="2026-03-10T02:09:35.572477769Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 10 02:09:37.451663 kubelet[2238]: E0310 02:09:37.451504 2238 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 02:09:37.457479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 02:09:37.457795 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 02:09:37.458782 systemd[1]: kubelet.service: Consumed 3.915s CPU time, 110.7M memory peak. Mar 10 02:09:40.083485 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 02:09:40.083810 systemd[1]: kubelet.service: Consumed 3.915s CPU time, 110.7M memory peak. Mar 10 02:09:40.088129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 02:09:40.144604 systemd[1]: Reload requested from client PID 2284 ('systemctl') (unit session-7.scope)... Mar 10 02:09:40.144873 systemd[1]: Reloading... Mar 10 02:09:40.269470 zram_generator::config[2330]: No configuration found. Mar 10 02:09:40.654382 systemd[1]: Reloading finished in 508 ms. Mar 10 02:09:40.767453 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 10 02:09:40.767613 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 10 02:09:40.768107 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 02:09:40.768164 systemd[1]: kubelet.service: Consumed 175ms CPU time, 98.2M memory peak. Mar 10 02:09:40.771369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 10 02:09:41.104009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 02:09:41.132067 (kubelet)[2375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 10 02:09:41.274020 kubelet[2375]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 10 02:09:41.578644 kubelet[2375]: I0310 02:09:41.578213 2375 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 10 02:09:41.578644 kubelet[2375]: I0310 02:09:41.578379 2375 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 10 02:09:41.578842 kubelet[2375]: I0310 02:09:41.578660 2375 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 10 02:09:41.578842 kubelet[2375]: I0310 02:09:41.578669 2375 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 10 02:09:41.579163 kubelet[2375]: I0310 02:09:41.579104 2375 server.go:951] "Client rotation is on, will bootstrap in background" Mar 10 02:09:41.665045 kubelet[2375]: I0310 02:09:41.664288 2375 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 10 02:09:41.665045 kubelet[2375]: E0310 02:09:41.664927 2375 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 10 02:09:41.673064 kubelet[2375]: I0310 02:09:41.672909 2375 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 10 02:09:41.682454 kubelet[2375]: I0310 02:09:41.682232 2375 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 10 02:09:41.683502 kubelet[2375]: I0310 02:09:41.683399 2375 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 10 02:09:41.683830 kubelet[2375]: I0310 02:09:41.683469 2375 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 10 02:09:41.683830 kubelet[2375]: I0310 02:09:41.683728 2375 topology_manager.go:143] "Creating topology manager with none policy" Mar 10 02:09:41.683830 
kubelet[2375]: I0310 02:09:41.683743 2375 container_manager_linux.go:308] "Creating device plugin manager" Mar 10 02:09:41.684141 kubelet[2375]: I0310 02:09:41.683873 2375 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 10 02:09:41.689187 kubelet[2375]: I0310 02:09:41.688292 2375 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 10 02:09:41.689187 kubelet[2375]: I0310 02:09:41.688734 2375 kubelet.go:482] "Attempting to sync node with API server" Mar 10 02:09:41.689187 kubelet[2375]: I0310 02:09:41.688753 2375 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 10 02:09:41.689187 kubelet[2375]: I0310 02:09:41.688783 2375 kubelet.go:394] "Adding apiserver pod source" Mar 10 02:09:41.689187 kubelet[2375]: I0310 02:09:41.688795 2375 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 10 02:09:41.699611 kubelet[2375]: I0310 02:09:41.699579 2375 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 10 02:09:41.703209 kubelet[2375]: I0310 02:09:41.703176 2375 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 10 02:09:41.703393 kubelet[2375]: I0310 02:09:41.703218 2375 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 10 02:09:41.703393 kubelet[2375]: W0310 02:09:41.703295 2375 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 10 02:09:41.707367 kubelet[2375]: I0310 02:09:41.707305 2375 server.go:1257] "Started kubelet" Mar 10 02:09:41.709795 kubelet[2375]: I0310 02:09:41.707497 2375 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 10 02:09:41.709795 kubelet[2375]: I0310 02:09:41.707563 2375 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 10 02:09:41.709795 kubelet[2375]: I0310 02:09:41.708233 2375 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 10 02:09:41.709795 kubelet[2375]: I0310 02:09:41.708300 2375 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 10 02:09:41.709795 kubelet[2375]: I0310 02:09:41.709466 2375 server.go:317] "Adding debug handlers to kubelet server" Mar 10 02:09:41.710279 kubelet[2375]: I0310 02:09:41.710081 2375 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 10 02:09:41.711338 kubelet[2375]: I0310 02:09:41.711195 2375 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 10 02:09:41.712298 kubelet[2375]: I0310 02:09:41.712229 2375 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 10 02:09:41.712635 kubelet[2375]: E0310 02:09:41.712494 2375 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 02:09:41.713032 kubelet[2375]: I0310 02:09:41.712910 2375 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 10 02:09:41.714520 kubelet[2375]: I0310 02:09:41.713038 2375 reconciler.go:29] "Reconciler: start to sync state" Mar 10 02:09:41.714520 kubelet[2375]: E0310 02:09:41.713593 2375 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.112:6443: connect: connection refused" interval="200ms" Mar 10 02:09:41.717740 kubelet[2375]: I0310 02:09:41.717533 2375 factory.go:223] Registration of the systemd container factory successfully Mar 10 02:09:41.717740 kubelet[2375]: I0310 02:09:41.717665 2375 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 10 02:09:41.719092 kubelet[2375]: E0310 02:09:41.718873 2375 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 10 02:09:41.719092 kubelet[2375]: E0310 02:09:41.717182 2375 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.112:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.112:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189b58dc58db72f6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 02:09:41.707248374 +0000 UTC m=+0.564146500,LastTimestamp:2026-03-10 02:09:41.707248374 +0000 UTC m=+0.564146500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 02:09:41.721823 kubelet[2375]: I0310 02:09:41.721769 2375 factory.go:223] Registration of the containerd container factory successfully Mar 10 02:09:41.746427 kubelet[2375]: I0310 02:09:41.746363 2375 cpu_manager.go:225] "Starting" policy="none" Mar 10 02:09:41.746427 kubelet[2375]: I0310 02:09:41.746408 2375 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 10 02:09:41.746427 kubelet[2375]: I0310 02:09:41.746430 2375 state_mem.go:41] 
"Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 10 02:09:41.755174 kubelet[2375]: I0310 02:09:41.754625 2375 policy_none.go:50] "Start" Mar 10 02:09:41.755174 kubelet[2375]: I0310 02:09:41.754667 2375 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 10 02:09:41.755174 kubelet[2375]: I0310 02:09:41.754719 2375 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 10 02:09:41.759998 kubelet[2375]: I0310 02:09:41.759553 2375 policy_none.go:44] "Start" Mar 10 02:09:41.764771 kubelet[2375]: I0310 02:09:41.764639 2375 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 10 02:09:41.770006 kubelet[2375]: I0310 02:09:41.769925 2375 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 10 02:09:41.770286 kubelet[2375]: I0310 02:09:41.770270 2375 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 10 02:09:41.771041 kubelet[2375]: I0310 02:09:41.770612 2375 kubelet.go:2501] "Starting kubelet main sync loop" Mar 10 02:09:41.771041 kubelet[2375]: E0310 02:09:41.770669 2375 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 10 02:09:41.774642 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 10 02:09:41.794915 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 10 02:09:41.804476 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 10 02:09:41.813429 kubelet[2375]: E0310 02:09:41.813332 2375 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 02:09:41.830873 kubelet[2375]: E0310 02:09:41.828572 2375 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 10 02:09:41.830873 kubelet[2375]: I0310 02:09:41.828872 2375 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 10 02:09:41.830873 kubelet[2375]: I0310 02:09:41.828886 2375 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 10 02:09:41.830873 kubelet[2375]: I0310 02:09:41.829821 2375 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 10 02:09:41.835316 kubelet[2375]: E0310 02:09:41.834015 2375 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 10 02:09:41.836994 kubelet[2375]: E0310 02:09:41.835731 2375 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 10 02:09:41.900932 systemd[1]: Created slice kubepods-burstable-pod1b5e52719ea0892ff3d5bd4e3fcecbf6.slice - libcontainer container kubepods-burstable-pod1b5e52719ea0892ff3d5bd4e3fcecbf6.slice. 
Mar 10 02:09:41.914663 kubelet[2375]: E0310 02:09:41.914517 2375 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="400ms" Mar 10 02:09:41.934496 kubelet[2375]: I0310 02:09:41.934406 2375 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 10 02:09:41.935110 kubelet[2375]: E0310 02:09:41.934909 2375 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Mar 10 02:09:41.939240 kubelet[2375]: E0310 02:09:41.939143 2375 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 02:09:41.944149 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. Mar 10 02:09:41.962177 kubelet[2375]: E0310 02:09:41.961918 2375 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 02:09:41.969124 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. 
Mar 10 02:09:41.975089 kubelet[2375]: E0310 02:09:41.974666 2375 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 02:09:42.015008 kubelet[2375]: I0310 02:09:42.014471 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b5e52719ea0892ff3d5bd4e3fcecbf6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b5e52719ea0892ff3d5bd4e3fcecbf6\") " pod="kube-system/kube-apiserver-localhost" Mar 10 02:09:42.015008 kubelet[2375]: I0310 02:09:42.014532 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b5e52719ea0892ff3d5bd4e3fcecbf6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1b5e52719ea0892ff3d5bd4e3fcecbf6\") " pod="kube-system/kube-apiserver-localhost" Mar 10 02:09:42.015008 kubelet[2375]: I0310 02:09:42.014552 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 02:09:42.015008 kubelet[2375]: I0310 02:09:42.014565 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 02:09:42.015008 kubelet[2375]: I0310 02:09:42.014587 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b5e52719ea0892ff3d5bd4e3fcecbf6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b5e52719ea0892ff3d5bd4e3fcecbf6\") " pod="kube-system/kube-apiserver-localhost" Mar 10 02:09:42.015294 kubelet[2375]: I0310 02:09:42.014601 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 02:09:42.015294 kubelet[2375]: I0310 02:09:42.014613 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 02:09:42.015294 kubelet[2375]: I0310 02:09:42.014625 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 02:09:42.015294 kubelet[2375]: I0310 02:09:42.014636 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 10 02:09:42.138066 kubelet[2375]: I0310 02:09:42.137897 2375 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 10 02:09:42.138438 kubelet[2375]: E0310 
02:09:42.138378 2375 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Mar 10 02:09:42.249329 kubelet[2375]: E0310 02:09:42.248727 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:09:42.250577 containerd[1558]: time="2026-03-10T02:09:42.250199414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1b5e52719ea0892ff3d5bd4e3fcecbf6,Namespace:kube-system,Attempt:0,}" Mar 10 02:09:42.268471 kubelet[2375]: E0310 02:09:42.267257 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:09:42.268596 containerd[1558]: time="2026-03-10T02:09:42.268419897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 10 02:09:42.279002 kubelet[2375]: E0310 02:09:42.278926 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:09:42.280010 containerd[1558]: time="2026-03-10T02:09:42.279756996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 10 02:09:42.316003 kubelet[2375]: E0310 02:09:42.315416 2375 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="800ms" Mar 10 02:09:42.540536 kubelet[2375]: 
I0310 02:09:42.540282 2375 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 10 02:09:42.540818 kubelet[2375]: E0310 02:09:42.540764 2375 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Mar 10 02:09:42.828032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount234969.mount: Deactivated successfully. Mar 10 02:09:42.844597 containerd[1558]: time="2026-03-10T02:09:42.844243989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 02:09:42.852056 containerd[1558]: time="2026-03-10T02:09:42.851910030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 10 02:09:42.857114 containerd[1558]: time="2026-03-10T02:09:42.856156266Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 02:09:42.865971 containerd[1558]: time="2026-03-10T02:09:42.865529113Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 02:09:42.869231 containerd[1558]: time="2026-03-10T02:09:42.869105353Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 02:09:42.872167 containerd[1558]: time="2026-03-10T02:09:42.872118869Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 10 02:09:42.876006 containerd[1558]: 
time="2026-03-10T02:09:42.875534686Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 10 02:09:42.877783 containerd[1558]: time="2026-03-10T02:09:42.877653253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 02:09:42.880309 containerd[1558]: time="2026-03-10T02:09:42.880264643Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 608.344605ms" Mar 10 02:09:42.882415 containerd[1558]: time="2026-03-10T02:09:42.882344129Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 627.122463ms" Mar 10 02:09:42.899446 containerd[1558]: time="2026-03-10T02:09:42.899313817Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 615.067651ms" Mar 10 02:09:42.940255 containerd[1558]: time="2026-03-10T02:09:42.940124657Z" level=info msg="connecting to shim e87299fab9581405b681e7e1faf0f5e1166122e877e1c643c063974da9ba3dca" address="unix:///run/containerd/s/6d1677006f3f2524658983435f2ed3a5d4dbd7fba1187a88f0e39cb57622d657" 
namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:09:42.950670 containerd[1558]: time="2026-03-10T02:09:42.950620912Z" level=info msg="connecting to shim 6c42fa2a80c4877a633f32f433123fe8ff3b7c3717c2dd8db2863e0b5f35e24b" address="unix:///run/containerd/s/ef21e3a608b7f44150940d2e501c844bb3cf47b35808f53b51de4d975680cc19" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:09:42.966024 containerd[1558]: time="2026-03-10T02:09:42.964315547Z" level=info msg="connecting to shim fa8853d5bf086d515c74697ca9d6371d4685850248a4b00635fb753f210a250d" address="unix:///run/containerd/s/feb34f318732377d79deb9b53ebe15cd43aad412c32ce10cd3e2605819407ed5" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:09:43.003587 systemd[1]: Started cri-containerd-e87299fab9581405b681e7e1faf0f5e1166122e877e1c643c063974da9ba3dca.scope - libcontainer container e87299fab9581405b681e7e1faf0f5e1166122e877e1c643c063974da9ba3dca. Mar 10 02:09:43.011113 systemd[1]: Started cri-containerd-6c42fa2a80c4877a633f32f433123fe8ff3b7c3717c2dd8db2863e0b5f35e24b.scope - libcontainer container 6c42fa2a80c4877a633f32f433123fe8ff3b7c3717c2dd8db2863e0b5f35e24b. Mar 10 02:09:43.020939 systemd[1]: Started cri-containerd-fa8853d5bf086d515c74697ca9d6371d4685850248a4b00635fb753f210a250d.scope - libcontainer container fa8853d5bf086d515c74697ca9d6371d4685850248a4b00635fb753f210a250d. 
Mar 10 02:09:43.117580 kubelet[2375]: E0310 02:09:43.117401 2375 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="1.6s" Mar 10 02:09:43.119111 containerd[1558]: time="2026-03-10T02:09:43.119056837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e87299fab9581405b681e7e1faf0f5e1166122e877e1c643c063974da9ba3dca\"" Mar 10 02:09:43.124652 kubelet[2375]: E0310 02:09:43.124588 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:09:43.134506 containerd[1558]: time="2026-03-10T02:09:43.134405791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1b5e52719ea0892ff3d5bd4e3fcecbf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c42fa2a80c4877a633f32f433123fe8ff3b7c3717c2dd8db2863e0b5f35e24b\"" Mar 10 02:09:43.134801 containerd[1558]: time="2026-03-10T02:09:43.134735564Z" level=info msg="CreateContainer within sandbox \"e87299fab9581405b681e7e1faf0f5e1166122e877e1c643c063974da9ba3dca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 10 02:09:43.135925 containerd[1558]: time="2026-03-10T02:09:43.135806754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa8853d5bf086d515c74697ca9d6371d4685850248a4b00635fb753f210a250d\"" Mar 10 02:09:43.137287 kubelet[2375]: E0310 02:09:43.136737 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:09:43.139830 kubelet[2375]: E0310 02:09:43.138153 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:09:43.145871 containerd[1558]: time="2026-03-10T02:09:43.144106156Z" level=info msg="CreateContainer within sandbox \"6c42fa2a80c4877a633f32f433123fe8ff3b7c3717c2dd8db2863e0b5f35e24b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 10 02:09:43.149471 containerd[1558]: time="2026-03-10T02:09:43.149427998Z" level=info msg="CreateContainer within sandbox \"fa8853d5bf086d515c74697ca9d6371d4685850248a4b00635fb753f210a250d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 10 02:09:43.173083 containerd[1558]: time="2026-03-10T02:09:43.172611508Z" level=info msg="Container 3114f69500c09404f6b576e6a2919282397cc56b745ae3f2798dddc93c74ae01: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:09:43.184420 containerd[1558]: time="2026-03-10T02:09:43.184343055Z" level=info msg="Container 633f881917f5d5f8d10c5ab839279317fb971fc944fda95ded688f7259390e48: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:09:43.197441 containerd[1558]: time="2026-03-10T02:09:43.197209560Z" level=info msg="CreateContainer within sandbox \"e87299fab9581405b681e7e1faf0f5e1166122e877e1c643c063974da9ba3dca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3114f69500c09404f6b576e6a2919282397cc56b745ae3f2798dddc93c74ae01\"" Mar 10 02:09:43.201104 containerd[1558]: time="2026-03-10T02:09:43.201068580Z" level=info msg="StartContainer for \"3114f69500c09404f6b576e6a2919282397cc56b745ae3f2798dddc93c74ae01\"" Mar 10 02:09:43.201516 containerd[1558]: time="2026-03-10T02:09:43.201417168Z" level=info msg="Container d0db48e77c4a04d3bc0410c8f5b8e77fe5c2f32b1d6b20acd6806d48031d90ca: CDI devices from CRI Config.CDIDevices: []" Mar 10 
02:09:43.202892 containerd[1558]: time="2026-03-10T02:09:43.202628432Z" level=info msg="connecting to shim 3114f69500c09404f6b576e6a2919282397cc56b745ae3f2798dddc93c74ae01" address="unix:///run/containerd/s/6d1677006f3f2524658983435f2ed3a5d4dbd7fba1187a88f0e39cb57622d657" protocol=ttrpc version=3 Mar 10 02:09:43.212312 containerd[1558]: time="2026-03-10T02:09:43.212111002Z" level=info msg="CreateContainer within sandbox \"6c42fa2a80c4877a633f32f433123fe8ff3b7c3717c2dd8db2863e0b5f35e24b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"633f881917f5d5f8d10c5ab839279317fb971fc944fda95ded688f7259390e48\"" Mar 10 02:09:43.213303 containerd[1558]: time="2026-03-10T02:09:43.213252556Z" level=info msg="StartContainer for \"633f881917f5d5f8d10c5ab839279317fb971fc944fda95ded688f7259390e48\"" Mar 10 02:09:43.215656 containerd[1558]: time="2026-03-10T02:09:43.215589117Z" level=info msg="connecting to shim 633f881917f5d5f8d10c5ab839279317fb971fc944fda95ded688f7259390e48" address="unix:///run/containerd/s/ef21e3a608b7f44150940d2e501c844bb3cf47b35808f53b51de4d975680cc19" protocol=ttrpc version=3 Mar 10 02:09:43.230480 containerd[1558]: time="2026-03-10T02:09:43.230370111Z" level=info msg="CreateContainer within sandbox \"fa8853d5bf086d515c74697ca9d6371d4685850248a4b00635fb753f210a250d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d0db48e77c4a04d3bc0410c8f5b8e77fe5c2f32b1d6b20acd6806d48031d90ca\"" Mar 10 02:09:43.232480 containerd[1558]: time="2026-03-10T02:09:43.232444526Z" level=info msg="StartContainer for \"d0db48e77c4a04d3bc0410c8f5b8e77fe5c2f32b1d6b20acd6806d48031d90ca\"" Mar 10 02:09:43.235026 containerd[1558]: time="2026-03-10T02:09:43.234636353Z" level=info msg="connecting to shim d0db48e77c4a04d3bc0410c8f5b8e77fe5c2f32b1d6b20acd6806d48031d90ca" address="unix:///run/containerd/s/feb34f318732377d79deb9b53ebe15cd43aad412c32ce10cd3e2605819407ed5" protocol=ttrpc version=3 Mar 10 02:09:43.236252 systemd[1]: 
Started cri-containerd-3114f69500c09404f6b576e6a2919282397cc56b745ae3f2798dddc93c74ae01.scope - libcontainer container 3114f69500c09404f6b576e6a2919282397cc56b745ae3f2798dddc93c74ae01. Mar 10 02:09:43.265747 systemd[1]: Started cri-containerd-633f881917f5d5f8d10c5ab839279317fb971fc944fda95ded688f7259390e48.scope - libcontainer container 633f881917f5d5f8d10c5ab839279317fb971fc944fda95ded688f7259390e48. Mar 10 02:09:43.283754 systemd[1]: Started cri-containerd-d0db48e77c4a04d3bc0410c8f5b8e77fe5c2f32b1d6b20acd6806d48031d90ca.scope - libcontainer container d0db48e77c4a04d3bc0410c8f5b8e77fe5c2f32b1d6b20acd6806d48031d90ca. Mar 10 02:09:43.346534 kubelet[2375]: I0310 02:09:43.346380 2375 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 10 02:09:43.351104 kubelet[2375]: E0310 02:09:43.351044 2375 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Mar 10 02:09:43.384893 containerd[1558]: time="2026-03-10T02:09:43.383357461Z" level=info msg="StartContainer for \"3114f69500c09404f6b576e6a2919282397cc56b745ae3f2798dddc93c74ae01\" returns successfully" Mar 10 02:09:43.407539 containerd[1558]: time="2026-03-10T02:09:43.405729126Z" level=info msg="StartContainer for \"633f881917f5d5f8d10c5ab839279317fb971fc944fda95ded688f7259390e48\" returns successfully" Mar 10 02:09:43.435268 containerd[1558]: time="2026-03-10T02:09:43.435181876Z" level=info msg="StartContainer for \"d0db48e77c4a04d3bc0410c8f5b8e77fe5c2f32b1d6b20acd6806d48031d90ca\" returns successfully" Mar 10 02:09:43.638286 update_engine[1543]: I20260310 02:09:43.638037 1543 update_attempter.cc:509] Updating boot flags... 
Mar 10 02:09:43.800555 kubelet[2375]: E0310 02:09:43.800414 2375 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 02:09:43.801327 kubelet[2375]: E0310 02:09:43.800561 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:09:43.818922 kubelet[2375]: E0310 02:09:43.817818 2375 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 02:09:43.825152 kubelet[2375]: E0310 02:09:43.825110 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:09:43.866262 kubelet[2375]: E0310 02:09:43.866213 2375 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 02:09:43.866505 kubelet[2375]: E0310 02:09:43.866421 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:09:44.860849 kubelet[2375]: E0310 02:09:44.860399 2375 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 02:09:44.861523 kubelet[2375]: E0310 02:09:44.861503 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:09:44.861721 kubelet[2375]: E0310 02:09:44.860772 2375 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not 
found" node="localhost" Mar 10 02:09:44.862021 kubelet[2375]: E0310 02:09:44.861903 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:09:44.954049 kubelet[2375]: I0310 02:09:44.953918 2375 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 10 02:09:45.231907 kubelet[2375]: E0310 02:09:45.231749 2375 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 10 02:09:45.384074 kubelet[2375]: I0310 02:09:45.384031 2375 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 10 02:09:45.414159 kubelet[2375]: I0310 02:09:45.414082 2375 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 02:09:45.426793 kubelet[2375]: E0310 02:09:45.426195 2375 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 10 02:09:45.426793 kubelet[2375]: I0310 02:09:45.426258 2375 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 02:09:45.432064 kubelet[2375]: E0310 02:09:45.431535 2375 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 10 02:09:45.432064 kubelet[2375]: I0310 02:09:45.431570 2375 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 10 02:09:45.435444 kubelet[2375]: E0310 02:09:45.435180 2375 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 10 02:09:45.695844 kubelet[2375]: I0310 02:09:45.693866 2375 apiserver.go:52] "Watching apiserver" Mar 10 02:09:45.714455 kubelet[2375]: I0310 02:09:45.713184 2375 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 10 02:09:46.551704 kubelet[2375]: I0310 02:09:46.551376 2375 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 02:09:46.564773 kubelet[2375]: E0310 02:09:46.564641 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:09:46.871832 kubelet[2375]: E0310 02:09:46.871734 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:09:47.402030 kubelet[2375]: I0310 02:09:47.399573 2375 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 10 02:09:47.435928 kubelet[2375]: E0310 02:09:47.435798 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:09:47.875758 kubelet[2375]: E0310 02:09:47.875629 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:09:48.668059 systemd[1]: Reload requested from client PID 2678 ('systemctl') (unit session-7.scope)... Mar 10 02:09:48.668105 systemd[1]: Reloading... Mar 10 02:09:48.922169 zram_generator::config[2721]: No configuration found. 
Mar 10 02:09:49.327107 kubelet[2375]: I0310 02:09:49.326941 2375 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 02:09:49.368585 kubelet[2375]: E0310 02:09:49.366636 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:09:49.586918 systemd[1]: Reloading finished in 913 ms. Mar 10 02:09:49.692519 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 02:09:49.722752 systemd[1]: kubelet.service: Deactivated successfully. Mar 10 02:09:49.724067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 02:09:49.726606 systemd[1]: kubelet.service: Consumed 1.479s CPU time, 128M memory peak. Mar 10 02:09:49.735620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 02:09:50.178708 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 02:09:50.192557 (kubelet)[2766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 10 02:09:50.323700 kubelet[2766]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 10 02:09:50.339927 kubelet[2766]: I0310 02:09:50.339011 2766 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 10 02:09:50.339927 kubelet[2766]: I0310 02:09:50.339058 2766 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 10 02:09:50.339927 kubelet[2766]: I0310 02:09:50.339080 2766 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 10 02:09:50.339927 kubelet[2766]: I0310 02:09:50.339086 2766 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 10 02:09:50.339927 kubelet[2766]: I0310 02:09:50.339421 2766 server.go:951] "Client rotation is on, will bootstrap in background" Mar 10 02:09:50.345155 kubelet[2766]: I0310 02:09:50.345134 2766 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 10 02:09:50.357132 kubelet[2766]: I0310 02:09:50.357088 2766 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 10 02:09:50.393910 kubelet[2766]: I0310 02:09:50.393758 2766 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 10 02:09:50.416324 kubelet[2766]: I0310 02:09:50.416093 2766 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 10 02:09:50.418100 kubelet[2766]: I0310 02:09:50.416572 2766 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 10 02:09:50.418100 kubelet[2766]: I0310 02:09:50.417184 2766 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 10 02:09:50.418100 kubelet[2766]: I0310 02:09:50.417584 2766 topology_manager.go:143] "Creating topology manager with none policy" Mar 10 02:09:50.418100 
kubelet[2766]: I0310 02:09:50.417598 2766 container_manager_linux.go:308] "Creating device plugin manager" Mar 10 02:09:50.424230 kubelet[2766]: I0310 02:09:50.418205 2766 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 10 02:09:50.424230 kubelet[2766]: I0310 02:09:50.419377 2766 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 10 02:09:50.424230 kubelet[2766]: I0310 02:09:50.422241 2766 kubelet.go:482] "Attempting to sync node with API server" Mar 10 02:09:50.424230 kubelet[2766]: I0310 02:09:50.422264 2766 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 10 02:09:50.424230 kubelet[2766]: I0310 02:09:50.422300 2766 kubelet.go:394] "Adding apiserver pod source" Mar 10 02:09:50.424230 kubelet[2766]: I0310 02:09:50.422317 2766 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 10 02:09:50.439492 kubelet[2766]: I0310 02:09:50.439217 2766 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 10 02:09:50.447321 kubelet[2766]: I0310 02:09:50.445525 2766 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 10 02:09:50.447321 kubelet[2766]: I0310 02:09:50.445567 2766 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 10 02:09:50.465590 kubelet[2766]: I0310 02:09:50.465524 2766 server.go:1257] "Started kubelet" Mar 10 02:09:50.485597 kubelet[2766]: I0310 02:09:50.484785 2766 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 10 02:09:50.485597 kubelet[2766]: I0310 02:09:50.484908 2766 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 10 02:09:50.487120 kubelet[2766]: I0310 02:09:50.484709 2766 
server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 10 02:09:50.495366 kubelet[2766]: I0310 02:09:50.495281 2766 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 10 02:09:50.498619 kubelet[2766]: I0310 02:09:50.498600 2766 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 10 02:09:50.503932 kubelet[2766]: I0310 02:09:50.499629 2766 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 10 02:09:50.503932 kubelet[2766]: I0310 02:09:50.502229 2766 server.go:317] "Adding debug handlers to kubelet server" Mar 10 02:09:50.504620 kubelet[2766]: I0310 02:09:50.504509 2766 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 10 02:09:50.504718 kubelet[2766]: E0310 02:09:50.504625 2766 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 02:09:50.505259 kubelet[2766]: I0310 02:09:50.505167 2766 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 10 02:09:50.510040 kubelet[2766]: I0310 02:09:50.508922 2766 reconciler.go:29] "Reconciler: start to sync state" Mar 10 02:09:50.513025 kubelet[2766]: I0310 02:09:50.512642 2766 factory.go:223] Registration of the systemd container factory successfully Mar 10 02:09:50.513097 kubelet[2766]: I0310 02:09:50.513079 2766 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 10 02:09:50.542800 kubelet[2766]: I0310 02:09:50.539479 2766 factory.go:223] Registration of the containerd container factory successfully Mar 10 02:09:50.542800 kubelet[2766]: E0310 02:09:50.540444 2766 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 10 02:09:50.582540 kubelet[2766]: I0310 02:09:50.581213 2766 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 10 02:09:50.584748 kubelet[2766]: I0310 02:09:50.584157 2766 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 10 02:09:50.584748 kubelet[2766]: I0310 02:09:50.584197 2766 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 10 02:09:50.584748 kubelet[2766]: I0310 02:09:50.584232 2766 kubelet.go:2501] "Starting kubelet main sync loop" Mar 10 02:09:50.584748 kubelet[2766]: E0310 02:09:50.584305 2766 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 10 02:09:50.685794 kubelet[2766]: E0310 02:09:50.685600 2766 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 10 02:09:50.695295 kubelet[2766]: I0310 02:09:50.693085 2766 cpu_manager.go:225] "Starting" policy="none" Mar 10 02:09:50.695295 kubelet[2766]: I0310 02:09:50.693136 2766 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 10 02:09:50.695295 kubelet[2766]: I0310 02:09:50.693246 2766 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 10 02:09:50.697276 kubelet[2766]: I0310 02:09:50.696926 2766 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 10 02:09:50.697276 kubelet[2766]: I0310 02:09:50.697023 2766 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 10 02:09:50.697276 kubelet[2766]: I0310 02:09:50.697059 2766 policy_none.go:50] "Start" Mar 10 02:09:50.697276 kubelet[2766]: I0310 02:09:50.697176 2766 memory_manager.go:187] "Starting memorymanager" 
policy="None" Mar 10 02:09:50.697276 kubelet[2766]: I0310 02:09:50.697280 2766 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 10 02:09:50.699218 kubelet[2766]: I0310 02:09:50.697539 2766 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 10 02:09:50.699218 kubelet[2766]: I0310 02:09:50.697549 2766 policy_none.go:44] "Start" Mar 10 02:09:50.723398 kubelet[2766]: E0310 02:09:50.723102 2766 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 10 02:09:50.724808 kubelet[2766]: I0310 02:09:50.723605 2766 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 10 02:09:50.724808 kubelet[2766]: I0310 02:09:50.723622 2766 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 10 02:09:50.724808 kubelet[2766]: I0310 02:09:50.724298 2766 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 10 02:09:50.731265 kubelet[2766]: E0310 02:09:50.729500 2766 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime"
Mar 10 02:09:50.867238 kubelet[2766]: I0310 02:09:50.867050 2766 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 10 02:09:50.889899 kubelet[2766]: I0310 02:09:50.888439 2766 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 10 02:09:50.889899 kubelet[2766]: I0310 02:09:50.888530 2766 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 10 02:09:50.895266 kubelet[2766]: I0310 02:09:50.894094 2766 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 10 02:09:50.913454 kubelet[2766]: I0310 02:09:50.913361 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 02:09:50.913454 kubelet[2766]: I0310 02:09:50.913414 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 02:09:50.915247 kubelet[2766]: I0310 02:09:50.913507 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b5e52719ea0892ff3d5bd4e3fcecbf6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1b5e52719ea0892ff3d5bd4e3fcecbf6\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 02:09:50.915247 kubelet[2766]: I0310 02:09:50.913533 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 02:09:50.915247 kubelet[2766]: I0310 02:09:50.913594 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 02:09:50.915247 kubelet[2766]: I0310 02:09:50.914757 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 10 02:09:50.915247 kubelet[2766]: I0310 02:09:50.914790 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b5e52719ea0892ff3d5bd4e3fcecbf6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b5e52719ea0892ff3d5bd4e3fcecbf6\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 02:09:50.915435 kubelet[2766]: I0310 02:09:50.914813 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b5e52719ea0892ff3d5bd4e3fcecbf6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b5e52719ea0892ff3d5bd4e3fcecbf6\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 02:09:50.915435 kubelet[2766]: I0310 02:09:50.914832 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 02:09:50.923116 kubelet[2766]: I0310 02:09:50.922098 2766 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Mar 10 02:09:50.923116 kubelet[2766]: E0310 02:09:50.922132 2766 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 10 02:09:50.923116 kubelet[2766]: I0310 02:09:50.922309 2766 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 10 02:09:50.930105 kubelet[2766]: E0310 02:09:50.929927 2766 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 10 02:09:50.930300 kubelet[2766]: E0310 02:09:50.930247 2766 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 10 02:09:51.225009 kubelet[2766]: E0310 02:09:51.223691 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:09:51.235036 kubelet[2766]: E0310 02:09:51.234756 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:09:51.235285 kubelet[2766]: E0310 02:09:51.235201 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:09:51.433621 kubelet[2766]: I0310 02:09:51.432866 2766 apiserver.go:52] "Watching apiserver"
Mar 10 02:09:51.517171 kubelet[2766]: I0310 02:09:51.506042 2766 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 10 02:09:51.635503 kubelet[2766]: E0310 02:09:51.632546 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:09:51.637784 kubelet[2766]: E0310 02:09:51.636743 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:09:51.641001 kubelet[2766]: E0310 02:09:51.640305 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:09:51.746516 kubelet[2766]: I0310 02:09:51.746005 2766 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.742929577 podStartE2EDuration="5.742929577s" podCreationTimestamp="2026-03-10 02:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:09:51.739055319 +0000 UTC m=+1.539174618" watchObservedRunningTime="2026-03-10 02:09:51.742929577 +0000 UTC m=+1.543048876"
Mar 10 02:09:51.746516 kubelet[2766]: I0310 02:09:51.746333 2766 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.746317319 podStartE2EDuration="2.746317319s" podCreationTimestamp="2026-03-10 02:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:09:51.685843736 +0000 UTC m=+1.485963026" watchObservedRunningTime="2026-03-10 02:09:51.746317319 +0000 UTC m=+1.546436607"
Mar 10 02:09:51.806835 kubelet[2766]: I0310 02:09:51.806750 2766 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.806724923 podStartE2EDuration="4.806724923s" podCreationTimestamp="2026-03-10 02:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:09:51.762679812 +0000 UTC m=+1.562799131" watchObservedRunningTime="2026-03-10 02:09:51.806724923 +0000 UTC m=+1.606844233"
Mar 10 02:09:52.649841 kubelet[2766]: E0310 02:09:52.649761 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:09:52.654400 kubelet[2766]: E0310 02:09:52.650068 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:09:53.655907 kubelet[2766]: E0310 02:09:53.655800 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:09:54.000936 kubelet[2766]: I0310 02:09:53.999530 2766 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 10 02:09:54.004474 containerd[1558]: time="2026-03-10T02:09:54.004396652Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 10 02:09:54.008773 kubelet[2766]: I0310 02:09:54.004837 2766 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 10 02:09:55.091068 systemd[1]: Created slice kubepods-besteffort-podc2d52107_0c50_4c25_b74a_649ae3645767.slice - libcontainer container kubepods-besteffort-podc2d52107_0c50_4c25_b74a_649ae3645767.slice.
Mar 10 02:09:55.207082 kubelet[2766]: I0310 02:09:55.206915 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c2d52107-0c50-4c25-b74a-649ae3645767-kube-proxy\") pod \"kube-proxy-9wc5b\" (UID: \"c2d52107-0c50-4c25-b74a-649ae3645767\") " pod="kube-system/kube-proxy-9wc5b"
Mar 10 02:09:55.207082 kubelet[2766]: I0310 02:09:55.208325 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2d52107-0c50-4c25-b74a-649ae3645767-xtables-lock\") pod \"kube-proxy-9wc5b\" (UID: \"c2d52107-0c50-4c25-b74a-649ae3645767\") " pod="kube-system/kube-proxy-9wc5b"
Mar 10 02:09:55.207082 kubelet[2766]: I0310 02:09:55.208443 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2d52107-0c50-4c25-b74a-649ae3645767-lib-modules\") pod \"kube-proxy-9wc5b\" (UID: \"c2d52107-0c50-4c25-b74a-649ae3645767\") " pod="kube-system/kube-proxy-9wc5b"
Mar 10 02:09:55.207082 kubelet[2766]: I0310 02:09:55.208467 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnngp\" (UniqueName: \"kubernetes.io/projected/c2d52107-0c50-4c25-b74a-649ae3645767-kube-api-access-dnngp\") pod \"kube-proxy-9wc5b\" (UID: \"c2d52107-0c50-4c25-b74a-649ae3645767\") " pod="kube-system/kube-proxy-9wc5b"
Mar 10 02:09:55.331535 systemd[1]: Created slice kubepods-besteffort-pod31a857b4_f5f3_4ad0_bc75_65f0f1a91788.slice - libcontainer container kubepods-besteffort-pod31a857b4_f5f3_4ad0_bc75_65f0f1a91788.slice.
Mar 10 02:09:55.411739 kubelet[2766]: I0310 02:09:55.410881 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/31a857b4-f5f3-4ad0-bc75-65f0f1a91788-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-jzjjg\" (UID: \"31a857b4-f5f3-4ad0-bc75-65f0f1a91788\") " pod="tigera-operator/tigera-operator-6cf4cccc57-jzjjg"
Mar 10 02:09:55.411739 kubelet[2766]: I0310 02:09:55.411449 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzz8h\" (UniqueName: \"kubernetes.io/projected/31a857b4-f5f3-4ad0-bc75-65f0f1a91788-kube-api-access-vzz8h\") pod \"tigera-operator-6cf4cccc57-jzjjg\" (UID: \"31a857b4-f5f3-4ad0-bc75-65f0f1a91788\") " pod="tigera-operator/tigera-operator-6cf4cccc57-jzjjg"
Mar 10 02:09:55.422131 kubelet[2766]: E0310 02:09:55.421593 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:09:55.424213 containerd[1558]: time="2026-03-10T02:09:55.423431308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9wc5b,Uid:c2d52107-0c50-4c25-b74a-649ae3645767,Namespace:kube-system,Attempt:0,}"
Mar 10 02:09:55.492759 containerd[1558]: time="2026-03-10T02:09:55.492240330Z" level=info msg="connecting to shim c8c5d7e4e56ec9f9f438baee94ba951ac0e208991c7c3a2b72b21a1361b9e633" address="unix:///run/containerd/s/f6e146d6fb369189830ee027de09e646b6d36a477adcce0e16cc5b4608f958ee" namespace=k8s.io protocol=ttrpc version=3
Mar 10 02:09:55.566339 systemd[1]: Started cri-containerd-c8c5d7e4e56ec9f9f438baee94ba951ac0e208991c7c3a2b72b21a1361b9e633.scope - libcontainer container c8c5d7e4e56ec9f9f438baee94ba951ac0e208991c7c3a2b72b21a1361b9e633.
Mar 10 02:09:55.636429 containerd[1558]: time="2026-03-10T02:09:55.636314245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9wc5b,Uid:c2d52107-0c50-4c25-b74a-649ae3645767,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8c5d7e4e56ec9f9f438baee94ba951ac0e208991c7c3a2b72b21a1361b9e633\""
Mar 10 02:09:55.639925 kubelet[2766]: E0310 02:09:55.639867 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:09:55.658741 containerd[1558]: time="2026-03-10T02:09:55.658562677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-jzjjg,Uid:31a857b4-f5f3-4ad0-bc75-65f0f1a91788,Namespace:tigera-operator,Attempt:0,}"
Mar 10 02:09:55.660325 containerd[1558]: time="2026-03-10T02:09:55.660290527Z" level=info msg="CreateContainer within sandbox \"c8c5d7e4e56ec9f9f438baee94ba951ac0e208991c7c3a2b72b21a1361b9e633\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 10 02:09:55.716699 containerd[1558]: time="2026-03-10T02:09:55.716365851Z" level=info msg="Container e7ed920d07efc2e1bcf3dfc6d76de331e80b787c20acc70ecdcef3d026f6bcaa: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:09:55.742084 containerd[1558]: time="2026-03-10T02:09:55.741887812Z" level=info msg="CreateContainer within sandbox \"c8c5d7e4e56ec9f9f438baee94ba951ac0e208991c7c3a2b72b21a1361b9e633\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e7ed920d07efc2e1bcf3dfc6d76de331e80b787c20acc70ecdcef3d026f6bcaa\""
Mar 10 02:09:55.744182 containerd[1558]: time="2026-03-10T02:09:55.744090576Z" level=info msg="StartContainer for \"e7ed920d07efc2e1bcf3dfc6d76de331e80b787c20acc70ecdcef3d026f6bcaa\""
Mar 10 02:09:55.747478 containerd[1558]: time="2026-03-10T02:09:55.747415334Z" level=info msg="connecting to shim e7ed920d07efc2e1bcf3dfc6d76de331e80b787c20acc70ecdcef3d026f6bcaa" address="unix:///run/containerd/s/f6e146d6fb369189830ee027de09e646b6d36a477adcce0e16cc5b4608f958ee" protocol=ttrpc version=3
Mar 10 02:09:55.753404 containerd[1558]: time="2026-03-10T02:09:55.752628481Z" level=info msg="connecting to shim 6b387842c744e102ff3b387efbc39664595268d007fda84317f2e6e6ca0e1418" address="unix:///run/containerd/s/279fdac2687b15943acd7e95dfe35ff709f1af32ee1e83de456da1b56cd43369" namespace=k8s.io protocol=ttrpc version=3
Mar 10 02:09:55.806128 systemd[1]: Started cri-containerd-e7ed920d07efc2e1bcf3dfc6d76de331e80b787c20acc70ecdcef3d026f6bcaa.scope - libcontainer container e7ed920d07efc2e1bcf3dfc6d76de331e80b787c20acc70ecdcef3d026f6bcaa.
Mar 10 02:09:55.833699 systemd[1]: Started cri-containerd-6b387842c744e102ff3b387efbc39664595268d007fda84317f2e6e6ca0e1418.scope - libcontainer container 6b387842c744e102ff3b387efbc39664595268d007fda84317f2e6e6ca0e1418.
Mar 10 02:09:55.851215 kubelet[2766]: E0310 02:09:55.851071 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:09:55.959509 containerd[1558]: time="2026-03-10T02:09:55.959367961Z" level=info msg="StartContainer for \"e7ed920d07efc2e1bcf3dfc6d76de331e80b787c20acc70ecdcef3d026f6bcaa\" returns successfully"
Mar 10 02:09:55.978590 containerd[1558]: time="2026-03-10T02:09:55.977697186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-jzjjg,Uid:31a857b4-f5f3-4ad0-bc75-65f0f1a91788,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6b387842c744e102ff3b387efbc39664595268d007fda84317f2e6e6ca0e1418\""
Mar 10 02:09:55.986078 containerd[1558]: time="2026-03-10T02:09:55.986042639Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 10 02:09:56.635024 kubelet[2766]: E0310 02:09:56.634844 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:09:56.684270 kubelet[2766]: E0310 02:09:56.684210 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:09:56.739317 kubelet[2766]: I0310 02:09:56.737777 2766 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-9wc5b" podStartSLOduration=2.737755805 podStartE2EDuration="2.737755805s" podCreationTimestamp="2026-03-10 02:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:09:56.735517082 +0000 UTC m=+6.535636381" watchObservedRunningTime="2026-03-10 02:09:56.737755805 +0000 UTC m=+6.537875093"
Mar 10 02:09:56.777833 kubelet[2766]: E0310 02:09:56.777311 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:09:56.825707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521900125.mount: Deactivated successfully.
Mar 10 02:09:57.714016 kubelet[2766]: E0310 02:09:57.712732 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:10:00.194306 containerd[1558]: time="2026-03-10T02:10:00.194124747Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:10:00.197347 containerd[1558]: time="2026-03-10T02:10:00.197016591Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 10 02:10:00.201220 containerd[1558]: time="2026-03-10T02:10:00.200618795Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:10:00.204782 containerd[1558]: time="2026-03-10T02:10:00.204694119Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:10:00.205928 containerd[1558]: time="2026-03-10T02:10:00.205406390Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 4.219129564s"
Mar 10 02:10:00.205928 containerd[1558]: time="2026-03-10T02:10:00.205433530Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 10 02:10:00.217822 containerd[1558]: time="2026-03-10T02:10:00.217712240Z" level=info msg="CreateContainer within sandbox \"6b387842c744e102ff3b387efbc39664595268d007fda84317f2e6e6ca0e1418\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 10 02:10:00.240800 containerd[1558]: time="2026-03-10T02:10:00.240024937Z" level=info msg="Container 0f8c3971d6ac6c1527590104a5447e332c01fc8ffe94ebf22bc0500ba8d711da: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:10:00.260711 containerd[1558]: time="2026-03-10T02:10:00.260435665Z" level=info msg="CreateContainer within sandbox \"6b387842c744e102ff3b387efbc39664595268d007fda84317f2e6e6ca0e1418\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0f8c3971d6ac6c1527590104a5447e332c01fc8ffe94ebf22bc0500ba8d711da\""
Mar 10 02:10:00.263246 containerd[1558]: time="2026-03-10T02:10:00.262526181Z" level=info msg="StartContainer for \"0f8c3971d6ac6c1527590104a5447e332c01fc8ffe94ebf22bc0500ba8d711da\""
Mar 10 02:10:00.264560 containerd[1558]: time="2026-03-10T02:10:00.264508137Z" level=info msg="connecting to shim 0f8c3971d6ac6c1527590104a5447e332c01fc8ffe94ebf22bc0500ba8d711da" address="unix:///run/containerd/s/279fdac2687b15943acd7e95dfe35ff709f1af32ee1e83de456da1b56cd43369" protocol=ttrpc version=3
Mar 10 02:10:00.304809 systemd[1]: Started cri-containerd-0f8c3971d6ac6c1527590104a5447e332c01fc8ffe94ebf22bc0500ba8d711da.scope - libcontainer container 0f8c3971d6ac6c1527590104a5447e332c01fc8ffe94ebf22bc0500ba8d711da.
Mar 10 02:10:00.399506 containerd[1558]: time="2026-03-10T02:10:00.399363431Z" level=info msg="StartContainer for \"0f8c3971d6ac6c1527590104a5447e332c01fc8ffe94ebf22bc0500ba8d711da\" returns successfully"
Mar 10 02:10:00.770030 kubelet[2766]: I0310 02:10:00.769895 2766 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-jzjjg" podStartSLOduration=1.547922912 podStartE2EDuration="5.7698813s" podCreationTimestamp="2026-03-10 02:09:55 +0000 UTC" firstStartedPulling="2026-03-10 02:09:55.984735342 +0000 UTC m=+5.784854631" lastFinishedPulling="2026-03-10 02:10:00.20669373 +0000 UTC m=+10.006813019" observedRunningTime="2026-03-10 02:10:00.769568526 +0000 UTC m=+10.569687815" watchObservedRunningTime="2026-03-10 02:10:00.7698813 +0000 UTC m=+10.570000619"
Mar 10 02:10:05.879767 kubelet[2766]: E0310 02:10:05.876856 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:10:06.653935 kubelet[2766]: E0310 02:10:06.646743 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:10:06.787854 kubelet[2766]: E0310 02:10:06.787783 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:10:07.241319 sudo[1767]: pam_unix(sudo:session): session closed for user root
Mar 10 02:10:07.248008 sshd[1766]: Connection closed by 10.0.0.1 port 55982
Mar 10 02:10:07.247034 sshd-session[1763]: pam_unix(sshd:session): session closed for user core
Mar 10 02:10:07.256913 systemd-logind[1539]: Session 7 logged out. Waiting for processes to exit.
Mar 10 02:10:07.258194 systemd[1]: sshd@6-10.0.0.112:22-10.0.0.1:55982.service: Deactivated successfully.
Mar 10 02:10:07.270355 systemd[1]: session-7.scope: Deactivated successfully.
Mar 10 02:10:07.274517 systemd[1]: session-7.scope: Consumed 9.951s CPU time, 230.4M memory peak.
Mar 10 02:10:07.279487 systemd-logind[1539]: Removed session 7.
Mar 10 02:10:11.006855 systemd[1]: Created slice kubepods-besteffort-podde850dc8_2318_43b2_8a69_3f42325d7de0.slice - libcontainer container kubepods-besteffort-podde850dc8_2318_43b2_8a69_3f42325d7de0.slice.
Mar 10 02:10:11.103922 kubelet[2766]: I0310 02:10:11.103873 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/de850dc8-2318-43b2-8a69-3f42325d7de0-typha-certs\") pod \"calico-typha-7cbd986694-996ct\" (UID: \"de850dc8-2318-43b2-8a69-3f42325d7de0\") " pod="calico-system/calico-typha-7cbd986694-996ct"
Mar 10 02:10:11.105171 kubelet[2766]: I0310 02:10:11.104885 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de850dc8-2318-43b2-8a69-3f42325d7de0-tigera-ca-bundle\") pod \"calico-typha-7cbd986694-996ct\" (UID: \"de850dc8-2318-43b2-8a69-3f42325d7de0\") " pod="calico-system/calico-typha-7cbd986694-996ct"
Mar 10 02:10:11.105171 kubelet[2766]: I0310 02:10:11.104935 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xvdk\" (UniqueName: \"kubernetes.io/projected/de850dc8-2318-43b2-8a69-3f42325d7de0-kube-api-access-7xvdk\") pod \"calico-typha-7cbd986694-996ct\" (UID: \"de850dc8-2318-43b2-8a69-3f42325d7de0\") " pod="calico-system/calico-typha-7cbd986694-996ct"
Mar 10 02:10:11.193615 systemd[1]: Created slice kubepods-besteffort-pod7b8dba56_6223_4985_afda_1adff862c115.slice - libcontainer container kubepods-besteffort-pod7b8dba56_6223_4985_afda_1adff862c115.slice.
Mar 10 02:10:11.306539 kubelet[2766]: E0310 02:10:11.306339 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf"
Mar 10 02:10:11.306539 kubelet[2766]: I0310 02:10:11.306375 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/7b8dba56-6223-4985-afda-1adff862c115-sys-fs\") pod \"calico-node-7j782\" (UID: \"7b8dba56-6223-4985-afda-1adff862c115\") " pod="calico-system/calico-node-7j782"
Mar 10 02:10:11.306539 kubelet[2766]: I0310 02:10:11.306442 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7b8dba56-6223-4985-afda-1adff862c115-cni-log-dir\") pod \"calico-node-7j782\" (UID: \"7b8dba56-6223-4985-afda-1adff862c115\") " pod="calico-system/calico-node-7j782"
Mar 10 02:10:11.306539 kubelet[2766]: I0310 02:10:11.306465 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b8dba56-6223-4985-afda-1adff862c115-lib-modules\") pod \"calico-node-7j782\" (UID: \"7b8dba56-6223-4985-afda-1adff862c115\") " pod="calico-system/calico-node-7j782"
Mar 10 02:10:11.306895 kubelet[2766]: I0310 02:10:11.306549 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/7b8dba56-6223-4985-afda-1adff862c115-bpffs\") pod \"calico-node-7j782\" (UID: \"7b8dba56-6223-4985-afda-1adff862c115\") " pod="calico-system/calico-node-7j782"
Mar 10 02:10:11.306895 kubelet[2766]: I0310 02:10:11.306572 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7b8dba56-6223-4985-afda-1adff862c115-cni-bin-dir\") pod \"calico-node-7j782\" (UID: \"7b8dba56-6223-4985-afda-1adff862c115\") " pod="calico-system/calico-node-7j782"
Mar 10 02:10:11.306895 kubelet[2766]: I0310 02:10:11.306590 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7b8dba56-6223-4985-afda-1adff862c115-cni-net-dir\") pod \"calico-node-7j782\" (UID: \"7b8dba56-6223-4985-afda-1adff862c115\") " pod="calico-system/calico-node-7j782"
Mar 10 02:10:11.306895 kubelet[2766]: I0310 02:10:11.306608 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b8dba56-6223-4985-afda-1adff862c115-tigera-ca-bundle\") pod \"calico-node-7j782\" (UID: \"7b8dba56-6223-4985-afda-1adff862c115\") " pod="calico-system/calico-node-7j782"
Mar 10 02:10:11.306895 kubelet[2766]: I0310 02:10:11.306864 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk6tb\" (UniqueName: \"kubernetes.io/projected/7b8dba56-6223-4985-afda-1adff862c115-kube-api-access-xk6tb\") pod \"calico-node-7j782\" (UID: \"7b8dba56-6223-4985-afda-1adff862c115\") " pod="calico-system/calico-node-7j782"
Mar 10 02:10:11.307112 kubelet[2766]: I0310 02:10:11.306923 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7b8dba56-6223-4985-afda-1adff862c115-flexvol-driver-host\") pod \"calico-node-7j782\" (UID: \"7b8dba56-6223-4985-afda-1adff862c115\") " pod="calico-system/calico-node-7j782"
Mar 10 02:10:11.307112 kubelet[2766]: I0310 02:10:11.307057 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7b8dba56-6223-4985-afda-1adff862c115-var-lib-calico\") pod \"calico-node-7j782\" (UID: \"7b8dba56-6223-4985-afda-1adff862c115\") " pod="calico-system/calico-node-7j782"
Mar 10 02:10:11.307185 kubelet[2766]: I0310 02:10:11.307110 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7b8dba56-6223-4985-afda-1adff862c115-policysync\") pod \"calico-node-7j782\" (UID: \"7b8dba56-6223-4985-afda-1adff862c115\") " pod="calico-system/calico-node-7j782"
Mar 10 02:10:11.307185 kubelet[2766]: I0310 02:10:11.307149 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7b8dba56-6223-4985-afda-1adff862c115-node-certs\") pod \"calico-node-7j782\" (UID: \"7b8dba56-6223-4985-afda-1adff862c115\") " pod="calico-system/calico-node-7j782"
Mar 10 02:10:11.307272 kubelet[2766]: I0310 02:10:11.307223 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7b8dba56-6223-4985-afda-1adff862c115-var-run-calico\") pod \"calico-node-7j782\" (UID: \"7b8dba56-6223-4985-afda-1adff862c115\") " pod="calico-system/calico-node-7j782"
Mar 10 02:10:11.307272 kubelet[2766]: I0310 02:10:11.307250 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b8dba56-6223-4985-afda-1adff862c115-xtables-lock\") pod \"calico-node-7j782\" (UID: \"7b8dba56-6223-4985-afda-1adff862c115\") " pod="calico-system/calico-node-7j782"
Mar 10 02:10:11.307346 kubelet[2766]: I0310 02:10:11.307273 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/7b8dba56-6223-4985-afda-1adff862c115-nodeproc\") pod \"calico-node-7j782\" (UID: \"7b8dba56-6223-4985-afda-1adff862c115\") " pod="calico-system/calico-node-7j782"
Mar 10 02:10:11.338424 kubelet[2766]: E0310 02:10:11.338310 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:10:11.340828 containerd[1558]: time="2026-03-10T02:10:11.339777562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cbd986694-996ct,Uid:de850dc8-2318-43b2-8a69-3f42325d7de0,Namespace:calico-system,Attempt:0,}"
Mar 10 02:10:11.408790 kubelet[2766]: I0310 02:10:11.408653 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a5c1c4e6-10bd-4317-8c91-c1420d34eabf-kubelet-dir\") pod \"csi-node-driver-cb4xk\" (UID: \"a5c1c4e6-10bd-4317-8c91-c1420d34eabf\") " pod="calico-system/csi-node-driver-cb4xk"
Mar 10 02:10:11.409939 kubelet[2766]: I0310 02:10:11.409824 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a5c1c4e6-10bd-4317-8c91-c1420d34eabf-registration-dir\") pod \"csi-node-driver-cb4xk\" (UID: \"a5c1c4e6-10bd-4317-8c91-c1420d34eabf\") " pod="calico-system/csi-node-driver-cb4xk"
Mar 10 02:10:11.410408 kubelet[2766]: I0310 02:10:11.410348 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a5c1c4e6-10bd-4317-8c91-c1420d34eabf-varrun\") pod \"csi-node-driver-cb4xk\" (UID: \"a5c1c4e6-10bd-4317-8c91-c1420d34eabf\") " pod="calico-system/csi-node-driver-cb4xk"
Mar 10 02:10:11.410655 kubelet[2766]: I0310 02:10:11.410547 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s597x\" (UniqueName: \"kubernetes.io/projected/a5c1c4e6-10bd-4317-8c91-c1420d34eabf-kube-api-access-s597x\") pod \"csi-node-driver-cb4xk\" (UID: \"a5c1c4e6-10bd-4317-8c91-c1420d34eabf\") " pod="calico-system/csi-node-driver-cb4xk"
Mar 10 02:10:11.410752 kubelet[2766]: I0310 02:10:11.410724 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a5c1c4e6-10bd-4317-8c91-c1420d34eabf-socket-dir\") pod \"csi-node-driver-cb4xk\" (UID: \"a5c1c4e6-10bd-4317-8c91-c1420d34eabf\") " pod="calico-system/csi-node-driver-cb4xk"
Mar 10 02:10:11.416024 kubelet[2766]: E0310 02:10:11.415620 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:10:11.416024 kubelet[2766]: W0310 02:10:11.415639 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:10:11.416024 kubelet[2766]: E0310 02:10:11.415715 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:10:11.426465 kubelet[2766]: E0310 02:10:11.426341 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:10:11.426541 kubelet[2766]: W0310 02:10:11.426483 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:10:11.426541 kubelet[2766]: E0310 02:10:11.426507 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:10:11.455728 kubelet[2766]: E0310 02:10:11.455108 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:10:11.455728 kubelet[2766]: W0310 02:10:11.455161 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:10:11.455728 kubelet[2766]: E0310 02:10:11.455184 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:10:11.484928 containerd[1558]: time="2026-03-10T02:10:11.483896080Z" level=info msg="connecting to shim 2d9d6202ddbf9dc7ae54e7c096d2a68852bcbce7719c9019b8742361219477c1" address="unix:///run/containerd/s/19c49e046799f801bd40e364dc61a2797a763173000e4bc9d2e8a336a23f0504" namespace=k8s.io protocol=ttrpc version=3
Mar 10 02:10:11.513512 containerd[1558]: time="2026-03-10T02:10:11.513356491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7j782,Uid:7b8dba56-6223-4985-afda-1adff862c115,Namespace:calico-system,Attempt:0,}"
Mar 10 02:10:11.514304 kubelet[2766]: E0310 02:10:11.514184 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:10:11.514400 kubelet[2766]: W0310 02:10:11.514324 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:10:11.514400 kubelet[2766]: E0310 02:10:11.514352 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:10:11.515215 kubelet[2766]: E0310 02:10:11.515088 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:10:11.515215 kubelet[2766]: W0310 02:10:11.515123 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:10:11.515215 kubelet[2766]: E0310 02:10:11.515141 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:10:11.517053 kubelet[2766]: E0310 02:10:11.516531 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:10:11.517053 kubelet[2766]: W0310 02:10:11.516884 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:10:11.517053 kubelet[2766]: E0310 02:10:11.516904 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:10:11.519455 kubelet[2766]: E0310 02:10:11.519097 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:10:11.519455 kubelet[2766]: W0310 02:10:11.519141 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:10:11.519455 kubelet[2766]: E0310 02:10:11.519158 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:10:11.520931 kubelet[2766]: E0310 02:10:11.520798 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:10:11.520931 kubelet[2766]: W0310 02:10:11.520840 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:10:11.520931 kubelet[2766]: E0310 02:10:11.520857 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 10 02:10:11.524164 kubelet[2766]: E0310 02:10:11.524099 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.524164 kubelet[2766]: W0310 02:10:11.524144 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.524164 kubelet[2766]: E0310 02:10:11.524160 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:10:11.525233 kubelet[2766]: E0310 02:10:11.525168 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.525233 kubelet[2766]: W0310 02:10:11.525212 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.525233 kubelet[2766]: E0310 02:10:11.525231 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:10:11.525808 kubelet[2766]: E0310 02:10:11.525762 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.525808 kubelet[2766]: W0310 02:10:11.525798 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.525887 kubelet[2766]: E0310 02:10:11.525813 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:10:11.532281 kubelet[2766]: E0310 02:10:11.532040 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.532281 kubelet[2766]: W0310 02:10:11.532061 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.532281 kubelet[2766]: E0310 02:10:11.532116 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:10:11.532770 kubelet[2766]: E0310 02:10:11.532753 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.532840 kubelet[2766]: W0310 02:10:11.532826 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.532914 kubelet[2766]: E0310 02:10:11.532900 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:10:11.535411 kubelet[2766]: E0310 02:10:11.535396 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.535482 kubelet[2766]: W0310 02:10:11.535471 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.535527 kubelet[2766]: E0310 02:10:11.535517 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:10:11.536871 kubelet[2766]: E0310 02:10:11.536853 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.537241 kubelet[2766]: W0310 02:10:11.537020 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.537241 kubelet[2766]: E0310 02:10:11.537037 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:10:11.538233 kubelet[2766]: E0310 02:10:11.538216 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.538307 kubelet[2766]: W0310 02:10:11.538294 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.538380 kubelet[2766]: E0310 02:10:11.538366 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:10:11.539447 kubelet[2766]: E0310 02:10:11.539348 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.540089 kubelet[2766]: W0310 02:10:11.540070 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.540169 kubelet[2766]: E0310 02:10:11.540155 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:10:11.540881 kubelet[2766]: E0310 02:10:11.540617 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.540881 kubelet[2766]: W0310 02:10:11.540714 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.540881 kubelet[2766]: E0310 02:10:11.540730 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:10:11.541190 kubelet[2766]: E0310 02:10:11.541174 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.541264 kubelet[2766]: W0310 02:10:11.541251 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.541438 kubelet[2766]: E0310 02:10:11.541420 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:10:11.542158 kubelet[2766]: E0310 02:10:11.541868 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.542158 kubelet[2766]: W0310 02:10:11.541882 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.542158 kubelet[2766]: E0310 02:10:11.541895 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:10:11.542620 kubelet[2766]: E0310 02:10:11.542603 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.542742 kubelet[2766]: W0310 02:10:11.542725 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.542823 kubelet[2766]: E0310 02:10:11.542809 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:10:11.544722 kubelet[2766]: E0310 02:10:11.543746 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.544722 kubelet[2766]: W0310 02:10:11.543762 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.544722 kubelet[2766]: E0310 02:10:11.543775 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:10:11.545823 kubelet[2766]: E0310 02:10:11.545807 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.545900 kubelet[2766]: W0310 02:10:11.545887 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.546073 kubelet[2766]: E0310 02:10:11.546057 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:10:11.548583 kubelet[2766]: E0310 02:10:11.548373 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.548583 kubelet[2766]: W0310 02:10:11.548390 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.548583 kubelet[2766]: E0310 02:10:11.548403 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:10:11.550793 kubelet[2766]: E0310 02:10:11.550774 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.551198 kubelet[2766]: W0310 02:10:11.551017 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.551198 kubelet[2766]: E0310 02:10:11.551037 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:10:11.552486 kubelet[2766]: E0310 02:10:11.552473 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.552620 kubelet[2766]: W0310 02:10:11.552608 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.552903 kubelet[2766]: E0310 02:10:11.552739 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:10:11.553476 kubelet[2766]: E0310 02:10:11.553460 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.553579 kubelet[2766]: W0310 02:10:11.553564 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.553745 kubelet[2766]: E0310 02:10:11.553728 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:10:11.554487 kubelet[2766]: E0310 02:10:11.554468 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.554604 kubelet[2766]: W0310 02:10:11.554554 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.554604 kubelet[2766]: E0310 02:10:11.554575 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:10:11.586532 containerd[1558]: time="2026-03-10T02:10:11.582351780Z" level=info msg="connecting to shim 885046d79570312e278c1343f06f29d91e050017f03daffbebcadd032c592fc6" address="unix:///run/containerd/s/2a731fa939a0baea813b68ea7971c7e7819c38b92b57ea1dcdb664d8ef821eae" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:10:11.603737 systemd[1]: Started cri-containerd-2d9d6202ddbf9dc7ae54e7c096d2a68852bcbce7719c9019b8742361219477c1.scope - libcontainer container 2d9d6202ddbf9dc7ae54e7c096d2a68852bcbce7719c9019b8742361219477c1. 
Mar 10 02:10:11.604883 kubelet[2766]: E0310 02:10:11.604513 2766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:10:11.604883 kubelet[2766]: W0310 02:10:11.604541 2766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:10:11.604883 kubelet[2766]: E0310 02:10:11.604565 2766 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:10:11.648756 systemd[1]: Started cri-containerd-885046d79570312e278c1343f06f29d91e050017f03daffbebcadd032c592fc6.scope - libcontainer container 885046d79570312e278c1343f06f29d91e050017f03daffbebcadd032c592fc6. Mar 10 02:10:11.770461 containerd[1558]: time="2026-03-10T02:10:11.770318882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7j782,Uid:7b8dba56-6223-4985-afda-1adff862c115,Namespace:calico-system,Attempt:0,} returns sandbox id \"885046d79570312e278c1343f06f29d91e050017f03daffbebcadd032c592fc6\"" Mar 10 02:10:11.776855 containerd[1558]: time="2026-03-10T02:10:11.776770540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cbd986694-996ct,Uid:de850dc8-2318-43b2-8a69-3f42325d7de0,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d9d6202ddbf9dc7ae54e7c096d2a68852bcbce7719c9019b8742361219477c1\"" Mar 10 02:10:11.778377 kubelet[2766]: E0310 02:10:11.778349 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:10:11.789608 containerd[1558]: time="2026-03-10T02:10:11.788875957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 10 02:10:12.590176 kubelet[2766]: E0310 
02:10:12.589616 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:12.810930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2969871658.mount: Deactivated successfully. Mar 10 02:10:13.052086 containerd[1558]: time="2026-03-10T02:10:13.051533115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:10:13.055316 containerd[1558]: time="2026-03-10T02:10:13.055210190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 10 02:10:13.057318 containerd[1558]: time="2026-03-10T02:10:13.057248745Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:10:13.064775 containerd[1558]: time="2026-03-10T02:10:13.064552425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:10:13.067183 containerd[1558]: time="2026-03-10T02:10:13.065567910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.276332993s" Mar 10 02:10:13.067183 containerd[1558]: 
time="2026-03-10T02:10:13.065630678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 10 02:10:13.070067 containerd[1558]: time="2026-03-10T02:10:13.069259481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 10 02:10:13.096005 containerd[1558]: time="2026-03-10T02:10:13.091842653Z" level=info msg="CreateContainer within sandbox \"885046d79570312e278c1343f06f29d91e050017f03daffbebcadd032c592fc6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 10 02:10:13.116448 containerd[1558]: time="2026-03-10T02:10:13.114737066Z" level=info msg="Container 671400d8187836b2179706e09c068379c68bb13c35dfe3be600f2ac21caeb196: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:10:13.138764 containerd[1558]: time="2026-03-10T02:10:13.137383450Z" level=info msg="CreateContainer within sandbox \"885046d79570312e278c1343f06f29d91e050017f03daffbebcadd032c592fc6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"671400d8187836b2179706e09c068379c68bb13c35dfe3be600f2ac21caeb196\"" Mar 10 02:10:13.141003 containerd[1558]: time="2026-03-10T02:10:13.140709285Z" level=info msg="StartContainer for \"671400d8187836b2179706e09c068379c68bb13c35dfe3be600f2ac21caeb196\"" Mar 10 02:10:13.143372 containerd[1558]: time="2026-03-10T02:10:13.143272509Z" level=info msg="connecting to shim 671400d8187836b2179706e09c068379c68bb13c35dfe3be600f2ac21caeb196" address="unix:///run/containerd/s/2a731fa939a0baea813b68ea7971c7e7819c38b92b57ea1dcdb664d8ef821eae" protocol=ttrpc version=3 Mar 10 02:10:13.201226 systemd[1]: Started cri-containerd-671400d8187836b2179706e09c068379c68bb13c35dfe3be600f2ac21caeb196.scope - libcontainer container 671400d8187836b2179706e09c068379c68bb13c35dfe3be600f2ac21caeb196. 
Mar 10 02:10:13.353227 containerd[1558]: time="2026-03-10T02:10:13.353188991Z" level=info msg="StartContainer for \"671400d8187836b2179706e09c068379c68bb13c35dfe3be600f2ac21caeb196\" returns successfully" Mar 10 02:10:13.374581 systemd[1]: cri-containerd-671400d8187836b2179706e09c068379c68bb13c35dfe3be600f2ac21caeb196.scope: Deactivated successfully. Mar 10 02:10:13.384428 containerd[1558]: time="2026-03-10T02:10:13.384089212Z" level=info msg="received container exit event container_id:\"671400d8187836b2179706e09c068379c68bb13c35dfe3be600f2ac21caeb196\" id:\"671400d8187836b2179706e09c068379c68bb13c35dfe3be600f2ac21caeb196\" pid:3341 exited_at:{seconds:1773108613 nanos:383531882}" Mar 10 02:10:13.436569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-671400d8187836b2179706e09c068379c68bb13c35dfe3be600f2ac21caeb196-rootfs.mount: Deactivated successfully. Mar 10 02:10:14.586722 kubelet[2766]: E0310 02:10:14.586467 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:16.592399 kubelet[2766]: E0310 02:10:16.591224 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:18.591873 kubelet[2766]: E0310 02:10:18.591441 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" 
podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:20.304215 containerd[1558]: time="2026-03-10T02:10:20.302772783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:10:20.314667 containerd[1558]: time="2026-03-10T02:10:20.314439568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 10 02:10:20.317725 containerd[1558]: time="2026-03-10T02:10:20.316316526Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:10:20.330545 containerd[1558]: time="2026-03-10T02:10:20.326394891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:10:20.331210 containerd[1558]: time="2026-03-10T02:10:20.328487085Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 7.259010409s" Mar 10 02:10:20.331210 containerd[1558]: time="2026-03-10T02:10:20.330847804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 10 02:10:20.336622 containerd[1558]: time="2026-03-10T02:10:20.336556026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 10 02:10:20.377854 containerd[1558]: time="2026-03-10T02:10:20.377326618Z" level=info msg="CreateContainer within sandbox 
\"2d9d6202ddbf9dc7ae54e7c096d2a68852bcbce7719c9019b8742361219477c1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 10 02:10:20.418831 containerd[1558]: time="2026-03-10T02:10:20.418129320Z" level=info msg="Container 2b9c6fc169f0314ffa046d4ebb449a9773605b30e5fd1d2cdd58478f706dd515: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:10:20.439813 containerd[1558]: time="2026-03-10T02:10:20.439276231Z" level=info msg="CreateContainer within sandbox \"2d9d6202ddbf9dc7ae54e7c096d2a68852bcbce7719c9019b8742361219477c1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2b9c6fc169f0314ffa046d4ebb449a9773605b30e5fd1d2cdd58478f706dd515\"" Mar 10 02:10:20.440277 containerd[1558]: time="2026-03-10T02:10:20.440247671Z" level=info msg="StartContainer for \"2b9c6fc169f0314ffa046d4ebb449a9773605b30e5fd1d2cdd58478f706dd515\"" Mar 10 02:10:20.442827 containerd[1558]: time="2026-03-10T02:10:20.442375963Z" level=info msg="connecting to shim 2b9c6fc169f0314ffa046d4ebb449a9773605b30e5fd1d2cdd58478f706dd515" address="unix:///run/containerd/s/19c49e046799f801bd40e364dc61a2797a763173000e4bc9d2e8a336a23f0504" protocol=ttrpc version=3 Mar 10 02:10:20.518186 systemd[1]: Started cri-containerd-2b9c6fc169f0314ffa046d4ebb449a9773605b30e5fd1d2cdd58478f706dd515.scope - libcontainer container 2b9c6fc169f0314ffa046d4ebb449a9773605b30e5fd1d2cdd58478f706dd515. 
Mar 10 02:10:20.590307 kubelet[2766]: E0310 02:10:20.589930 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:20.799316 containerd[1558]: time="2026-03-10T02:10:20.799177997Z" level=info msg="StartContainer for \"2b9c6fc169f0314ffa046d4ebb449a9773605b30e5fd1d2cdd58478f706dd515\" returns successfully" Mar 10 02:10:20.889440 kubelet[2766]: E0310 02:10:20.888298 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:10:20.939117 kubelet[2766]: I0310 02:10:20.939044 2766 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-7cbd986694-996ct" podStartSLOduration=2.39474619 podStartE2EDuration="10.939025513s" podCreationTimestamp="2026-03-10 02:10:10 +0000 UTC" firstStartedPulling="2026-03-10 02:10:11.788896523 +0000 UTC m=+21.589015813" lastFinishedPulling="2026-03-10 02:10:20.333175847 +0000 UTC m=+30.133295136" observedRunningTime="2026-03-10 02:10:20.933454986 +0000 UTC m=+30.733574285" watchObservedRunningTime="2026-03-10 02:10:20.939025513 +0000 UTC m=+30.739144812" Mar 10 02:10:21.913407 kubelet[2766]: I0310 02:10:21.913134 2766 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 10 02:10:21.914796 kubelet[2766]: E0310 02:10:21.914316 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:10:22.591873 kubelet[2766]: E0310 02:10:22.591118 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:23.119229 kubelet[2766]: I0310 02:10:23.107147 2766 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 10 02:10:23.119229 kubelet[2766]: E0310 02:10:23.107647 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:10:23.921870 kubelet[2766]: E0310 02:10:23.917610 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:10:24.586922 kubelet[2766]: E0310 02:10:24.586226 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:26.597560 kubelet[2766]: E0310 02:10:26.597438 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:28.600416 kubelet[2766]: E0310 02:10:28.600327 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:30.591792 kubelet[2766]: 
E0310 02:10:30.590634 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:32.600458 kubelet[2766]: E0310 02:10:32.600360 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:34.585656 kubelet[2766]: E0310 02:10:34.585275 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:36.589422 kubelet[2766]: E0310 02:10:36.589361 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:38.589189 kubelet[2766]: E0310 02:10:38.588931 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:40.594537 kubelet[2766]: E0310 02:10:40.594332 2766 pod_workers.go:1324] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:42.587907 kubelet[2766]: E0310 02:10:42.587519 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:44.586526 kubelet[2766]: E0310 02:10:44.586067 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:46.589504 kubelet[2766]: E0310 02:10:46.588560 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:48.585739 kubelet[2766]: E0310 02:10:48.584581 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:48.982502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2179378191.mount: Deactivated successfully. 
Mar 10 02:10:49.156132 containerd[1558]: time="2026-03-10T02:10:49.153385376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:10:49.174139 containerd[1558]: time="2026-03-10T02:10:49.174016649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 10 02:10:49.198511 containerd[1558]: time="2026-03-10T02:10:49.198236763Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:10:49.211420 containerd[1558]: time="2026-03-10T02:10:49.209608822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:10:49.211420 containerd[1558]: time="2026-03-10T02:10:49.210630102Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 28.874034594s" Mar 10 02:10:49.211420 containerd[1558]: time="2026-03-10T02:10:49.210664325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 10 02:10:49.244783 containerd[1558]: time="2026-03-10T02:10:49.235236869Z" level=info msg="CreateContainer within sandbox \"885046d79570312e278c1343f06f29d91e050017f03daffbebcadd032c592fc6\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 10 02:10:49.390115 containerd[1558]: time="2026-03-10T02:10:49.388371634Z" level=info 
msg="Container e924c86c1db9ed383ac9673cfb978edaefb5d1921fa043f1f7e488e592840026: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:10:49.518484 containerd[1558]: time="2026-03-10T02:10:49.517710672Z" level=info msg="CreateContainer within sandbox \"885046d79570312e278c1343f06f29d91e050017f03daffbebcadd032c592fc6\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"e924c86c1db9ed383ac9673cfb978edaefb5d1921fa043f1f7e488e592840026\"" Mar 10 02:10:49.521921 containerd[1558]: time="2026-03-10T02:10:49.519304808Z" level=info msg="StartContainer for \"e924c86c1db9ed383ac9673cfb978edaefb5d1921fa043f1f7e488e592840026\"" Mar 10 02:10:49.521921 containerd[1558]: time="2026-03-10T02:10:49.521449736Z" level=info msg="connecting to shim e924c86c1db9ed383ac9673cfb978edaefb5d1921fa043f1f7e488e592840026" address="unix:///run/containerd/s/2a731fa939a0baea813b68ea7971c7e7819c38b92b57ea1dcdb664d8ef821eae" protocol=ttrpc version=3 Mar 10 02:10:49.701641 systemd[1]: Started cri-containerd-e924c86c1db9ed383ac9673cfb978edaefb5d1921fa043f1f7e488e592840026.scope - libcontainer container e924c86c1db9ed383ac9673cfb978edaefb5d1921fa043f1f7e488e592840026. Mar 10 02:10:50.090289 containerd[1558]: time="2026-03-10T02:10:50.090014862Z" level=info msg="StartContainer for \"e924c86c1db9ed383ac9673cfb978edaefb5d1921fa043f1f7e488e592840026\" returns successfully" Mar 10 02:10:50.291500 systemd[1]: cri-containerd-e924c86c1db9ed383ac9673cfb978edaefb5d1921fa043f1f7e488e592840026.scope: Deactivated successfully. 
Mar 10 02:10:50.297190 containerd[1558]: time="2026-03-10T02:10:50.297129799Z" level=info msg="received container exit event container_id:\"e924c86c1db9ed383ac9673cfb978edaefb5d1921fa043f1f7e488e592840026\" id:\"e924c86c1db9ed383ac9673cfb978edaefb5d1921fa043f1f7e488e592840026\" pid:3449 exited_at:{seconds:1773108650 nanos:295589123}" Mar 10 02:10:50.428995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e924c86c1db9ed383ac9673cfb978edaefb5d1921fa043f1f7e488e592840026-rootfs.mount: Deactivated successfully. Mar 10 02:10:50.587534 kubelet[2766]: E0310 02:10:50.586322 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:51.215908 containerd[1558]: time="2026-03-10T02:10:51.215418225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 10 02:10:52.614344 kubelet[2766]: E0310 02:10:52.607757 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:54.587899 kubelet[2766]: E0310 02:10:54.587757 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:56.588560 kubelet[2766]: E0310 02:10:56.588488 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:10:58.591277 kubelet[2766]: E0310 02:10:58.587535 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:11:00.587566 kubelet[2766]: E0310 02:11:00.587274 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:11:02.586597 kubelet[2766]: E0310 02:11:02.586340 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:11:03.505114 containerd[1558]: time="2026-03-10T02:11:03.503378362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:03.510259 containerd[1558]: time="2026-03-10T02:11:03.510137587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 10 02:11:03.519103 containerd[1558]: time="2026-03-10T02:11:03.514530847Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 10 02:11:03.524152 containerd[1558]: time="2026-03-10T02:11:03.523779322Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 12.30831397s" Mar 10 02:11:03.524152 containerd[1558]: time="2026-03-10T02:11:03.523871904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 10 02:11:03.527130 containerd[1558]: time="2026-03-10T02:11:03.526894860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:03.561807 containerd[1558]: time="2026-03-10T02:11:03.561767941Z" level=info msg="CreateContainer within sandbox \"885046d79570312e278c1343f06f29d91e050017f03daffbebcadd032c592fc6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 10 02:11:03.628901 containerd[1558]: time="2026-03-10T02:11:03.625710411Z" level=info msg="Container d6b9f50dc84ded00131070ad8f1048ef81e333b109b1b7106b3fd806cf9184bd: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:11:03.804541 containerd[1558]: time="2026-03-10T02:11:03.804327528Z" level=info msg="CreateContainer within sandbox \"885046d79570312e278c1343f06f29d91e050017f03daffbebcadd032c592fc6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d6b9f50dc84ded00131070ad8f1048ef81e333b109b1b7106b3fd806cf9184bd\"" Mar 10 02:11:03.817844 containerd[1558]: time="2026-03-10T02:11:03.817524465Z" level=info msg="StartContainer for \"d6b9f50dc84ded00131070ad8f1048ef81e333b109b1b7106b3fd806cf9184bd\"" Mar 10 
02:11:03.834545 containerd[1558]: time="2026-03-10T02:11:03.834406646Z" level=info msg="connecting to shim d6b9f50dc84ded00131070ad8f1048ef81e333b109b1b7106b3fd806cf9184bd" address="unix:///run/containerd/s/2a731fa939a0baea813b68ea7971c7e7819c38b92b57ea1dcdb664d8ef821eae" protocol=ttrpc version=3 Mar 10 02:11:03.943197 systemd[1]: Started cri-containerd-d6b9f50dc84ded00131070ad8f1048ef81e333b109b1b7106b3fd806cf9184bd.scope - libcontainer container d6b9f50dc84ded00131070ad8f1048ef81e333b109b1b7106b3fd806cf9184bd. Mar 10 02:11:04.279196 containerd[1558]: time="2026-03-10T02:11:04.278261387Z" level=info msg="StartContainer for \"d6b9f50dc84ded00131070ad8f1048ef81e333b109b1b7106b3fd806cf9184bd\" returns successfully" Mar 10 02:11:04.606577 kubelet[2766]: E0310 02:11:04.606294 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:11:06.590468 kubelet[2766]: E0310 02:11:06.590173 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf" Mar 10 02:11:07.327158 systemd[1]: cri-containerd-d6b9f50dc84ded00131070ad8f1048ef81e333b109b1b7106b3fd806cf9184bd.scope: Deactivated successfully. Mar 10 02:11:07.327612 systemd[1]: cri-containerd-d6b9f50dc84ded00131070ad8f1048ef81e333b109b1b7106b3fd806cf9184bd.scope: Consumed 1.359s CPU time, 184.7M memory peak, 6M read from disk, 177M written to disk. 
Mar 10 02:11:07.424695 kubelet[2766]: I0310 02:11:07.424662 2766 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 10 02:11:07.515530 containerd[1558]: time="2026-03-10T02:11:07.512733710Z" level=info msg="received container exit event container_id:\"d6b9f50dc84ded00131070ad8f1048ef81e333b109b1b7106b3fd806cf9184bd\" id:\"d6b9f50dc84ded00131070ad8f1048ef81e333b109b1b7106b3fd806cf9184bd\" pid:3509 exited_at:{seconds:1773108667 nanos:512221307}" Mar 10 02:11:07.680063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6b9f50dc84ded00131070ad8f1048ef81e333b109b1b7106b3fd806cf9184bd-rootfs.mount: Deactivated successfully. Mar 10 02:11:07.758418 systemd[1]: Created slice kubepods-besteffort-podcf4837fd_30fd_4418_b0ae_29ef45b52c78.slice - libcontainer container kubepods-besteffort-podcf4837fd_30fd_4418_b0ae_29ef45b52c78.slice. Mar 10 02:11:07.873376 kubelet[2766]: I0310 02:11:07.872624 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cf4837fd-30fd-4418-b0ae-29ef45b52c78-whisker-backend-key-pair\") pod \"whisker-5dbcdbd4c9-9mbr4\" (UID: \"cf4837fd-30fd-4418-b0ae-29ef45b52c78\") " pod="calico-system/whisker-5dbcdbd4c9-9mbr4" Mar 10 02:11:07.873376 kubelet[2766]: I0310 02:11:07.872734 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf4837fd-30fd-4418-b0ae-29ef45b52c78-whisker-ca-bundle\") pod \"whisker-5dbcdbd4c9-9mbr4\" (UID: \"cf4837fd-30fd-4418-b0ae-29ef45b52c78\") " pod="calico-system/whisker-5dbcdbd4c9-9mbr4" Mar 10 02:11:07.873376 kubelet[2766]: I0310 02:11:07.872762 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5dv4\" (UniqueName: \"kubernetes.io/projected/cf4837fd-30fd-4418-b0ae-29ef45b52c78-kube-api-access-t5dv4\") pod 
\"whisker-5dbcdbd4c9-9mbr4\" (UID: \"cf4837fd-30fd-4418-b0ae-29ef45b52c78\") " pod="calico-system/whisker-5dbcdbd4c9-9mbr4" Mar 10 02:11:07.873376 kubelet[2766]: I0310 02:11:07.872793 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cf4837fd-30fd-4418-b0ae-29ef45b52c78-nginx-config\") pod \"whisker-5dbcdbd4c9-9mbr4\" (UID: \"cf4837fd-30fd-4418-b0ae-29ef45b52c78\") " pod="calico-system/whisker-5dbcdbd4c9-9mbr4" Mar 10 02:11:07.924487 systemd[1]: Created slice kubepods-besteffort-pod00069936_391e_4dcb_9db5_c1a4f99a929c.slice - libcontainer container kubepods-besteffort-pod00069936_391e_4dcb_9db5_c1a4f99a929c.slice. Mar 10 02:11:07.968519 systemd[1]: Created slice kubepods-besteffort-pod842fa873_9477_4391_bb58_9db26033f987.slice - libcontainer container kubepods-besteffort-pod842fa873_9477_4391_bb58_9db26033f987.slice. Mar 10 02:11:07.978834 kubelet[2766]: I0310 02:11:07.977545 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/00069936-391e-4dcb-9db5-c1a4f99a929c-calico-apiserver-certs\") pod \"calico-apiserver-86bd949797-twmg8\" (UID: \"00069936-391e-4dcb-9db5-c1a4f99a929c\") " pod="calico-system/calico-apiserver-86bd949797-twmg8" Mar 10 02:11:07.978834 kubelet[2766]: I0310 02:11:07.977629 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/842fa873-9477-4391-bb58-9db26033f987-calico-apiserver-certs\") pod \"calico-apiserver-86bd949797-bcqfh\" (UID: \"842fa873-9477-4391-bb58-9db26033f987\") " pod="calico-system/calico-apiserver-86bd949797-bcqfh" Mar 10 02:11:07.978834 kubelet[2766]: I0310 02:11:07.977687 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvq8d\" (UniqueName: 
\"kubernetes.io/projected/00069936-391e-4dcb-9db5-c1a4f99a929c-kube-api-access-rvq8d\") pod \"calico-apiserver-86bd949797-twmg8\" (UID: \"00069936-391e-4dcb-9db5-c1a4f99a929c\") " pod="calico-system/calico-apiserver-86bd949797-twmg8" Mar 10 02:11:07.978834 kubelet[2766]: I0310 02:11:07.977733 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnc98\" (UniqueName: \"kubernetes.io/projected/842fa873-9477-4391-bb58-9db26033f987-kube-api-access-nnc98\") pod \"calico-apiserver-86bd949797-bcqfh\" (UID: \"842fa873-9477-4391-bb58-9db26033f987\") " pod="calico-system/calico-apiserver-86bd949797-bcqfh" Mar 10 02:11:08.022647 systemd[1]: Created slice kubepods-burstable-pod1fd86dd5_a99d_4590_9bee_7a83a7560ea5.slice - libcontainer container kubepods-burstable-pod1fd86dd5_a99d_4590_9bee_7a83a7560ea5.slice. Mar 10 02:11:08.061358 systemd[1]: Created slice kubepods-besteffort-poda1ff8ffe_44c3_4eb5_a6c5_13677f36e3af.slice - libcontainer container kubepods-besteffort-poda1ff8ffe_44c3_4eb5_a6c5_13677f36e3af.slice. 
Mar 10 02:11:08.080165 kubelet[2766]: I0310 02:11:08.079601 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv292\" (UniqueName: \"kubernetes.io/projected/1fd86dd5-a99d-4590-9bee-7a83a7560ea5-kube-api-access-pv292\") pod \"coredns-7d764666f9-z7nwk\" (UID: \"1fd86dd5-a99d-4590-9bee-7a83a7560ea5\") " pod="kube-system/coredns-7d764666f9-z7nwk" Mar 10 02:11:08.080165 kubelet[2766]: I0310 02:11:08.079690 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-rqjmq\" (UID: \"a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af\") " pod="calico-system/goldmane-9f7667bb8-rqjmq" Mar 10 02:11:08.080165 kubelet[2766]: I0310 02:11:08.079717 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2414ddd-640a-46d0-a1b7-587c0cfd947d-config-volume\") pod \"coredns-7d764666f9-vz72q\" (UID: \"a2414ddd-640a-46d0-a1b7-587c0cfd947d\") " pod="kube-system/coredns-7d764666f9-vz72q" Mar 10 02:11:08.080165 kubelet[2766]: I0310 02:11:08.079746 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af-config\") pod \"goldmane-9f7667bb8-rqjmq\" (UID: \"a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af\") " pod="calico-system/goldmane-9f7667bb8-rqjmq" Mar 10 02:11:08.080165 kubelet[2766]: I0310 02:11:08.079771 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hzrj\" (UniqueName: \"kubernetes.io/projected/a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af-kube-api-access-2hzrj\") pod \"goldmane-9f7667bb8-rqjmq\" (UID: \"a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af\") " 
pod="calico-system/goldmane-9f7667bb8-rqjmq" Mar 10 02:11:08.080467 kubelet[2766]: I0310 02:11:08.079797 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6wjk\" (UniqueName: \"kubernetes.io/projected/a2414ddd-640a-46d0-a1b7-587c0cfd947d-kube-api-access-h6wjk\") pod \"coredns-7d764666f9-vz72q\" (UID: \"a2414ddd-640a-46d0-a1b7-587c0cfd947d\") " pod="kube-system/coredns-7d764666f9-vz72q" Mar 10 02:11:08.080467 kubelet[2766]: I0310 02:11:08.079839 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fd86dd5-a99d-4590-9bee-7a83a7560ea5-config-volume\") pod \"coredns-7d764666f9-z7nwk\" (UID: \"1fd86dd5-a99d-4590-9bee-7a83a7560ea5\") " pod="kube-system/coredns-7d764666f9-z7nwk" Mar 10 02:11:08.080467 kubelet[2766]: I0310 02:11:08.079862 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af-goldmane-key-pair\") pod \"goldmane-9f7667bb8-rqjmq\" (UID: \"a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af\") " pod="calico-system/goldmane-9f7667bb8-rqjmq" Mar 10 02:11:08.083647 systemd[1]: Created slice kubepods-burstable-poda2414ddd_640a_46d0_a1b7_587c0cfd947d.slice - libcontainer container kubepods-burstable-poda2414ddd_640a_46d0_a1b7_587c0cfd947d.slice. 
Mar 10 02:11:08.187048 kubelet[2766]: I0310 02:11:08.180750 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/595f5ed2-29fa-4602-8ae6-ba221a4a42bd-tigera-ca-bundle\") pod \"calico-kube-controllers-689586c974-wfw9j\" (UID: \"595f5ed2-29fa-4602-8ae6-ba221a4a42bd\") " pod="calico-system/calico-kube-controllers-689586c974-wfw9j" Mar 10 02:11:08.187048 kubelet[2766]: I0310 02:11:08.180826 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxkz6\" (UniqueName: \"kubernetes.io/projected/595f5ed2-29fa-4602-8ae6-ba221a4a42bd-kube-api-access-pxkz6\") pod \"calico-kube-controllers-689586c974-wfw9j\" (UID: \"595f5ed2-29fa-4602-8ae6-ba221a4a42bd\") " pod="calico-system/calico-kube-controllers-689586c974-wfw9j" Mar 10 02:11:08.187059 systemd[1]: Created slice kubepods-besteffort-pod595f5ed2_29fa_4602_8ae6_ba221a4a42bd.slice - libcontainer container kubepods-besteffort-pod595f5ed2_29fa_4602_8ae6_ba221a4a42bd.slice. 
Mar 10 02:11:08.257098 containerd[1558]: time="2026-03-10T02:11:08.255811984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86bd949797-twmg8,Uid:00069936-391e-4dcb-9db5-c1a4f99a929c,Namespace:calico-system,Attempt:0,}" Mar 10 02:11:08.308613 containerd[1558]: time="2026-03-10T02:11:08.305336518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86bd949797-bcqfh,Uid:842fa873-9477-4391-bb58-9db26033f987,Namespace:calico-system,Attempt:0,}" Mar 10 02:11:08.354348 kubelet[2766]: E0310 02:11:08.351623 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:08.365891 containerd[1558]: time="2026-03-10T02:11:08.365494209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-z7nwk,Uid:1fd86dd5-a99d-4590-9bee-7a83a7560ea5,Namespace:kube-system,Attempt:0,}" Mar 10 02:11:08.401267 containerd[1558]: time="2026-03-10T02:11:08.399667834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dbcdbd4c9-9mbr4,Uid:cf4837fd-30fd-4418-b0ae-29ef45b52c78,Namespace:calico-system,Attempt:0,}" Mar 10 02:11:08.411487 containerd[1558]: time="2026-03-10T02:11:08.411444950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-rqjmq,Uid:a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af,Namespace:calico-system,Attempt:0,}" Mar 10 02:11:08.428769 kubelet[2766]: E0310 02:11:08.428726 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:08.440561 containerd[1558]: time="2026-03-10T02:11:08.440516717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-vz72q,Uid:a2414ddd-640a-46d0-a1b7-587c0cfd947d,Namespace:kube-system,Attempt:0,}" Mar 10 02:11:08.550303 containerd[1558]: 
time="2026-03-10T02:11:08.547816962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689586c974-wfw9j,Uid:595f5ed2-29fa-4602-8ae6-ba221a4a42bd,Namespace:calico-system,Attempt:0,}" Mar 10 02:11:08.613824 systemd[1]: Created slice kubepods-besteffort-poda5c1c4e6_10bd_4317_8c91_c1420d34eabf.slice - libcontainer container kubepods-besteffort-poda5c1c4e6_10bd_4317_8c91_c1420d34eabf.slice. Mar 10 02:11:08.683480 containerd[1558]: time="2026-03-10T02:11:08.677393501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cb4xk,Uid:a5c1c4e6-10bd-4317-8c91-c1420d34eabf,Namespace:calico-system,Attempt:0,}" Mar 10 02:11:08.772427 containerd[1558]: time="2026-03-10T02:11:08.772380262Z" level=info msg="CreateContainer within sandbox \"885046d79570312e278c1343f06f29d91e050017f03daffbebcadd032c592fc6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 10 02:11:09.012657 containerd[1558]: time="2026-03-10T02:11:09.012612353Z" level=info msg="Container 7313cb90afb4ff239126b97ffd8678f037ef6d1eef168718d157e23c6c3acb9d: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:11:09.088941 containerd[1558]: time="2026-03-10T02:11:09.088563600Z" level=info msg="CreateContainer within sandbox \"885046d79570312e278c1343f06f29d91e050017f03daffbebcadd032c592fc6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7313cb90afb4ff239126b97ffd8678f037ef6d1eef168718d157e23c6c3acb9d\"" Mar 10 02:11:09.116801 containerd[1558]: time="2026-03-10T02:11:09.116625449Z" level=info msg="StartContainer for \"7313cb90afb4ff239126b97ffd8678f037ef6d1eef168718d157e23c6c3acb9d\"" Mar 10 02:11:09.128346 containerd[1558]: time="2026-03-10T02:11:09.128297922Z" level=info msg="connecting to shim 7313cb90afb4ff239126b97ffd8678f037ef6d1eef168718d157e23c6c3acb9d" address="unix:///run/containerd/s/2a731fa939a0baea813b68ea7971c7e7819c38b92b57ea1dcdb664d8ef821eae" protocol=ttrpc version=3 Mar 10 02:11:09.176857 
containerd[1558]: time="2026-03-10T02:11:09.176234334Z" level=error msg="Failed to destroy network for sandbox \"b50c23cdea45d605607f715e1bec82a12f18ee8fba3b61f45586b309517bbf4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.190761 containerd[1558]: time="2026-03-10T02:11:09.190706050Z" level=error msg="Failed to destroy network for sandbox \"d16db840867095fd3e63f5ed85d5b97cb67d48d9445969ad7d3b593ddb27316f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.193421 containerd[1558]: time="2026-03-10T02:11:09.193373668Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86bd949797-twmg8,Uid:00069936-391e-4dcb-9db5-c1a4f99a929c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b50c23cdea45d605607f715e1bec82a12f18ee8fba3b61f45586b309517bbf4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.200693 containerd[1558]: time="2026-03-10T02:11:09.200649173Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-z7nwk,Uid:1fd86dd5-a99d-4590-9bee-7a83a7560ea5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d16db840867095fd3e63f5ed85d5b97cb67d48d9445969ad7d3b593ddb27316f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.234013 kubelet[2766]: E0310 02:11:09.230749 2766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d16db840867095fd3e63f5ed85d5b97cb67d48d9445969ad7d3b593ddb27316f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.234013 kubelet[2766]: E0310 02:11:09.230853 2766 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d16db840867095fd3e63f5ed85d5b97cb67d48d9445969ad7d3b593ddb27316f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-z7nwk"
Mar 10 02:11:09.234013 kubelet[2766]: E0310 02:11:09.230879 2766 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d16db840867095fd3e63f5ed85d5b97cb67d48d9445969ad7d3b593ddb27316f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-z7nwk"
Mar 10 02:11:09.234013 kubelet[2766]: E0310 02:11:09.231289 2766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b50c23cdea45d605607f715e1bec82a12f18ee8fba3b61f45586b309517bbf4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.234639 kubelet[2766]: E0310 02:11:09.231341 2766 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b50c23cdea45d605607f715e1bec82a12f18ee8fba3b61f45586b309517bbf4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-86bd949797-twmg8"
Mar 10 02:11:09.234639 kubelet[2766]: E0310 02:11:09.231368 2766 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b50c23cdea45d605607f715e1bec82a12f18ee8fba3b61f45586b309517bbf4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-86bd949797-twmg8"
Mar 10 02:11:09.234639 kubelet[2766]: E0310 02:11:09.231426 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86bd949797-twmg8_calico-system(00069936-391e-4dcb-9db5-c1a4f99a929c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86bd949797-twmg8_calico-system(00069936-391e-4dcb-9db5-c1a4f99a929c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b50c23cdea45d605607f715e1bec82a12f18ee8fba3b61f45586b309517bbf4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-86bd949797-twmg8" podUID="00069936-391e-4dcb-9db5-c1a4f99a929c"
Mar 10 02:11:09.238340 kubelet[2766]: E0310 02:11:09.238019 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-z7nwk_kube-system(1fd86dd5-a99d-4590-9bee-7a83a7560ea5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-z7nwk_kube-system(1fd86dd5-a99d-4590-9bee-7a83a7560ea5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d16db840867095fd3e63f5ed85d5b97cb67d48d9445969ad7d3b593ddb27316f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-z7nwk" podUID="1fd86dd5-a99d-4590-9bee-7a83a7560ea5"
Mar 10 02:11:09.320409 containerd[1558]: time="2026-03-10T02:11:09.319167073Z" level=error msg="Failed to destroy network for sandbox \"34d3ef7588fa531e197493243611707bb3b3218689bee1502d6b39893e353f6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.325776 containerd[1558]: time="2026-03-10T02:11:09.325650052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689586c974-wfw9j,Uid:595f5ed2-29fa-4602-8ae6-ba221a4a42bd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"34d3ef7588fa531e197493243611707bb3b3218689bee1502d6b39893e353f6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.326178 kubelet[2766]: E0310 02:11:09.326090 2766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34d3ef7588fa531e197493243611707bb3b3218689bee1502d6b39893e353f6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.326314 kubelet[2766]: E0310 02:11:09.326190 2766 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34d3ef7588fa531e197493243611707bb3b3218689bee1502d6b39893e353f6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689586c974-wfw9j"
Mar 10 02:11:09.326314 kubelet[2766]: E0310 02:11:09.326215 2766 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34d3ef7588fa531e197493243611707bb3b3218689bee1502d6b39893e353f6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689586c974-wfw9j"
Mar 10 02:11:09.326314 kubelet[2766]: E0310 02:11:09.326275 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-689586c974-wfw9j_calico-system(595f5ed2-29fa-4602-8ae6-ba221a4a42bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-689586c974-wfw9j_calico-system(595f5ed2-29fa-4602-8ae6-ba221a4a42bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34d3ef7588fa531e197493243611707bb3b3218689bee1502d6b39893e353f6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-689586c974-wfw9j" podUID="595f5ed2-29fa-4602-8ae6-ba221a4a42bd"
Mar 10 02:11:09.346504 containerd[1558]: time="2026-03-10T02:11:09.346400068Z" level=error msg="Failed to destroy network for sandbox \"9dd414a9a14a8edee33ec4e96f6378f00f055d917283df3ed3db8f778da1cfaf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.354274 systemd[1]: Started cri-containerd-7313cb90afb4ff239126b97ffd8678f037ef6d1eef168718d157e23c6c3acb9d.scope - libcontainer container 7313cb90afb4ff239126b97ffd8678f037ef6d1eef168718d157e23c6c3acb9d.
Mar 10 02:11:09.354778 containerd[1558]: time="2026-03-10T02:11:09.354077467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-rqjmq,Uid:a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dd414a9a14a8edee33ec4e96f6378f00f055d917283df3ed3db8f778da1cfaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.355030 kubelet[2766]: E0310 02:11:09.354551 2766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dd414a9a14a8edee33ec4e96f6378f00f055d917283df3ed3db8f778da1cfaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.355030 kubelet[2766]: E0310 02:11:09.354613 2766 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dd414a9a14a8edee33ec4e96f6378f00f055d917283df3ed3db8f778da1cfaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-rqjmq"
Mar 10 02:11:09.355030 kubelet[2766]: E0310 02:11:09.354643 2766 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dd414a9a14a8edee33ec4e96f6378f00f055d917283df3ed3db8f778da1cfaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-rqjmq"
Mar 10 02:11:09.355163 kubelet[2766]: E0310 02:11:09.354745 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-rqjmq_calico-system(a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-rqjmq_calico-system(a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dd414a9a14a8edee33ec4e96f6378f00f055d917283df3ed3db8f778da1cfaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-rqjmq" podUID="a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af"
Mar 10 02:11:09.359058 containerd[1558]: time="2026-03-10T02:11:09.358891801Z" level=error msg="Failed to destroy network for sandbox \"912865ba1825fe0d1316b39d9e34e6b145137b7481c2bdb1298a80cb0b315388\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.371621 containerd[1558]: time="2026-03-10T02:11:09.371376929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dbcdbd4c9-9mbr4,Uid:cf4837fd-30fd-4418-b0ae-29ef45b52c78,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"912865ba1825fe0d1316b39d9e34e6b145137b7481c2bdb1298a80cb0b315388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.384185 containerd[1558]: time="2026-03-10T02:11:09.383822403Z" level=error msg="Failed to destroy network for sandbox \"a9083d17468ba1cd12422a661ec769e4112bb7a850ccab4467e9a3d2af4a4ef6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.384374 kubelet[2766]: E0310 02:11:09.384260 2766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912865ba1825fe0d1316b39d9e34e6b145137b7481c2bdb1298a80cb0b315388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.384374 kubelet[2766]: E0310 02:11:09.384325 2766 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912865ba1825fe0d1316b39d9e34e6b145137b7481c2bdb1298a80cb0b315388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dbcdbd4c9-9mbr4"
Mar 10 02:11:09.384374 kubelet[2766]: E0310 02:11:09.384351 2766 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912865ba1825fe0d1316b39d9e34e6b145137b7481c2bdb1298a80cb0b315388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dbcdbd4c9-9mbr4"
Mar 10 02:11:09.384662 kubelet[2766]: E0310 02:11:09.384495 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5dbcdbd4c9-9mbr4_calico-system(cf4837fd-30fd-4418-b0ae-29ef45b52c78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5dbcdbd4c9-9mbr4_calico-system(cf4837fd-30fd-4418-b0ae-29ef45b52c78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"912865ba1825fe0d1316b39d9e34e6b145137b7481c2bdb1298a80cb0b315388\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dbcdbd4c9-9mbr4" podUID="cf4837fd-30fd-4418-b0ae-29ef45b52c78"
Mar 10 02:11:09.401294 containerd[1558]: time="2026-03-10T02:11:09.401177976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-vz72q,Uid:a2414ddd-640a-46d0-a1b7-587c0cfd947d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9083d17468ba1cd12422a661ec769e4112bb7a850ccab4467e9a3d2af4a4ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.403666 kubelet[2766]: E0310 02:11:09.403472 2766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9083d17468ba1cd12422a661ec769e4112bb7a850ccab4467e9a3d2af4a4ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.403666 kubelet[2766]: E0310 02:11:09.403595 2766 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9083d17468ba1cd12422a661ec769e4112bb7a850ccab4467e9a3d2af4a4ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-vz72q"
Mar 10 02:11:09.403666 kubelet[2766]: E0310 02:11:09.403625 2766 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9083d17468ba1cd12422a661ec769e4112bb7a850ccab4467e9a3d2af4a4ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-vz72q"
Mar 10 02:11:09.403835 kubelet[2766]: E0310 02:11:09.403691 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-vz72q_kube-system(a2414ddd-640a-46d0-a1b7-587c0cfd947d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-vz72q_kube-system(a2414ddd-640a-46d0-a1b7-587c0cfd947d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9083d17468ba1cd12422a661ec769e4112bb7a850ccab4467e9a3d2af4a4ef6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-vz72q" podUID="a2414ddd-640a-46d0-a1b7-587c0cfd947d"
Mar 10 02:11:09.426198 containerd[1558]: time="2026-03-10T02:11:09.425153069Z" level=error msg="Failed to destroy network for sandbox \"f105eabd60195d874ebecc1df6d2b77fa3e7dce0c7c52942c3d9e89f6df4e9d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.437369 containerd[1558]: time="2026-03-10T02:11:09.437144645Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86bd949797-bcqfh,Uid:842fa873-9477-4391-bb58-9db26033f987,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f105eabd60195d874ebecc1df6d2b77fa3e7dce0c7c52942c3d9e89f6df4e9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.437560 kubelet[2766]: E0310 02:11:09.437504 2766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f105eabd60195d874ebecc1df6d2b77fa3e7dce0c7c52942c3d9e89f6df4e9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.437631 kubelet[2766]: E0310 02:11:09.437573 2766 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f105eabd60195d874ebecc1df6d2b77fa3e7dce0c7c52942c3d9e89f6df4e9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-86bd949797-bcqfh"
Mar 10 02:11:09.437631 kubelet[2766]: E0310 02:11:09.437596 2766 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f105eabd60195d874ebecc1df6d2b77fa3e7dce0c7c52942c3d9e89f6df4e9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-86bd949797-bcqfh"
Mar 10 02:11:09.437725 kubelet[2766]: E0310 02:11:09.437653 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86bd949797-bcqfh_calico-system(842fa873-9477-4391-bb58-9db26033f987)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86bd949797-bcqfh_calico-system(842fa873-9477-4391-bb58-9db26033f987)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f105eabd60195d874ebecc1df6d2b77fa3e7dce0c7c52942c3d9e89f6df4e9d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-86bd949797-bcqfh" podUID="842fa873-9477-4391-bb58-9db26033f987"
Mar 10 02:11:09.479177 containerd[1558]: time="2026-03-10T02:11:09.478595073Z" level=error msg="Failed to destroy network for sandbox \"19cf953186ad2c5247e6d9108f2a5f031457269dc36eca78a4269f02a9e1e0b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.491150 containerd[1558]: time="2026-03-10T02:11:09.491014790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cb4xk,Uid:a5c1c4e6-10bd-4317-8c91-c1420d34eabf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19cf953186ad2c5247e6d9108f2a5f031457269dc36eca78a4269f02a9e1e0b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.491376 kubelet[2766]: E0310 02:11:09.491308 2766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19cf953186ad2c5247e6d9108f2a5f031457269dc36eca78a4269f02a9e1e0b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 10 02:11:09.491468 kubelet[2766]: E0310 02:11:09.491372 2766 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19cf953186ad2c5247e6d9108f2a5f031457269dc36eca78a4269f02a9e1e0b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cb4xk"
Mar 10 02:11:09.491468 kubelet[2766]: E0310 02:11:09.491399 2766 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19cf953186ad2c5247e6d9108f2a5f031457269dc36eca78a4269f02a9e1e0b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cb4xk"
Mar 10 02:11:09.491561 kubelet[2766]: E0310 02:11:09.491466 2766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cb4xk_calico-system(a5c1c4e6-10bd-4317-8c91-c1420d34eabf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cb4xk_calico-system(a5c1c4e6-10bd-4317-8c91-c1420d34eabf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19cf953186ad2c5247e6d9108f2a5f031457269dc36eca78a4269f02a9e1e0b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cb4xk" podUID="a5c1c4e6-10bd-4317-8c91-c1420d34eabf"
Mar 10 02:11:09.705997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4099266467.mount: Deactivated successfully.
Mar 10 02:11:09.706143 systemd[1]: run-netns-cni\x2d3b9cf945\x2d957d\x2da244\x2dda1c\x2dc509a9e65451.mount: Deactivated successfully.
Mar 10 02:11:09.706235 systemd[1]: run-netns-cni\x2d37904288\x2da43c\x2d9514\x2db28f\x2dc654ef891361.mount: Deactivated successfully.
Mar 10 02:11:09.706328 systemd[1]: run-netns-cni\x2d02dad4c6\x2d3771\x2dea6a\x2d5d76\x2d4181c0816c76.mount: Deactivated successfully.
Mar 10 02:11:09.706419 systemd[1]: run-netns-cni\x2d11a2011f\x2d1591\x2d8049\x2ddfba\x2db747fccdbb4f.mount: Deactivated successfully.
Mar 10 02:11:09.706506 systemd[1]: run-netns-cni\x2d1da9776a\x2d0d2f\x2d0674\x2d21bd\x2dccad160adbd1.mount: Deactivated successfully.
Mar 10 02:11:09.706588 systemd[1]: run-netns-cni\x2d7f3ebbd9\x2d0b50\x2d8836\x2d27d3\x2da8a123a1ad79.mount: Deactivated successfully.
Mar 10 02:11:09.706671 systemd[1]: run-netns-cni\x2d5d19bee9\x2d7d2a\x2d1eb4\x2df13a\x2dfce84bde2352.mount: Deactivated successfully.
Mar 10 02:11:09.706764 systemd[1]: run-netns-cni\x2d927275b3\x2d28ae\x2ddc52\x2d62ca\x2df5048bcafd81.mount: Deactivated successfully.
Mar 10 02:11:09.759012 containerd[1558]: time="2026-03-10T02:11:09.758813218Z" level=info msg="StartContainer for \"7313cb90afb4ff239126b97ffd8678f037ef6d1eef168718d157e23c6c3acb9d\" returns successfully"
Mar 10 02:11:10.732096 kubelet[2766]: I0310 02:11:10.731461 2766 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-7j782" podStartSLOduration=2.984447397 podStartE2EDuration="59.731446088s" podCreationTimestamp="2026-03-10 02:10:11 +0000 UTC" firstStartedPulling="2026-03-10 02:10:11.78814019 +0000 UTC m=+21.588259479" lastFinishedPulling="2026-03-10 02:11:08.535138841 +0000 UTC m=+78.335258170" observedRunningTime="2026-03-10 02:11:10.727796522 +0000 UTC m=+80.527915810" watchObservedRunningTime="2026-03-10 02:11:10.731446088 +0000 UTC m=+80.531565377"
Mar 10 02:11:11.280848 kubelet[2766]: I0310 02:11:11.279155 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/cf4837fd-30fd-4418-b0ae-29ef45b52c78-kube-api-access-t5dv4\" (UniqueName: \"kubernetes.io/projected/cf4837fd-30fd-4418-b0ae-29ef45b52c78-kube-api-access-t5dv4\") pod \"cf4837fd-30fd-4418-b0ae-29ef45b52c78\" (UID: \"cf4837fd-30fd-4418-b0ae-29ef45b52c78\") "
Mar 10 02:11:11.281778 kubelet[2766]: I0310 02:11:11.281323 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/cf4837fd-30fd-4418-b0ae-29ef45b52c78-nginx-config\" (UniqueName: \"kubernetes.io/configmap/cf4837fd-30fd-4418-b0ae-29ef45b52c78-nginx-config\") pod \"cf4837fd-30fd-4418-b0ae-29ef45b52c78\" (UID: \"cf4837fd-30fd-4418-b0ae-29ef45b52c78\") "
Mar 10 02:11:11.281778 kubelet[2766]: I0310 02:11:11.281600 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/cf4837fd-30fd-4418-b0ae-29ef45b52c78-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cf4837fd-30fd-4418-b0ae-29ef45b52c78-whisker-backend-key-pair\") pod \"cf4837fd-30fd-4418-b0ae-29ef45b52c78\" (UID: \"cf4837fd-30fd-4418-b0ae-29ef45b52c78\") "
Mar 10 02:11:11.281778 kubelet[2766]: I0310 02:11:11.281722 2766 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/cf4837fd-30fd-4418-b0ae-29ef45b52c78-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf4837fd-30fd-4418-b0ae-29ef45b52c78-whisker-ca-bundle\") pod \"cf4837fd-30fd-4418-b0ae-29ef45b52c78\" (UID: \"cf4837fd-30fd-4418-b0ae-29ef45b52c78\") "
Mar 10 02:11:11.290299 kubelet[2766]: I0310 02:11:11.290220 2766 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf4837fd-30fd-4418-b0ae-29ef45b52c78-nginx-config" pod "cf4837fd-30fd-4418-b0ae-29ef45b52c78" (UID: "cf4837fd-30fd-4418-b0ae-29ef45b52c78"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 10 02:11:11.290520 kubelet[2766]: I0310 02:11:11.290359 2766 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf4837fd-30fd-4418-b0ae-29ef45b52c78-whisker-ca-bundle" pod "cf4837fd-30fd-4418-b0ae-29ef45b52c78" (UID: "cf4837fd-30fd-4418-b0ae-29ef45b52c78"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 10 02:11:11.302489 systemd[1]: var-lib-kubelet-pods-cf4837fd\x2d30fd\x2d4418\x2db0ae\x2d29ef45b52c78-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt5dv4.mount: Deactivated successfully.
Mar 10 02:11:11.305640 kubelet[2766]: I0310 02:11:11.305570 2766 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4837fd-30fd-4418-b0ae-29ef45b52c78-kube-api-access-t5dv4" pod "cf4837fd-30fd-4418-b0ae-29ef45b52c78" (UID: "cf4837fd-30fd-4418-b0ae-29ef45b52c78"). InnerVolumeSpecName "kube-api-access-t5dv4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 10 02:11:11.307233 kubelet[2766]: I0310 02:11:11.307189 2766 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4837fd-30fd-4418-b0ae-29ef45b52c78-whisker-backend-key-pair" pod "cf4837fd-30fd-4418-b0ae-29ef45b52c78" (UID: "cf4837fd-30fd-4418-b0ae-29ef45b52c78"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 10 02:11:11.316189 systemd[1]: var-lib-kubelet-pods-cf4837fd\x2d30fd\x2d4418\x2db0ae\x2d29ef45b52c78-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Mar 10 02:11:11.387997 kubelet[2766]: I0310 02:11:11.386531 2766 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t5dv4\" (UniqueName: \"kubernetes.io/projected/cf4837fd-30fd-4418-b0ae-29ef45b52c78-kube-api-access-t5dv4\") on node \"localhost\" DevicePath \"\""
Mar 10 02:11:11.387997 kubelet[2766]: I0310 02:11:11.386577 2766 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cf4837fd-30fd-4418-b0ae-29ef45b52c78-nginx-config\") on node \"localhost\" DevicePath \"\""
Mar 10 02:11:11.387997 kubelet[2766]: I0310 02:11:11.386593 2766 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cf4837fd-30fd-4418-b0ae-29ef45b52c78-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Mar 10 02:11:11.387997 kubelet[2766]: I0310 02:11:11.386605 2766 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf4837fd-30fd-4418-b0ae-29ef45b52c78-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Mar 10 02:11:11.634006 systemd[1]: Removed slice kubepods-besteffort-podcf4837fd_30fd_4418_b0ae_29ef45b52c78.slice - libcontainer container kubepods-besteffort-podcf4837fd_30fd_4418_b0ae_29ef45b52c78.slice.
Mar 10 02:11:11.944444 systemd[1]: Created slice kubepods-besteffort-podf1ea4b2d_7754_4639_8cc2_56367de4b18d.slice - libcontainer container kubepods-besteffort-podf1ea4b2d_7754_4639_8cc2_56367de4b18d.slice.
Mar 10 02:11:11.994412 kubelet[2766]: I0310 02:11:11.993799 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ghqv\" (UniqueName: \"kubernetes.io/projected/f1ea4b2d-7754-4639-8cc2-56367de4b18d-kube-api-access-4ghqv\") pod \"whisker-7f8bc5f4d4-8vk5t\" (UID: \"f1ea4b2d-7754-4639-8cc2-56367de4b18d\") " pod="calico-system/whisker-7f8bc5f4d4-8vk5t"
Mar 10 02:11:11.994412 kubelet[2766]: I0310 02:11:11.993908 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1ea4b2d-7754-4639-8cc2-56367de4b18d-whisker-backend-key-pair\") pod \"whisker-7f8bc5f4d4-8vk5t\" (UID: \"f1ea4b2d-7754-4639-8cc2-56367de4b18d\") " pod="calico-system/whisker-7f8bc5f4d4-8vk5t"
Mar 10 02:11:11.997287 kubelet[2766]: I0310 02:11:11.996020 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f1ea4b2d-7754-4639-8cc2-56367de4b18d-nginx-config\") pod \"whisker-7f8bc5f4d4-8vk5t\" (UID: \"f1ea4b2d-7754-4639-8cc2-56367de4b18d\") " pod="calico-system/whisker-7f8bc5f4d4-8vk5t"
Mar 10 02:11:11.997287 kubelet[2766]: I0310 02:11:11.997021 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1ea4b2d-7754-4639-8cc2-56367de4b18d-whisker-ca-bundle\") pod \"whisker-7f8bc5f4d4-8vk5t\" (UID: \"f1ea4b2d-7754-4639-8cc2-56367de4b18d\") " pod="calico-system/whisker-7f8bc5f4d4-8vk5t"
Mar 10 02:11:12.281380 containerd[1558]: time="2026-03-10T02:11:12.281216123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f8bc5f4d4-8vk5t,Uid:f1ea4b2d-7754-4639-8cc2-56367de4b18d,Namespace:calico-system,Attempt:0,}"
Mar 10 02:11:12.594992 kubelet[2766]: I0310 02:11:12.594676 2766 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cf4837fd-30fd-4418-b0ae-29ef45b52c78" path="/var/lib/kubelet/pods/cf4837fd-30fd-4418-b0ae-29ef45b52c78/volumes"
Mar 10 02:11:12.794119 systemd-networkd[1464]: cali9e1d5ebad0b: Link UP
Mar 10 02:11:12.794396 systemd-networkd[1464]: cali9e1d5ebad0b: Gained carrier
Mar 10 02:11:12.837995 containerd[1558]: 2026-03-10 02:11:12.394 [ERROR][3873] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 10 02:11:12.837995 containerd[1558]: 2026-03-10 02:11:12.475 [INFO][3873] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7f8bc5f4d4--8vk5t-eth0 whisker-7f8bc5f4d4- calico-system f1ea4b2d-7754-4639-8cc2-56367de4b18d 1077 0 2026-03-10 02:11:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f8bc5f4d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7f8bc5f4d4-8vk5t eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9e1d5ebad0b [] [] }} ContainerID="3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" Namespace="calico-system" Pod="whisker-7f8bc5f4d4-8vk5t" WorkloadEndpoint="localhost-k8s-whisker--7f8bc5f4d4--8vk5t-"
Mar 10 02:11:12.837995 containerd[1558]: 2026-03-10 02:11:12.475 [INFO][3873] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" Namespace="calico-system" Pod="whisker-7f8bc5f4d4-8vk5t" WorkloadEndpoint="localhost-k8s-whisker--7f8bc5f4d4--8vk5t-eth0"
Mar 10 02:11:12.837995 containerd[1558]: 2026-03-10 02:11:12.582 [INFO][3887] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" HandleID="k8s-pod-network.3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" Workload="localhost-k8s-whisker--7f8bc5f4d4--8vk5t-eth0"
Mar 10 02:11:12.838328 containerd[1558]: 2026-03-10 02:11:12.608 [INFO][3887] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" HandleID="k8s-pod-network.3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" Workload="localhost-k8s-whisker--7f8bc5f4d4--8vk5t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00047a170), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7f8bc5f4d4-8vk5t", "timestamp":"2026-03-10 02:11:12.582672706 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003d4b00)}
Mar 10 02:11:12.838328 containerd[1558]: 2026-03-10 02:11:12.608 [INFO][3887] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 10 02:11:12.838328 containerd[1558]: 2026-03-10 02:11:12.608 [INFO][3887] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 10 02:11:12.838328 containerd[1558]: 2026-03-10 02:11:12.608 [INFO][3887] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 10 02:11:12.838328 containerd[1558]: 2026-03-10 02:11:12.623 [INFO][3887] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" host="localhost"
Mar 10 02:11:12.838328 containerd[1558]: 2026-03-10 02:11:12.649 [INFO][3887] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 10 02:11:12.838328 containerd[1558]: 2026-03-10 02:11:12.669 [INFO][3887] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 10 02:11:12.838328 containerd[1558]: 2026-03-10 02:11:12.677 [INFO][3887] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 10 02:11:12.838328 containerd[1558]: 2026-03-10 02:11:12.690 [INFO][3887] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 10 02:11:12.838328 containerd[1558]: 2026-03-10 02:11:12.690 [INFO][3887] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" host="localhost"
Mar 10 02:11:12.838689 containerd[1558]: 2026-03-10 02:11:12.703 [INFO][3887] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9
Mar 10 02:11:12.838689 containerd[1558]: 2026-03-10 02:11:12.712 [INFO][3887] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" host="localhost"
Mar 10 02:11:12.838689 containerd[1558]: 2026-03-10 02:11:12.734 [INFO][3887] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26
handle="k8s-pod-network.3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" host="localhost" Mar 10 02:11:12.838689 containerd[1558]: 2026-03-10 02:11:12.735 [INFO][3887] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" host="localhost" Mar 10 02:11:12.838689 containerd[1558]: 2026-03-10 02:11:12.735 [INFO][3887] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:11:12.838689 containerd[1558]: 2026-03-10 02:11:12.735 [INFO][3887] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" HandleID="k8s-pod-network.3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" Workload="localhost-k8s-whisker--7f8bc5f4d4--8vk5t-eth0" Mar 10 02:11:12.839042 containerd[1558]: 2026-03-10 02:11:12.743 [INFO][3873] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" Namespace="calico-system" Pod="whisker-7f8bc5f4d4-8vk5t" WorkloadEndpoint="localhost-k8s-whisker--7f8bc5f4d4--8vk5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f8bc5f4d4--8vk5t-eth0", GenerateName:"whisker-7f8bc5f4d4-", Namespace:"calico-system", SelfLink:"", UID:"f1ea4b2d-7754-4639-8cc2-56367de4b18d", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 11, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f8bc5f4d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7f8bc5f4d4-8vk5t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9e1d5ebad0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:11:12.839042 containerd[1558]: 2026-03-10 02:11:12.743 [INFO][3873] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" Namespace="calico-system" Pod="whisker-7f8bc5f4d4-8vk5t" WorkloadEndpoint="localhost-k8s-whisker--7f8bc5f4d4--8vk5t-eth0" Mar 10 02:11:12.839205 containerd[1558]: 2026-03-10 02:11:12.743 [INFO][3873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e1d5ebad0b ContainerID="3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" Namespace="calico-system" Pod="whisker-7f8bc5f4d4-8vk5t" WorkloadEndpoint="localhost-k8s-whisker--7f8bc5f4d4--8vk5t-eth0" Mar 10 02:11:12.839205 containerd[1558]: 2026-03-10 02:11:12.794 [INFO][3873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" Namespace="calico-system" Pod="whisker-7f8bc5f4d4-8vk5t" WorkloadEndpoint="localhost-k8s-whisker--7f8bc5f4d4--8vk5t-eth0" Mar 10 02:11:12.839266 containerd[1558]: 2026-03-10 02:11:12.794 [INFO][3873] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" Namespace="calico-system" Pod="whisker-7f8bc5f4d4-8vk5t" 
WorkloadEndpoint="localhost-k8s-whisker--7f8bc5f4d4--8vk5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f8bc5f4d4--8vk5t-eth0", GenerateName:"whisker-7f8bc5f4d4-", Namespace:"calico-system", SelfLink:"", UID:"f1ea4b2d-7754-4639-8cc2-56367de4b18d", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 11, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f8bc5f4d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9", Pod:"whisker-7f8bc5f4d4-8vk5t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9e1d5ebad0b", MAC:"be:01:9c:5f:aa:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:11:12.839389 containerd[1558]: 2026-03-10 02:11:12.829 [INFO][3873] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" Namespace="calico-system" Pod="whisker-7f8bc5f4d4-8vk5t" WorkloadEndpoint="localhost-k8s-whisker--7f8bc5f4d4--8vk5t-eth0" Mar 10 02:11:13.104881 containerd[1558]: time="2026-03-10T02:11:13.104646360Z" level=info msg="connecting to shim 
3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9" address="unix:///run/containerd/s/0bb5e0ebbf5aae0c2b7bafd178d3c8d48d90a9860b8b58d0d961d386c4f83939" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:11:13.265288 systemd[1]: Started cri-containerd-3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9.scope - libcontainer container 3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9. Mar 10 02:11:13.349253 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 02:11:13.514551 containerd[1558]: time="2026-03-10T02:11:13.514336324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f8bc5f4d4-8vk5t,Uid:f1ea4b2d-7754-4639-8cc2-56367de4b18d,Namespace:calico-system,Attempt:0,} returns sandbox id \"3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9\"" Mar 10 02:11:13.533039 containerd[1558]: time="2026-03-10T02:11:13.532902511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 10 02:11:13.588804 kubelet[2766]: E0310 02:11:13.588332 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:14.829172 systemd-networkd[1464]: cali9e1d5ebad0b: Gained IPv6LL Mar 10 02:11:15.156873 containerd[1558]: time="2026-03-10T02:11:15.156520898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:15.169084 containerd[1558]: time="2026-03-10T02:11:15.165855773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 10 02:11:15.172927 containerd[1558]: time="2026-03-10T02:11:15.171483592Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:15.182020 containerd[1558]: time="2026-03-10T02:11:15.181232504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:15.182587 containerd[1558]: time="2026-03-10T02:11:15.182555886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.64752352s" Mar 10 02:11:15.182720 containerd[1558]: time="2026-03-10T02:11:15.182697358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 10 02:11:15.207017 containerd[1558]: time="2026-03-10T02:11:15.202797377Z" level=info msg="CreateContainer within sandbox \"3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 10 02:11:15.255778 containerd[1558]: time="2026-03-10T02:11:15.254119747Z" level=info msg="Container 038ee338ac5079f4ee96ecd7892a8c1722aac0941439e69c5d1764f4197b8ea3: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:11:15.303184 containerd[1558]: time="2026-03-10T02:11:15.303016572Z" level=info msg="CreateContainer within sandbox \"3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"038ee338ac5079f4ee96ecd7892a8c1722aac0941439e69c5d1764f4197b8ea3\"" Mar 10 02:11:15.312204 containerd[1558]: time="2026-03-10T02:11:15.312063044Z" level=info msg="StartContainer for 
\"038ee338ac5079f4ee96ecd7892a8c1722aac0941439e69c5d1764f4197b8ea3\"" Mar 10 02:11:15.322140 containerd[1558]: time="2026-03-10T02:11:15.321277327Z" level=info msg="connecting to shim 038ee338ac5079f4ee96ecd7892a8c1722aac0941439e69c5d1764f4197b8ea3" address="unix:///run/containerd/s/0bb5e0ebbf5aae0c2b7bafd178d3c8d48d90a9860b8b58d0d961d386c4f83939" protocol=ttrpc version=3 Mar 10 02:11:15.423388 systemd[1]: Started cri-containerd-038ee338ac5079f4ee96ecd7892a8c1722aac0941439e69c5d1764f4197b8ea3.scope - libcontainer container 038ee338ac5079f4ee96ecd7892a8c1722aac0941439e69c5d1764f4197b8ea3. Mar 10 02:11:15.742502 systemd-networkd[1464]: vxlan.calico: Link UP Mar 10 02:11:15.742517 systemd-networkd[1464]: vxlan.calico: Gained carrier Mar 10 02:11:15.933871 containerd[1558]: time="2026-03-10T02:11:15.933503760Z" level=info msg="StartContainer for \"038ee338ac5079f4ee96ecd7892a8c1722aac0941439e69c5d1764f4197b8ea3\" returns successfully" Mar 10 02:11:15.958273 containerd[1558]: time="2026-03-10T02:11:15.958199156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 10 02:11:17.771442 systemd-networkd[1464]: vxlan.calico: Gained IPv6LL Mar 10 02:11:18.805177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3460056189.mount: Deactivated successfully. 
Mar 10 02:11:18.972500 containerd[1558]: time="2026-03-10T02:11:18.972203341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:18.975820 containerd[1558]: time="2026-03-10T02:11:18.975574126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 10 02:11:18.979739 containerd[1558]: time="2026-03-10T02:11:18.979432341Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:18.997682 containerd[1558]: time="2026-03-10T02:11:18.995336334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:18.997682 containerd[1558]: time="2026-03-10T02:11:18.997680831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.039401816s" Mar 10 02:11:18.998021 containerd[1558]: time="2026-03-10T02:11:18.997718430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 10 02:11:19.043873 containerd[1558]: time="2026-03-10T02:11:19.041561576Z" level=info msg="CreateContainer within sandbox \"3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 10 02:11:19.088637 
containerd[1558]: time="2026-03-10T02:11:19.088545114Z" level=info msg="Container bee14caf987cbdde20fca6d49ba4e4f3ecbd172a31eafa7bf1ee3b488c108cdd: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:11:19.137065 containerd[1558]: time="2026-03-10T02:11:19.136765237Z" level=info msg="CreateContainer within sandbox \"3663dd8a3832843d2bb6bd848bcfd4218444b4e3a8b18299885807270e4c60c9\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"bee14caf987cbdde20fca6d49ba4e4f3ecbd172a31eafa7bf1ee3b488c108cdd\"" Mar 10 02:11:19.140069 containerd[1558]: time="2026-03-10T02:11:19.138614716Z" level=info msg="StartContainer for \"bee14caf987cbdde20fca6d49ba4e4f3ecbd172a31eafa7bf1ee3b488c108cdd\"" Mar 10 02:11:19.145472 containerd[1558]: time="2026-03-10T02:11:19.142600315Z" level=info msg="connecting to shim bee14caf987cbdde20fca6d49ba4e4f3ecbd172a31eafa7bf1ee3b488c108cdd" address="unix:///run/containerd/s/0bb5e0ebbf5aae0c2b7bafd178d3c8d48d90a9860b8b58d0d961d386c4f83939" protocol=ttrpc version=3 Mar 10 02:11:19.244294 systemd[1]: Started cri-containerd-bee14caf987cbdde20fca6d49ba4e4f3ecbd172a31eafa7bf1ee3b488c108cdd.scope - libcontainer container bee14caf987cbdde20fca6d49ba4e4f3ecbd172a31eafa7bf1ee3b488c108cdd. 
Mar 10 02:11:19.485219 containerd[1558]: time="2026-03-10T02:11:19.475747654Z" level=info msg="StartContainer for \"bee14caf987cbdde20fca6d49ba4e4f3ecbd172a31eafa7bf1ee3b488c108cdd\" returns successfully" Mar 10 02:11:19.601148 containerd[1558]: time="2026-03-10T02:11:19.598715683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86bd949797-twmg8,Uid:00069936-391e-4dcb-9db5-c1a4f99a929c,Namespace:calico-system,Attempt:0,}" Mar 10 02:11:19.916909 kubelet[2766]: I0310 02:11:19.907369 2766 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-7f8bc5f4d4-8vk5t" podStartSLOduration=3.428908243 podStartE2EDuration="8.907347945s" podCreationTimestamp="2026-03-10 02:11:11 +0000 UTC" firstStartedPulling="2026-03-10 02:11:13.520676286 +0000 UTC m=+83.320795575" lastFinishedPulling="2026-03-10 02:11:18.999115988 +0000 UTC m=+88.799235277" observedRunningTime="2026-03-10 02:11:19.895715919 +0000 UTC m=+89.695835208" watchObservedRunningTime="2026-03-10 02:11:19.907347945 +0000 UTC m=+89.707467234" Mar 10 02:11:20.273150 systemd-networkd[1464]: calib459739a772: Link UP Mar 10 02:11:20.273876 systemd-networkd[1464]: calib459739a772: Gained carrier Mar 10 02:11:20.353104 containerd[1558]: 2026-03-10 02:11:19.879 [INFO][4262] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86bd949797--twmg8-eth0 calico-apiserver-86bd949797- calico-system 00069936-391e-4dcb-9db5-c1a4f99a929c 1014 0 2026-03-10 02:10:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86bd949797 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-86bd949797-twmg8 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib459739a772 [] [] }} 
ContainerID="5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" Namespace="calico-system" Pod="calico-apiserver-86bd949797-twmg8" WorkloadEndpoint="localhost-k8s-calico--apiserver--86bd949797--twmg8-" Mar 10 02:11:20.353104 containerd[1558]: 2026-03-10 02:11:19.883 [INFO][4262] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" Namespace="calico-system" Pod="calico-apiserver-86bd949797-twmg8" WorkloadEndpoint="localhost-k8s-calico--apiserver--86bd949797--twmg8-eth0" Mar 10 02:11:20.353104 containerd[1558]: 2026-03-10 02:11:20.045 [INFO][4277] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" HandleID="k8s-pod-network.5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" Workload="localhost-k8s-calico--apiserver--86bd949797--twmg8-eth0" Mar 10 02:11:20.353684 containerd[1558]: 2026-03-10 02:11:20.073 [INFO][4277] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" HandleID="k8s-pod-network.5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" Workload="localhost-k8s-calico--apiserver--86bd949797--twmg8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00047c560), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-86bd949797-twmg8", "timestamp":"2026-03-10 02:11:20.045146439 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002062c0)} Mar 10 02:11:20.353684 containerd[1558]: 2026-03-10 02:11:20.073 [INFO][4277] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 10 02:11:20.353684 containerd[1558]: 2026-03-10 02:11:20.074 [INFO][4277] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 02:11:20.353684 containerd[1558]: 2026-03-10 02:11:20.075 [INFO][4277] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 02:11:20.353684 containerd[1558]: 2026-03-10 02:11:20.088 [INFO][4277] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" host="localhost" Mar 10 02:11:20.353684 containerd[1558]: 2026-03-10 02:11:20.133 [INFO][4277] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 02:11:20.353684 containerd[1558]: 2026-03-10 02:11:20.160 [INFO][4277] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 02:11:20.353684 containerd[1558]: 2026-03-10 02:11:20.176 [INFO][4277] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 02:11:20.353684 containerd[1558]: 2026-03-10 02:11:20.192 [INFO][4277] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 02:11:20.353684 containerd[1558]: 2026-03-10 02:11:20.192 [INFO][4277] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" host="localhost" Mar 10 02:11:20.354264 containerd[1558]: 2026-03-10 02:11:20.198 [INFO][4277] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc Mar 10 02:11:20.354264 containerd[1558]: 2026-03-10 02:11:20.218 [INFO][4277] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" host="localhost" Mar 10 02:11:20.354264 containerd[1558]: 2026-03-10 02:11:20.244 [INFO][4277] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" host="localhost" Mar 10 02:11:20.354264 containerd[1558]: 2026-03-10 02:11:20.244 [INFO][4277] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" host="localhost" Mar 10 02:11:20.354264 containerd[1558]: 2026-03-10 02:11:20.244 [INFO][4277] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:11:20.354264 containerd[1558]: 2026-03-10 02:11:20.246 [INFO][4277] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" HandleID="k8s-pod-network.5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" Workload="localhost-k8s-calico--apiserver--86bd949797--twmg8-eth0" Mar 10 02:11:20.354450 containerd[1558]: 2026-03-10 02:11:20.258 [INFO][4262] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" Namespace="calico-system" Pod="calico-apiserver-86bd949797-twmg8" WorkloadEndpoint="localhost-k8s-calico--apiserver--86bd949797--twmg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86bd949797--twmg8-eth0", GenerateName:"calico-apiserver-86bd949797-", Namespace:"calico-system", SelfLink:"", UID:"00069936-391e-4dcb-9db5-c1a4f99a929c", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 10, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86bd949797", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86bd949797-twmg8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib459739a772", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:11:20.354566 containerd[1558]: 2026-03-10 02:11:20.258 [INFO][4262] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" Namespace="calico-system" Pod="calico-apiserver-86bd949797-twmg8" WorkloadEndpoint="localhost-k8s-calico--apiserver--86bd949797--twmg8-eth0" Mar 10 02:11:20.354566 containerd[1558]: 2026-03-10 02:11:20.258 [INFO][4262] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib459739a772 ContainerID="5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" Namespace="calico-system" Pod="calico-apiserver-86bd949797-twmg8" WorkloadEndpoint="localhost-k8s-calico--apiserver--86bd949797--twmg8-eth0" Mar 10 02:11:20.354566 containerd[1558]: 2026-03-10 02:11:20.265 [INFO][4262] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" Namespace="calico-system" Pod="calico-apiserver-86bd949797-twmg8" WorkloadEndpoint="localhost-k8s-calico--apiserver--86bd949797--twmg8-eth0" Mar 10 02:11:20.354655 containerd[1558]: 2026-03-10 02:11:20.288 
[INFO][4262] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" Namespace="calico-system" Pod="calico-apiserver-86bd949797-twmg8" WorkloadEndpoint="localhost-k8s-calico--apiserver--86bd949797--twmg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86bd949797--twmg8-eth0", GenerateName:"calico-apiserver-86bd949797-", Namespace:"calico-system", SelfLink:"", UID:"00069936-391e-4dcb-9db5-c1a4f99a929c", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 10, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86bd949797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc", Pod:"calico-apiserver-86bd949797-twmg8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib459739a772", MAC:"a6:d4:f5:92:0d:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:11:20.354808 containerd[1558]: 2026-03-10 02:11:20.342 [INFO][4262] cni-plugin/k8s.go 532: Wrote updated 
endpoint to datastore ContainerID="5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" Namespace="calico-system" Pod="calico-apiserver-86bd949797-twmg8" WorkloadEndpoint="localhost-k8s-calico--apiserver--86bd949797--twmg8-eth0" Mar 10 02:11:20.465904 containerd[1558]: time="2026-03-10T02:11:20.464598715Z" level=info msg="connecting to shim 5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc" address="unix:///run/containerd/s/e005bdb7ce846c972a5f373d6fdd25bdefc96ea194ea2d8e1d0c9ab2f8ebb7b9" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:11:20.583339 systemd[1]: Started cri-containerd-5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc.scope - libcontainer container 5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc. Mar 10 02:11:20.605254 kubelet[2766]: E0310 02:11:20.603755 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:20.606919 containerd[1558]: time="2026-03-10T02:11:20.606732761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-z7nwk,Uid:1fd86dd5-a99d-4590-9bee-7a83a7560ea5,Namespace:kube-system,Attempt:0,}" Mar 10 02:11:20.697099 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 02:11:20.818289 containerd[1558]: time="2026-03-10T02:11:20.818172498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86bd949797-twmg8,Uid:00069936-391e-4dcb-9db5-c1a4f99a929c,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc\"" Mar 10 02:11:20.828890 containerd[1558]: time="2026-03-10T02:11:20.822072650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 10 02:11:21.120425 systemd-networkd[1464]: cali29f99bb0dfe: Link UP Mar 10 02:11:21.129850 
systemd-networkd[1464]: cali29f99bb0dfe: Gained carrier Mar 10 02:11:21.187189 containerd[1558]: 2026-03-10 02:11:20.767 [INFO][4346] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--z7nwk-eth0 coredns-7d764666f9- kube-system 1fd86dd5-a99d-4590-9bee-7a83a7560ea5 1016 0 2026-03-10 02:09:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-z7nwk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali29f99bb0dfe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" Namespace="kube-system" Pod="coredns-7d764666f9-z7nwk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--z7nwk-" Mar 10 02:11:21.187189 containerd[1558]: 2026-03-10 02:11:20.769 [INFO][4346] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" Namespace="kube-system" Pod="coredns-7d764666f9-z7nwk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--z7nwk-eth0" Mar 10 02:11:21.187189 containerd[1558]: 2026-03-10 02:11:20.895 [INFO][4366] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" HandleID="k8s-pod-network.65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" Workload="localhost-k8s-coredns--7d764666f9--z7nwk-eth0" Mar 10 02:11:21.187543 containerd[1558]: 2026-03-10 02:11:20.921 [INFO][4366] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" 
HandleID="k8s-pod-network.65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" Workload="localhost-k8s-coredns--7d764666f9--z7nwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000b8930), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-z7nwk", "timestamp":"2026-03-10 02:11:20.895669603 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004cc580)} Mar 10 02:11:21.187543 containerd[1558]: 2026-03-10 02:11:20.921 [INFO][4366] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:11:21.187543 containerd[1558]: 2026-03-10 02:11:20.921 [INFO][4366] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 02:11:21.187543 containerd[1558]: 2026-03-10 02:11:20.922 [INFO][4366] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 02:11:21.187543 containerd[1558]: 2026-03-10 02:11:20.937 [INFO][4366] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" host="localhost" Mar 10 02:11:21.187543 containerd[1558]: 2026-03-10 02:11:20.964 [INFO][4366] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 02:11:21.187543 containerd[1558]: 2026-03-10 02:11:20.988 [INFO][4366] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 02:11:21.187543 containerd[1558]: 2026-03-10 02:11:20.997 [INFO][4366] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 02:11:21.187543 containerd[1558]: 2026-03-10 02:11:21.010 [INFO][4366] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 02:11:21.187543 
containerd[1558]: 2026-03-10 02:11:21.011 [INFO][4366] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" host="localhost" Mar 10 02:11:21.192181 containerd[1558]: 2026-03-10 02:11:21.027 [INFO][4366] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce Mar 10 02:11:21.192181 containerd[1558]: 2026-03-10 02:11:21.060 [INFO][4366] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" host="localhost" Mar 10 02:11:21.192181 containerd[1558]: 2026-03-10 02:11:21.097 [INFO][4366] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" host="localhost" Mar 10 02:11:21.192181 containerd[1558]: 2026-03-10 02:11:21.098 [INFO][4366] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" host="localhost" Mar 10 02:11:21.192181 containerd[1558]: 2026-03-10 02:11:21.098 [INFO][4366] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 02:11:21.192181 containerd[1558]: 2026-03-10 02:11:21.098 [INFO][4366] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" HandleID="k8s-pod-network.65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" Workload="localhost-k8s-coredns--7d764666f9--z7nwk-eth0" Mar 10 02:11:21.192411 containerd[1558]: 2026-03-10 02:11:21.111 [INFO][4346] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" Namespace="kube-system" Pod="coredns-7d764666f9-z7nwk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--z7nwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--z7nwk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"1fd86dd5-a99d-4590-9bee-7a83a7560ea5", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 9, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-z7nwk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29f99bb0dfe", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:11:21.192411 containerd[1558]: 2026-03-10 02:11:21.111 [INFO][4346] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" Namespace="kube-system" Pod="coredns-7d764666f9-z7nwk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--z7nwk-eth0" Mar 10 02:11:21.192411 containerd[1558]: 2026-03-10 02:11:21.111 [INFO][4346] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29f99bb0dfe ContainerID="65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" Namespace="kube-system" Pod="coredns-7d764666f9-z7nwk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--z7nwk-eth0" Mar 10 02:11:21.192411 containerd[1558]: 2026-03-10 02:11:21.118 [INFO][4346] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" Namespace="kube-system" Pod="coredns-7d764666f9-z7nwk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--z7nwk-eth0" Mar 10 02:11:21.192411 containerd[1558]: 2026-03-10 02:11:21.132 [INFO][4346] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" Namespace="kube-system" Pod="coredns-7d764666f9-z7nwk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--z7nwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--z7nwk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"1fd86dd5-a99d-4590-9bee-7a83a7560ea5", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 9, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce", Pod:"coredns-7d764666f9-z7nwk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29f99bb0dfe", MAC:"ae:aa:9d:61:87:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:11:21.192411 containerd[1558]: 2026-03-10 02:11:21.171 [INFO][4346] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" Namespace="kube-system" Pod="coredns-7d764666f9-z7nwk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--z7nwk-eth0" Mar 10 02:11:21.363210 systemd-networkd[1464]: calib459739a772: Gained IPv6LL Mar 10 02:11:21.371131 containerd[1558]: time="2026-03-10T02:11:21.370737807Z" level=info msg="connecting to shim 65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce" address="unix:///run/containerd/s/ab44599574dc5511bd235359a2d029e5cb0e9ea5e586c998759217aa7678da3f" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:11:21.507458 systemd[1]: Started cri-containerd-65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce.scope - libcontainer container 65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce. 
Mar 10 02:11:21.599561 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 02:11:21.601716 containerd[1558]: time="2026-03-10T02:11:21.601034192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cb4xk,Uid:a5c1c4e6-10bd-4317-8c91-c1420d34eabf,Namespace:calico-system,Attempt:0,}" Mar 10 02:11:21.900547 containerd[1558]: time="2026-03-10T02:11:21.900317351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-z7nwk,Uid:1fd86dd5-a99d-4590-9bee-7a83a7560ea5,Namespace:kube-system,Attempt:0,} returns sandbox id \"65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce\"" Mar 10 02:11:21.911890 kubelet[2766]: E0310 02:11:21.908601 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:21.974263 containerd[1558]: time="2026-03-10T02:11:21.973590559Z" level=info msg="CreateContainer within sandbox \"65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 02:11:22.175677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount426023344.mount: Deactivated successfully. Mar 10 02:11:22.243893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount83834403.mount: Deactivated successfully. 
Mar 10 02:11:22.268881 containerd[1558]: time="2026-03-10T02:11:22.267632012Z" level=info msg="Container 8b6b165a31977adabd9e7e9d9c0a281c3cd52154ed41040af600ccd789c01923: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:11:22.317045 systemd-networkd[1464]: cali29f99bb0dfe: Gained IPv6LL Mar 10 02:11:22.409938 containerd[1558]: time="2026-03-10T02:11:22.377636101Z" level=info msg="CreateContainer within sandbox \"65f70b2d8bfa98dd3f7d6b56ab04671cadeb75223aa61ed6b0d303e2984556ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b6b165a31977adabd9e7e9d9c0a281c3cd52154ed41040af600ccd789c01923\"" Mar 10 02:11:22.415305 containerd[1558]: time="2026-03-10T02:11:22.415269015Z" level=info msg="StartContainer for \"8b6b165a31977adabd9e7e9d9c0a281c3cd52154ed41040af600ccd789c01923\"" Mar 10 02:11:22.421541 containerd[1558]: time="2026-03-10T02:11:22.421491246Z" level=info msg="connecting to shim 8b6b165a31977adabd9e7e9d9c0a281c3cd52154ed41040af600ccd789c01923" address="unix:///run/containerd/s/ab44599574dc5511bd235359a2d029e5cb0e9ea5e586c998759217aa7678da3f" protocol=ttrpc version=3 Mar 10 02:11:22.528093 systemd[1]: Started cri-containerd-8b6b165a31977adabd9e7e9d9c0a281c3cd52154ed41040af600ccd789c01923.scope - libcontainer container 8b6b165a31977adabd9e7e9d9c0a281c3cd52154ed41040af600ccd789c01923. 
Mar 10 02:11:22.531024 systemd-networkd[1464]: cali41f6a1fb374: Link UP Mar 10 02:11:22.533532 systemd-networkd[1464]: cali41f6a1fb374: Gained carrier Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.057 [INFO][4435] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cb4xk-eth0 csi-node-driver- calico-system a5c1c4e6-10bd-4317-8c91-c1420d34eabf 786 0 2026-03-10 02:10:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cb4xk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali41f6a1fb374 [] [] }} ContainerID="46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" Namespace="calico-system" Pod="csi-node-driver-cb4xk" WorkloadEndpoint="localhost-k8s-csi--node--driver--cb4xk-" Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.063 [INFO][4435] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" Namespace="calico-system" Pod="csi-node-driver-cb4xk" WorkloadEndpoint="localhost-k8s-csi--node--driver--cb4xk-eth0" Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.251 [INFO][4457] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" HandleID="k8s-pod-network.46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" Workload="localhost-k8s-csi--node--driver--cb4xk-eth0" Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.292 [INFO][4457] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" HandleID="k8s-pod-network.46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" Workload="localhost-k8s-csi--node--driver--cb4xk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011dc00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cb4xk", "timestamp":"2026-03-10 02:11:22.25153834 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002fa000)} Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.292 [INFO][4457] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.292 [INFO][4457] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.292 [INFO][4457] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.307 [INFO][4457] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" host="localhost" Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.355 [INFO][4457] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.436 [INFO][4457] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.453 [INFO][4457] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.463 [INFO][4457] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.463 [INFO][4457] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" host="localhost" Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.472 [INFO][4457] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211 Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.489 [INFO][4457] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" host="localhost" Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.512 [INFO][4457] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" host="localhost" Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.512 [INFO][4457] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" host="localhost" Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.512 [INFO][4457] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:11:22.604661 containerd[1558]: 2026-03-10 02:11:22.512 [INFO][4457] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" HandleID="k8s-pod-network.46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" Workload="localhost-k8s-csi--node--driver--cb4xk-eth0" Mar 10 02:11:22.614008 containerd[1558]: 2026-03-10 02:11:22.522 [INFO][4435] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" Namespace="calico-system" Pod="csi-node-driver-cb4xk" WorkloadEndpoint="localhost-k8s-csi--node--driver--cb4xk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cb4xk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5c1c4e6-10bd-4317-8c91-c1420d34eabf", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 10, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cb4xk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali41f6a1fb374", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:11:22.614008 containerd[1558]: 2026-03-10 02:11:22.522 [INFO][4435] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" Namespace="calico-system" Pod="csi-node-driver-cb4xk" WorkloadEndpoint="localhost-k8s-csi--node--driver--cb4xk-eth0" Mar 10 02:11:22.614008 containerd[1558]: 2026-03-10 02:11:22.522 [INFO][4435] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41f6a1fb374 ContainerID="46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" Namespace="calico-system" Pod="csi-node-driver-cb4xk" WorkloadEndpoint="localhost-k8s-csi--node--driver--cb4xk-eth0" Mar 10 02:11:22.614008 containerd[1558]: 2026-03-10 02:11:22.536 [INFO][4435] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" Namespace="calico-system" Pod="csi-node-driver-cb4xk" WorkloadEndpoint="localhost-k8s-csi--node--driver--cb4xk-eth0" Mar 10 02:11:22.614008 containerd[1558]: 2026-03-10 02:11:22.537 [INFO][4435] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" 
Namespace="calico-system" Pod="csi-node-driver-cb4xk" WorkloadEndpoint="localhost-k8s-csi--node--driver--cb4xk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cb4xk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5c1c4e6-10bd-4317-8c91-c1420d34eabf", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 10, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211", Pod:"csi-node-driver-cb4xk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali41f6a1fb374", MAC:"a6:98:77:c1:56:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:11:22.614008 containerd[1558]: 2026-03-10 02:11:22.571 [INFO][4435] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" Namespace="calico-system" Pod="csi-node-driver-cb4xk" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--cb4xk-eth0" Mar 10 02:11:22.632657 containerd[1558]: time="2026-03-10T02:11:22.632347290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-rqjmq,Uid:a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af,Namespace:calico-system,Attempt:0,}" Mar 10 02:11:22.799790 containerd[1558]: time="2026-03-10T02:11:22.795721305Z" level=info msg="StartContainer for \"8b6b165a31977adabd9e7e9d9c0a281c3cd52154ed41040af600ccd789c01923\" returns successfully" Mar 10 02:11:22.799790 containerd[1558]: time="2026-03-10T02:11:22.797019610Z" level=info msg="connecting to shim 46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211" address="unix:///run/containerd/s/3e9194bedb45a9d47de08fa4043a9225cd33e045c2622b469f7d7271affd4b11" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:11:22.911014 kubelet[2766]: E0310 02:11:22.909463 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:22.958444 systemd[1]: Started cri-containerd-46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211.scope - libcontainer container 46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211. 
Mar 10 02:11:23.062390 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 02:11:23.222029 containerd[1558]: time="2026-03-10T02:11:23.221866479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cb4xk,Uid:a5c1c4e6-10bd-4317-8c91-c1420d34eabf,Namespace:calico-system,Attempt:0,} returns sandbox id \"46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211\"" Mar 10 02:11:23.287306 systemd-networkd[1464]: cali17b1bcc48ce: Link UP Mar 10 02:11:23.292021 systemd-networkd[1464]: cali17b1bcc48ce: Gained carrier Mar 10 02:11:23.354104 kubelet[2766]: I0310 02:11:23.352435 2766 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-z7nwk" podStartSLOduration=88.352415074 podStartE2EDuration="1m28.352415074s" podCreationTimestamp="2026-03-10 02:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:11:22.975171756 +0000 UTC m=+92.775291115" watchObservedRunningTime="2026-03-10 02:11:23.352415074 +0000 UTC m=+93.152534413" Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:22.889 [INFO][4514] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--rqjmq-eth0 goldmane-9f7667bb8- calico-system a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af 1018 0 2026-03-10 02:10:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-rqjmq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali17b1bcc48ce [] [] }} ContainerID="1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" Namespace="calico-system" 
Pod="goldmane-9f7667bb8-rqjmq" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--rqjmq-" Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:22.889 [INFO][4514] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" Namespace="calico-system" Pod="goldmane-9f7667bb8-rqjmq" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--rqjmq-eth0" Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:23.045 [INFO][4575] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" HandleID="k8s-pod-network.1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" Workload="localhost-k8s-goldmane--9f7667bb8--rqjmq-eth0" Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:23.091 [INFO][4575] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" HandleID="k8s-pod-network.1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" Workload="localhost-k8s-goldmane--9f7667bb8--rqjmq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00044e1a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-rqjmq", "timestamp":"2026-03-10 02:11:23.04568601 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001f22c0)} Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:23.092 [INFO][4575] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:23.092 [INFO][4575] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:23.092 [INFO][4575] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:23.109 [INFO][4575] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" host="localhost" Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:23.150 [INFO][4575] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:23.181 [INFO][4575] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:23.187 [INFO][4575] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:23.204 [INFO][4575] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:23.204 [INFO][4575] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" host="localhost" Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:23.213 [INFO][4575] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0 Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:23.234 [INFO][4575] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" host="localhost" Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:23.270 [INFO][4575] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" host="localhost" Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:23.270 [INFO][4575] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" host="localhost" Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:23.270 [INFO][4575] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:11:23.369079 containerd[1558]: 2026-03-10 02:11:23.271 [INFO][4575] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" HandleID="k8s-pod-network.1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" Workload="localhost-k8s-goldmane--9f7667bb8--rqjmq-eth0" Mar 10 02:11:23.370156 containerd[1558]: 2026-03-10 02:11:23.280 [INFO][4514] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" Namespace="calico-system" Pod="goldmane-9f7667bb8-rqjmq" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--rqjmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--rqjmq-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 10, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-rqjmq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali17b1bcc48ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:11:23.370156 containerd[1558]: 2026-03-10 02:11:23.280 [INFO][4514] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" Namespace="calico-system" Pod="goldmane-9f7667bb8-rqjmq" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--rqjmq-eth0" Mar 10 02:11:23.370156 containerd[1558]: 2026-03-10 02:11:23.280 [INFO][4514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17b1bcc48ce ContainerID="1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" Namespace="calico-system" Pod="goldmane-9f7667bb8-rqjmq" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--rqjmq-eth0" Mar 10 02:11:23.370156 containerd[1558]: 2026-03-10 02:11:23.293 [INFO][4514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" Namespace="calico-system" Pod="goldmane-9f7667bb8-rqjmq" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--rqjmq-eth0" Mar 10 02:11:23.370156 containerd[1558]: 2026-03-10 02:11:23.294 [INFO][4514] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" Namespace="calico-system" Pod="goldmane-9f7667bb8-rqjmq" 
WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--rqjmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--rqjmq-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 10, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0", Pod:"goldmane-9f7667bb8-rqjmq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali17b1bcc48ce", MAC:"f6:d7:58:93:1e:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:11:23.370156 containerd[1558]: 2026-03-10 02:11:23.349 [INFO][4514] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" Namespace="calico-system" Pod="goldmane-9f7667bb8-rqjmq" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--rqjmq-eth0" Mar 10 02:11:23.470925 containerd[1558]: time="2026-03-10T02:11:23.470856727Z" level=info msg="connecting to shim 
1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0" address="unix:///run/containerd/s/f729fd4063f8e29812b1e13c07f737fc846f8cb29f331bad79a60cba251dc6a0" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:11:23.558856 systemd[1]: Started cri-containerd-1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0.scope - libcontainer container 1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0. Mar 10 02:11:23.592908 kubelet[2766]: E0310 02:11:23.591504 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:23.606180 containerd[1558]: time="2026-03-10T02:11:23.605904767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86bd949797-bcqfh,Uid:842fa873-9477-4391-bb58-9db26033f987,Namespace:calico-system,Attempt:0,}" Mar 10 02:11:23.645444 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 02:11:23.843912 containerd[1558]: time="2026-03-10T02:11:23.843708953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-rqjmq,Uid:a1ff8ffe-44c3-4eb5-a6c5-13677f36e3af,Namespace:calico-system,Attempt:0,} returns sandbox id \"1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0\"" Mar 10 02:11:23.991574 kubelet[2766]: E0310 02:11:23.991121 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:24.238383 systemd-networkd[1464]: cali41f6a1fb374: Gained IPv6LL Mar 10 02:11:24.488233 systemd-networkd[1464]: cali5fd40e1d66d: Link UP Mar 10 02:11:24.505377 systemd-networkd[1464]: cali5fd40e1d66d: Gained carrier Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:23.767 [INFO][4666] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86bd949797--bcqfh-eth0 calico-apiserver-86bd949797- calico-system 842fa873-9477-4391-bb58-9db26033f987 1015 0 2026-03-10 02:10:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86bd949797 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-86bd949797-bcqfh eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali5fd40e1d66d [] [] }} ContainerID="d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" Namespace="calico-system" Pod="calico-apiserver-86bd949797-bcqfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--86bd949797--bcqfh-" Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:23.767 [INFO][4666] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" Namespace="calico-system" Pod="calico-apiserver-86bd949797-bcqfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--86bd949797--bcqfh-eth0" Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:23.928 [INFO][4687] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" HandleID="k8s-pod-network.d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" Workload="localhost-k8s-calico--apiserver--86bd949797--bcqfh-eth0" Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:24.027 [INFO][4687] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" HandleID="k8s-pod-network.d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" Workload="localhost-k8s-calico--apiserver--86bd949797--bcqfh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0003e6030), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-86bd949797-bcqfh", "timestamp":"2026-03-10 02:11:23.928841732 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00016d600)} Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:24.028 [INFO][4687] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:24.028 [INFO][4687] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:24.040 [INFO][4687] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:24.080 [INFO][4687] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" host="localhost" Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:24.230 [INFO][4687] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:24.356 [INFO][4687] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:24.374 [INFO][4687] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:24.393 [INFO][4687] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:24.393 [INFO][4687] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" host="localhost" Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:24.404 [INFO][4687] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14 Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:24.439 [INFO][4687] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" host="localhost" Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:24.469 [INFO][4687] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" host="localhost" Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:24.469 [INFO][4687] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" host="localhost" Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:24.469 [INFO][4687] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 02:11:24.547811 containerd[1558]: 2026-03-10 02:11:24.469 [INFO][4687] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" HandleID="k8s-pod-network.d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" Workload="localhost-k8s-calico--apiserver--86bd949797--bcqfh-eth0" Mar 10 02:11:24.549305 containerd[1558]: 2026-03-10 02:11:24.479 [INFO][4666] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" Namespace="calico-system" Pod="calico-apiserver-86bd949797-bcqfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--86bd949797--bcqfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86bd949797--bcqfh-eth0", GenerateName:"calico-apiserver-86bd949797-", Namespace:"calico-system", SelfLink:"", UID:"842fa873-9477-4391-bb58-9db26033f987", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 10, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86bd949797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86bd949797-bcqfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5fd40e1d66d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:11:24.549305 containerd[1558]: 2026-03-10 02:11:24.479 [INFO][4666] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" Namespace="calico-system" Pod="calico-apiserver-86bd949797-bcqfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--86bd949797--bcqfh-eth0" Mar 10 02:11:24.549305 containerd[1558]: 2026-03-10 02:11:24.479 [INFO][4666] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5fd40e1d66d ContainerID="d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" Namespace="calico-system" Pod="calico-apiserver-86bd949797-bcqfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--86bd949797--bcqfh-eth0" Mar 10 02:11:24.549305 containerd[1558]: 2026-03-10 02:11:24.489 [INFO][4666] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" Namespace="calico-system" Pod="calico-apiserver-86bd949797-bcqfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--86bd949797--bcqfh-eth0" Mar 10 02:11:24.549305 containerd[1558]: 2026-03-10 02:11:24.496 [INFO][4666] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" Namespace="calico-system" Pod="calico-apiserver-86bd949797-bcqfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--86bd949797--bcqfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86bd949797--bcqfh-eth0", GenerateName:"calico-apiserver-86bd949797-", Namespace:"calico-system", 
SelfLink:"", UID:"842fa873-9477-4391-bb58-9db26033f987", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 10, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86bd949797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14", Pod:"calico-apiserver-86bd949797-bcqfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5fd40e1d66d", MAC:"de:b3:fb:e9:5c:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:11:24.549305 containerd[1558]: 2026-03-10 02:11:24.536 [INFO][4666] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" Namespace="calico-system" Pod="calico-apiserver-86bd949797-bcqfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--86bd949797--bcqfh-eth0" Mar 10 02:11:24.606933 containerd[1558]: time="2026-03-10T02:11:24.606099444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689586c974-wfw9j,Uid:595f5ed2-29fa-4602-8ae6-ba221a4a42bd,Namespace:calico-system,Attempt:0,}" Mar 10 02:11:24.621852 systemd-networkd[1464]: cali17b1bcc48ce: Gained IPv6LL Mar 10 
02:11:24.641618 kubelet[2766]: E0310 02:11:24.638302 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:24.658254 containerd[1558]: time="2026-03-10T02:11:24.657807957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-vz72q,Uid:a2414ddd-640a-46d0-a1b7-587c0cfd947d,Namespace:kube-system,Attempt:0,}" Mar 10 02:11:24.773132 containerd[1558]: time="2026-03-10T02:11:24.771992407Z" level=info msg="connecting to shim d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14" address="unix:///run/containerd/s/ad512321e5650bee88808a67a931ec29e6ba5cd0b7d18f90002548ea171a2998" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:11:24.988823 kubelet[2766]: E0310 02:11:24.985517 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:25.031482 systemd[1]: Started cri-containerd-d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14.scope - libcontainer container d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14. 
Mar 10 02:11:25.108672 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 02:11:25.332702 containerd[1558]: time="2026-03-10T02:11:25.332385992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86bd949797-bcqfh,Uid:842fa873-9477-4391-bb58-9db26033f987,Namespace:calico-system,Attempt:0,} returns sandbox id \"d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14\"" Mar 10 02:11:25.403691 systemd-networkd[1464]: cali172ec3ae0c5: Link UP Mar 10 02:11:25.408170 systemd-networkd[1464]: cali172ec3ae0c5: Gained carrier Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:24.930 [INFO][4714] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--689586c974--wfw9j-eth0 calico-kube-controllers-689586c974- calico-system 595f5ed2-29fa-4602-8ae6-ba221a4a42bd 1020 0 2026-03-10 02:10:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:689586c974 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-689586c974-wfw9j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali172ec3ae0c5 [] [] }} ContainerID="9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" Namespace="calico-system" Pod="calico-kube-controllers-689586c974-wfw9j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--689586c974--wfw9j-" Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:24.935 [INFO][4714] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" Namespace="calico-system" Pod="calico-kube-controllers-689586c974-wfw9j" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--689586c974--wfw9j-eth0" Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:25.100 [INFO][4779] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" HandleID="k8s-pod-network.9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" Workload="localhost-k8s-calico--kube--controllers--689586c974--wfw9j-eth0" Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:25.130 [INFO][4779] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" HandleID="k8s-pod-network.9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" Workload="localhost-k8s-calico--kube--controllers--689586c974--wfw9j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e6d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-689586c974-wfw9j", "timestamp":"2026-03-10 02:11:25.10015665 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003ddce0)} Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:25.130 [INFO][4779] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:25.130 [INFO][4779] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:25.130 [INFO][4779] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:25.156 [INFO][4779] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" host="localhost" Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:25.212 [INFO][4779] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:25.265 [INFO][4779] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:25.279 [INFO][4779] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:25.304 [INFO][4779] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:25.304 [INFO][4779] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" host="localhost" Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:25.314 [INFO][4779] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3 Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:25.342 [INFO][4779] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" host="localhost" Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:25.371 [INFO][4779] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" host="localhost" Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:25.371 [INFO][4779] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" host="localhost" Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:25.371 [INFO][4779] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:11:25.473093 containerd[1558]: 2026-03-10 02:11:25.372 [INFO][4779] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" HandleID="k8s-pod-network.9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" Workload="localhost-k8s-calico--kube--controllers--689586c974--wfw9j-eth0" Mar 10 02:11:25.474087 containerd[1558]: 2026-03-10 02:11:25.380 [INFO][4714] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" Namespace="calico-system" Pod="calico-kube-controllers-689586c974-wfw9j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--689586c974--wfw9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--689586c974--wfw9j-eth0", GenerateName:"calico-kube-controllers-689586c974-", Namespace:"calico-system", SelfLink:"", UID:"595f5ed2-29fa-4602-8ae6-ba221a4a42bd", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 10, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"689586c974", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-689586c974-wfw9j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali172ec3ae0c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:11:25.474087 containerd[1558]: 2026-03-10 02:11:25.380 [INFO][4714] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" Namespace="calico-system" Pod="calico-kube-controllers-689586c974-wfw9j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--689586c974--wfw9j-eth0" Mar 10 02:11:25.474087 containerd[1558]: 2026-03-10 02:11:25.381 [INFO][4714] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali172ec3ae0c5 ContainerID="9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" Namespace="calico-system" Pod="calico-kube-controllers-689586c974-wfw9j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--689586c974--wfw9j-eth0" Mar 10 02:11:25.474087 containerd[1558]: 2026-03-10 02:11:25.409 [INFO][4714] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" Namespace="calico-system" Pod="calico-kube-controllers-689586c974-wfw9j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--689586c974--wfw9j-eth0" Mar 10 02:11:25.474087 containerd[1558]: 
2026-03-10 02:11:25.409 [INFO][4714] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" Namespace="calico-system" Pod="calico-kube-controllers-689586c974-wfw9j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--689586c974--wfw9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--689586c974--wfw9j-eth0", GenerateName:"calico-kube-controllers-689586c974-", Namespace:"calico-system", SelfLink:"", UID:"595f5ed2-29fa-4602-8ae6-ba221a4a42bd", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 10, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"689586c974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3", Pod:"calico-kube-controllers-689586c974-wfw9j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali172ec3ae0c5", MAC:"56:5a:ad:b8:bd:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:11:25.474087 containerd[1558]: 
2026-03-10 02:11:25.457 [INFO][4714] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" Namespace="calico-system" Pod="calico-kube-controllers-689586c974-wfw9j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--689586c974--wfw9j-eth0" Mar 10 02:11:25.586637 kubelet[2766]: E0310 02:11:25.586112 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:25.605406 systemd-networkd[1464]: calid5e1e26585e: Link UP Mar 10 02:11:25.609178 systemd-networkd[1464]: calid5e1e26585e: Gained carrier Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.025 [INFO][4721] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--vz72q-eth0 coredns-7d764666f9- kube-system a2414ddd-640a-46d0-a1b7-587c0cfd947d 1019 0 2026-03-10 02:09:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-vz72q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid5e1e26585e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" Namespace="kube-system" Pod="coredns-7d764666f9-vz72q" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--vz72q-" Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.026 [INFO][4721] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" Namespace="kube-system" Pod="coredns-7d764666f9-vz72q" 
WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--vz72q-eth0" Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.244 [INFO][4796] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" HandleID="k8s-pod-network.b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" Workload="localhost-k8s-coredns--7d764666f9--vz72q-eth0" Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.285 [INFO][4796] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" HandleID="k8s-pod-network.b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" Workload="localhost-k8s-coredns--7d764666f9--vz72q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138680), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-vz72q", "timestamp":"2026-03-10 02:11:25.244863529 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000380dc0)} Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.285 [INFO][4796] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.377 [INFO][4796] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.377 [INFO][4796] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.388 [INFO][4796] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" host="localhost" Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.412 [INFO][4796] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.458 [INFO][4796] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.469 [INFO][4796] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.491 [INFO][4796] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.492 [INFO][4796] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" host="localhost" Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.502 [INFO][4796] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5 Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.531 [INFO][4796] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" host="localhost" Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.571 [INFO][4796] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" host="localhost" Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.572 [INFO][4796] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" host="localhost" Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.572 [INFO][4796] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:11:25.725080 containerd[1558]: 2026-03-10 02:11:25.572 [INFO][4796] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" HandleID="k8s-pod-network.b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" Workload="localhost-k8s-coredns--7d764666f9--vz72q-eth0" Mar 10 02:11:25.726792 containerd[1558]: 2026-03-10 02:11:25.582 [INFO][4721] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" Namespace="kube-system" Pod="coredns-7d764666f9-vz72q" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--vz72q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--vz72q-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"a2414ddd-640a-46d0-a1b7-587c0cfd947d", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 9, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-vz72q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid5e1e26585e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:11:25.726792 containerd[1558]: 2026-03-10 02:11:25.582 [INFO][4721] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" Namespace="kube-system" Pod="coredns-7d764666f9-vz72q" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--vz72q-eth0" Mar 10 02:11:25.726792 containerd[1558]: 2026-03-10 02:11:25.582 [INFO][4721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5e1e26585e ContainerID="b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" Namespace="kube-system" Pod="coredns-7d764666f9-vz72q" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--vz72q-eth0" Mar 10 
02:11:25.726792 containerd[1558]: 2026-03-10 02:11:25.605 [INFO][4721] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" Namespace="kube-system" Pod="coredns-7d764666f9-vz72q" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--vz72q-eth0" Mar 10 02:11:25.726792 containerd[1558]: 2026-03-10 02:11:25.606 [INFO][4721] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" Namespace="kube-system" Pod="coredns-7d764666f9-vz72q" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--vz72q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--vz72q-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"a2414ddd-640a-46d0-a1b7-587c0cfd947d", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 9, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5", Pod:"coredns-7d764666f9-vz72q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid5e1e26585e", 
MAC:"b2:58:f0:4c:09:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:11:25.726792 containerd[1558]: 2026-03-10 02:11:25.692 [INFO][4721] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" Namespace="kube-system" Pod="coredns-7d764666f9-vz72q" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--vz72q-eth0" Mar 10 02:11:25.784143 containerd[1558]: time="2026-03-10T02:11:25.784058261Z" level=info msg="connecting to shim 9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3" address="unix:///run/containerd/s/2cf8ab8555432a106b26405a4619b35ec6f89b43f80ab3a40ab243dd1808061c" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:11:25.937829 containerd[1558]: time="2026-03-10T02:11:25.928117635Z" level=info msg="connecting to shim b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5" address="unix:///run/containerd/s/69198afda834e94a51abb6a95657c2784f7034f3b3437495f0a0c4c2c1695bce" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:11:25.943549 systemd[1]: Started cri-containerd-9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3.scope - libcontainer container 
9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3. Mar 10 02:11:25.997816 kubelet[2766]: E0310 02:11:25.994885 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:26.043639 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 02:11:26.097152 systemd-networkd[1464]: cali5fd40e1d66d: Gained IPv6LL Mar 10 02:11:26.125338 systemd[1]: Started cri-containerd-b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5.scope - libcontainer container b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5. Mar 10 02:11:26.225866 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 02:11:26.269838 containerd[1558]: time="2026-03-10T02:11:26.269405012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689586c974-wfw9j,Uid:595f5ed2-29fa-4602-8ae6-ba221a4a42bd,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3\"" Mar 10 02:11:26.437842 containerd[1558]: time="2026-03-10T02:11:26.435252983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-vz72q,Uid:a2414ddd-640a-46d0-a1b7-587c0cfd947d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5\"" Mar 10 02:11:26.439025 kubelet[2766]: E0310 02:11:26.438749 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:26.475845 containerd[1558]: time="2026-03-10T02:11:26.472880812Z" level=info msg="CreateContainer within sandbox \"b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5\" for 
container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 02:11:26.543123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount963369556.mount: Deactivated successfully. Mar 10 02:11:26.566136 containerd[1558]: time="2026-03-10T02:11:26.560019904Z" level=info msg="Container 77556e1834c298e003ece7cd33cf4cd811df32de2abb37fe9d73ea35d3344e23: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:11:26.597825 containerd[1558]: time="2026-03-10T02:11:26.597775877Z" level=info msg="CreateContainer within sandbox \"b3308d17d24c7b35408c9c9429b1bf92ac5b2e5550c294eb1a9f4aea764d53b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77556e1834c298e003ece7cd33cf4cd811df32de2abb37fe9d73ea35d3344e23\"" Mar 10 02:11:26.613463 containerd[1558]: time="2026-03-10T02:11:26.605324676Z" level=info msg="StartContainer for \"77556e1834c298e003ece7cd33cf4cd811df32de2abb37fe9d73ea35d3344e23\"" Mar 10 02:11:26.613463 containerd[1558]: time="2026-03-10T02:11:26.606780207Z" level=info msg="connecting to shim 77556e1834c298e003ece7cd33cf4cd811df32de2abb37fe9d73ea35d3344e23" address="unix:///run/containerd/s/69198afda834e94a51abb6a95657c2784f7034f3b3437495f0a0c4c2c1695bce" protocol=ttrpc version=3 Mar 10 02:11:26.698027 systemd[1]: Started cri-containerd-77556e1834c298e003ece7cd33cf4cd811df32de2abb37fe9d73ea35d3344e23.scope - libcontainer container 77556e1834c298e003ece7cd33cf4cd811df32de2abb37fe9d73ea35d3344e23. 
Mar 10 02:11:26.732885 systemd-networkd[1464]: calid5e1e26585e: Gained IPv6LL Mar 10 02:11:26.962480 containerd[1558]: time="2026-03-10T02:11:26.960156790Z" level=info msg="StartContainer for \"77556e1834c298e003ece7cd33cf4cd811df32de2abb37fe9d73ea35d3344e23\" returns successfully" Mar 10 02:11:27.030839 kubelet[2766]: E0310 02:11:27.024042 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:27.435814 systemd-networkd[1464]: cali172ec3ae0c5: Gained IPv6LL Mar 10 02:11:28.033243 kubelet[2766]: E0310 02:11:28.031834 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:28.112571 kubelet[2766]: I0310 02:11:28.109586 2766 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-vz72q" podStartSLOduration=93.108931262 podStartE2EDuration="1m33.108931262s" podCreationTimestamp="2026-03-10 02:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:11:27.107941075 +0000 UTC m=+96.908060364" watchObservedRunningTime="2026-03-10 02:11:28.108931262 +0000 UTC m=+97.909050582" Mar 10 02:11:29.035509 kubelet[2766]: E0310 02:11:29.035468 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:29.586884 kubelet[2766]: E0310 02:11:29.585727 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:30.060098 kubelet[2766]: E0310 02:11:30.059616 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:30.194721 containerd[1558]: time="2026-03-10T02:11:30.194572119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:30.200055 containerd[1558]: time="2026-03-10T02:11:30.199917534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 10 02:11:30.200195 containerd[1558]: time="2026-03-10T02:11:30.200089363Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:30.212862 containerd[1558]: time="2026-03-10T02:11:30.211032241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:30.216878 containerd[1558]: time="2026-03-10T02:11:30.214222522Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 9.392024771s" Mar 10 02:11:30.218830 containerd[1558]: time="2026-03-10T02:11:30.217179291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 10 02:11:30.221825 containerd[1558]: time="2026-03-10T02:11:30.221778664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 10 02:11:30.249004 containerd[1558]: 
time="2026-03-10T02:11:30.245522955Z" level=info msg="CreateContainer within sandbox \"5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 10 02:11:30.302285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1724524354.mount: Deactivated successfully. Mar 10 02:11:30.315380 containerd[1558]: time="2026-03-10T02:11:30.312009741Z" level=info msg="Container 7e247c6e1a0c7db85cd7dcd5d88143e1f3f3739284169f0c4ab4697574e13ddc: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:11:30.361140 containerd[1558]: time="2026-03-10T02:11:30.361013381Z" level=info msg="CreateContainer within sandbox \"5d18a3e5e681d3b9218c603a61fc341c5896443bd164415b20786fa87c0531cc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7e247c6e1a0c7db85cd7dcd5d88143e1f3f3739284169f0c4ab4697574e13ddc\"" Mar 10 02:11:30.366030 containerd[1558]: time="2026-03-10T02:11:30.365592352Z" level=info msg="StartContainer for \"7e247c6e1a0c7db85cd7dcd5d88143e1f3f3739284169f0c4ab4697574e13ddc\"" Mar 10 02:11:30.367357 containerd[1558]: time="2026-03-10T02:11:30.367316584Z" level=info msg="connecting to shim 7e247c6e1a0c7db85cd7dcd5d88143e1f3f3739284169f0c4ab4697574e13ddc" address="unix:///run/containerd/s/e005bdb7ce846c972a5f373d6fdd25bdefc96ea194ea2d8e1d0c9ab2f8ebb7b9" protocol=ttrpc version=3 Mar 10 02:11:30.481569 systemd[1]: Started cri-containerd-7e247c6e1a0c7db85cd7dcd5d88143e1f3f3739284169f0c4ab4697574e13ddc.scope - libcontainer container 7e247c6e1a0c7db85cd7dcd5d88143e1f3f3739284169f0c4ab4697574e13ddc. 
Mar 10 02:11:30.728061 containerd[1558]: time="2026-03-10T02:11:30.726623645Z" level=info msg="StartContainer for \"7e247c6e1a0c7db85cd7dcd5d88143e1f3f3739284169f0c4ab4697574e13ddc\" returns successfully" Mar 10 02:11:31.200792 kubelet[2766]: I0310 02:11:31.196543 2766 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-86bd949797-twmg8" podStartSLOduration=72.795603279 podStartE2EDuration="1m22.196523811s" podCreationTimestamp="2026-03-10 02:10:09 +0000 UTC" firstStartedPulling="2026-03-10 02:11:20.820339749 +0000 UTC m=+90.620459048" lastFinishedPulling="2026-03-10 02:11:30.221260291 +0000 UTC m=+100.021379580" observedRunningTime="2026-03-10 02:11:31.190487596 +0000 UTC m=+100.990606886" watchObservedRunningTime="2026-03-10 02:11:31.196523811 +0000 UTC m=+100.996643100" Mar 10 02:11:31.876098 containerd[1558]: time="2026-03-10T02:11:31.875219867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:31.882339 containerd[1558]: time="2026-03-10T02:11:31.882298410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 10 02:11:31.884726 containerd[1558]: time="2026-03-10T02:11:31.884644884Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:31.897824 containerd[1558]: time="2026-03-10T02:11:31.897769072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:31.902306 containerd[1558]: time="2026-03-10T02:11:31.902158436Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id 
\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.680335171s" Mar 10 02:11:31.902306 containerd[1558]: time="2026-03-10T02:11:31.902203590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 10 02:11:31.913147 containerd[1558]: time="2026-03-10T02:11:31.911851323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 10 02:11:31.933363 containerd[1558]: time="2026-03-10T02:11:31.933271695Z" level=info msg="CreateContainer within sandbox \"46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 10 02:11:31.983047 containerd[1558]: time="2026-03-10T02:11:31.982176420Z" level=info msg="Container 252e9cf2cd3e2ef32d363b059ec259177d9d57327084319a2cd33f7531ebe620: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:11:32.016705 containerd[1558]: time="2026-03-10T02:11:32.016515742Z" level=info msg="CreateContainer within sandbox \"46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"252e9cf2cd3e2ef32d363b059ec259177d9d57327084319a2cd33f7531ebe620\"" Mar 10 02:11:32.019031 containerd[1558]: time="2026-03-10T02:11:32.018618076Z" level=info msg="StartContainer for \"252e9cf2cd3e2ef32d363b059ec259177d9d57327084319a2cd33f7531ebe620\"" Mar 10 02:11:32.028166 containerd[1558]: time="2026-03-10T02:11:32.028043788Z" level=info msg="connecting to shim 252e9cf2cd3e2ef32d363b059ec259177d9d57327084319a2cd33f7531ebe620" address="unix:///run/containerd/s/3e9194bedb45a9d47de08fa4043a9225cd33e045c2622b469f7d7271affd4b11" protocol=ttrpc version=3 Mar 10 02:11:32.133556 
systemd[1]: Started cri-containerd-252e9cf2cd3e2ef32d363b059ec259177d9d57327084319a2cd33f7531ebe620.scope - libcontainer container 252e9cf2cd3e2ef32d363b059ec259177d9d57327084319a2cd33f7531ebe620. Mar 10 02:11:32.474402 containerd[1558]: time="2026-03-10T02:11:32.474071280Z" level=info msg="StartContainer for \"252e9cf2cd3e2ef32d363b059ec259177d9d57327084319a2cd33f7531ebe620\" returns successfully" Mar 10 02:11:33.587580 kubelet[2766]: E0310 02:11:33.587022 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 02:11:36.139047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2569978247.mount: Deactivated successfully. Mar 10 02:11:38.347705 containerd[1558]: time="2026-03-10T02:11:38.347169056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:38.359697 containerd[1558]: time="2026-03-10T02:11:38.357427642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 10 02:11:38.362452 containerd[1558]: time="2026-03-10T02:11:38.362127378Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:38.374207 containerd[1558]: time="2026-03-10T02:11:38.373809369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:38.374575 containerd[1558]: time="2026-03-10T02:11:38.374442138Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag 
\"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 6.462535503s" Mar 10 02:11:38.374575 containerd[1558]: time="2026-03-10T02:11:38.374478845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 10 02:11:38.384319 containerd[1558]: time="2026-03-10T02:11:38.383866037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 10 02:11:38.411318 containerd[1558]: time="2026-03-10T02:11:38.411035535Z" level=info msg="CreateContainer within sandbox \"1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 10 02:11:38.510823 containerd[1558]: time="2026-03-10T02:11:38.506097982Z" level=info msg="Container a422619d6da89b0ac14af04ce52415e3683e6aa55914dab51425e9588d38a3fa: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:11:38.511011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1276026509.mount: Deactivated successfully. 
Mar 10 02:11:38.571077 containerd[1558]: time="2026-03-10T02:11:38.570878180Z" level=info msg="CreateContainer within sandbox \"1678ca65eba387252013a11a9a6aca7343f328376239b86093384f1e723757a0\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a422619d6da89b0ac14af04ce52415e3683e6aa55914dab51425e9588d38a3fa\"" Mar 10 02:11:38.578761 containerd[1558]: time="2026-03-10T02:11:38.575219837Z" level=info msg="StartContainer for \"a422619d6da89b0ac14af04ce52415e3683e6aa55914dab51425e9588d38a3fa\"" Mar 10 02:11:38.578761 containerd[1558]: time="2026-03-10T02:11:38.578086302Z" level=info msg="connecting to shim a422619d6da89b0ac14af04ce52415e3683e6aa55914dab51425e9588d38a3fa" address="unix:///run/containerd/s/f729fd4063f8e29812b1e13c07f737fc846f8cb29f331bad79a60cba251dc6a0" protocol=ttrpc version=3 Mar 10 02:11:38.675603 systemd[1]: Started cri-containerd-a422619d6da89b0ac14af04ce52415e3683e6aa55914dab51425e9588d38a3fa.scope - libcontainer container a422619d6da89b0ac14af04ce52415e3683e6aa55914dab51425e9588d38a3fa. 
Mar 10 02:11:38.738843 containerd[1558]: time="2026-03-10T02:11:38.737790767Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:11:38.754019 containerd[1558]: time="2026-03-10T02:11:38.753981783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 10 02:11:38.766895 containerd[1558]: time="2026-03-10T02:11:38.764593750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 380.698339ms" Mar 10 02:11:38.767249 containerd[1558]: time="2026-03-10T02:11:38.767078367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 10 02:11:38.774868 containerd[1558]: time="2026-03-10T02:11:38.774842221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 10 02:11:38.803478 containerd[1558]: time="2026-03-10T02:11:38.800261419Z" level=info msg="CreateContainer within sandbox \"d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 10 02:11:38.890303 containerd[1558]: time="2026-03-10T02:11:38.890186610Z" level=info msg="StartContainer for \"a422619d6da89b0ac14af04ce52415e3683e6aa55914dab51425e9588d38a3fa\" returns successfully" Mar 10 02:11:38.890651 containerd[1558]: time="2026-03-10T02:11:38.890534556Z" level=info msg="Container 8273f2d31a80412909da238efd173142cd27bc7fff631a2e1fd2922d7b901e2c: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:11:38.918205 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1058744173.mount: Deactivated successfully. Mar 10 02:11:38.940126 containerd[1558]: time="2026-03-10T02:11:38.939800430Z" level=info msg="CreateContainer within sandbox \"d8877d79b38ba60b29001a7e4f7ed6e68c76dad102428e7f4ad49119deee0a14\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8273f2d31a80412909da238efd173142cd27bc7fff631a2e1fd2922d7b901e2c\"" Mar 10 02:11:38.945451 containerd[1558]: time="2026-03-10T02:11:38.945073927Z" level=info msg="StartContainer for \"8273f2d31a80412909da238efd173142cd27bc7fff631a2e1fd2922d7b901e2c\"" Mar 10 02:11:38.947513 containerd[1558]: time="2026-03-10T02:11:38.947299753Z" level=info msg="connecting to shim 8273f2d31a80412909da238efd173142cd27bc7fff631a2e1fd2922d7b901e2c" address="unix:///run/containerd/s/ad512321e5650bee88808a67a931ec29e6ba5cd0b7d18f90002548ea171a2998" protocol=ttrpc version=3 Mar 10 02:11:39.007442 systemd[1]: Started cri-containerd-8273f2d31a80412909da238efd173142cd27bc7fff631a2e1fd2922d7b901e2c.scope - libcontainer container 8273f2d31a80412909da238efd173142cd27bc7fff631a2e1fd2922d7b901e2c. 
Mar 10 02:11:39.303157 containerd[1558]: time="2026-03-10T02:11:39.289341158Z" level=info msg="StartContainer for \"8273f2d31a80412909da238efd173142cd27bc7fff631a2e1fd2922d7b901e2c\" returns successfully"
Mar 10 02:11:39.340284 kubelet[2766]: I0310 02:11:39.338175 2766 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-rqjmq" podStartSLOduration=74.804857199 podStartE2EDuration="1m29.338157935s" podCreationTimestamp="2026-03-10 02:10:10 +0000 UTC" firstStartedPulling="2026-03-10 02:11:23.848565332 +0000 UTC m=+93.648684621" lastFinishedPulling="2026-03-10 02:11:38.381866068 +0000 UTC m=+108.181985357" observedRunningTime="2026-03-10 02:11:39.333142074 +0000 UTC m=+109.133261363" watchObservedRunningTime="2026-03-10 02:11:39.338157935 +0000 UTC m=+109.138277244"
Mar 10 02:11:40.352836 kubelet[2766]: I0310 02:11:40.351662 2766 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-86bd949797-bcqfh" podStartSLOduration=77.922799085 podStartE2EDuration="1m31.351644356s" podCreationTimestamp="2026-03-10 02:10:09 +0000 UTC" firstStartedPulling="2026-03-10 02:11:25.339523482 +0000 UTC m=+95.139642771" lastFinishedPulling="2026-03-10 02:11:38.768368753 +0000 UTC m=+108.568488042" observedRunningTime="2026-03-10 02:11:40.350131416 +0000 UTC m=+110.150250725" watchObservedRunningTime="2026-03-10 02:11:40.351644356 +0000 UTC m=+110.151763666"
Mar 10 02:11:42.289504 kubelet[2766]: I0310 02:11:42.289355 2766 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 10 02:11:46.548734 containerd[1558]: time="2026-03-10T02:11:46.548128140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:11:46.549396 containerd[1558]: time="2026-03-10T02:11:46.549068468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Mar 10 02:11:46.552998 containerd[1558]: time="2026-03-10T02:11:46.552472439Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:11:46.560136 containerd[1558]: time="2026-03-10T02:11:46.560077306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:11:46.562652 containerd[1558]: time="2026-03-10T02:11:46.562460944Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 7.784908877s"
Mar 10 02:11:46.562652 containerd[1558]: time="2026-03-10T02:11:46.562493394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Mar 10 02:11:46.577886 containerd[1558]: time="2026-03-10T02:11:46.575436288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 10 02:11:46.664203 containerd[1558]: time="2026-03-10T02:11:46.664074252Z" level=info msg="CreateContainer within sandbox \"9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 10 02:11:46.720735 containerd[1558]: time="2026-03-10T02:11:46.717067753Z" level=info msg="Container 524accf606f946cb1a8d305e28087aea1feedb959df3ab5dfabe86f114a5fef0: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:11:46.749122 containerd[1558]: time="2026-03-10T02:11:46.748366340Z" level=info msg="CreateContainer within sandbox \"9ff2b4c2e541de75a5ad7c2a9eed24c38e1715fef53607eb93680148f3f6c6b3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"524accf606f946cb1a8d305e28087aea1feedb959df3ab5dfabe86f114a5fef0\""
Mar 10 02:11:46.751774 containerd[1558]: time="2026-03-10T02:11:46.751350544Z" level=info msg="StartContainer for \"524accf606f946cb1a8d305e28087aea1feedb959df3ab5dfabe86f114a5fef0\""
Mar 10 02:11:46.762397 containerd[1558]: time="2026-03-10T02:11:46.753478511Z" level=info msg="connecting to shim 524accf606f946cb1a8d305e28087aea1feedb959df3ab5dfabe86f114a5fef0" address="unix:///run/containerd/s/2cf8ab8555432a106b26405a4619b35ec6f89b43f80ab3a40ab243dd1808061c" protocol=ttrpc version=3
Mar 10 02:11:46.932418 systemd[1]: Started cri-containerd-524accf606f946cb1a8d305e28087aea1feedb959df3ab5dfabe86f114a5fef0.scope - libcontainer container 524accf606f946cb1a8d305e28087aea1feedb959df3ab5dfabe86f114a5fef0.
Mar 10 02:11:47.145019 containerd[1558]: time="2026-03-10T02:11:47.144458942Z" level=info msg="StartContainer for \"524accf606f946cb1a8d305e28087aea1feedb959df3ab5dfabe86f114a5fef0\" returns successfully"
Mar 10 02:11:47.435806 kubelet[2766]: I0310 02:11:47.434264 2766 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-689586c974-wfw9j" podStartSLOduration=76.154243918 podStartE2EDuration="1m36.434071784s" podCreationTimestamp="2026-03-10 02:10:11 +0000 UTC" firstStartedPulling="2026-03-10 02:11:26.288340157 +0000 UTC m=+96.088459447" lastFinishedPulling="2026-03-10 02:11:46.568168023 +0000 UTC m=+116.368287313" observedRunningTime="2026-03-10 02:11:47.425786838 +0000 UTC m=+117.225906137" watchObservedRunningTime="2026-03-10 02:11:47.434071784 +0000 UTC m=+117.234191083"
Mar 10 02:11:48.357823 containerd[1558]: time="2026-03-10T02:11:48.356766289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:11:48.358464 containerd[1558]: time="2026-03-10T02:11:48.357992001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Mar 10 02:11:48.363039 containerd[1558]: time="2026-03-10T02:11:48.362231380Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:11:48.373455 containerd[1558]: time="2026-03-10T02:11:48.370769455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:11:48.373455 containerd[1558]: time="2026-03-10T02:11:48.372063373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.7965866s"
Mar 10 02:11:48.373455 containerd[1558]: time="2026-03-10T02:11:48.372105972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 10 02:11:48.393685 containerd[1558]: time="2026-03-10T02:11:48.393638463Z" level=info msg="CreateContainer within sandbox \"46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 10 02:11:48.438022 containerd[1558]: time="2026-03-10T02:11:48.437876881Z" level=info msg="Container 951a8e924e1803907b3761202ce806085b2982302dbf52e024886d04515c7edf: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:11:48.508293 containerd[1558]: time="2026-03-10T02:11:48.508146630Z" level=info msg="CreateContainer within sandbox \"46efdc83618d3a476cf3e4b301f84e69a777553eb448301f5c41be74228cf211\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"951a8e924e1803907b3761202ce806085b2982302dbf52e024886d04515c7edf\""
Mar 10 02:11:48.516492 containerd[1558]: time="2026-03-10T02:11:48.516346124Z" level=info msg="StartContainer for \"951a8e924e1803907b3761202ce806085b2982302dbf52e024886d04515c7edf\""
Mar 10 02:11:48.527422 containerd[1558]: time="2026-03-10T02:11:48.524794211Z" level=info msg="connecting to shim 951a8e924e1803907b3761202ce806085b2982302dbf52e024886d04515c7edf" address="unix:///run/containerd/s/3e9194bedb45a9d47de08fa4043a9225cd33e045c2622b469f7d7271affd4b11" protocol=ttrpc version=3
Mar 10 02:11:48.605780 systemd[1]: Started cri-containerd-951a8e924e1803907b3761202ce806085b2982302dbf52e024886d04515c7edf.scope - libcontainer container 951a8e924e1803907b3761202ce806085b2982302dbf52e024886d04515c7edf.
Mar 10 02:11:48.935016 containerd[1558]: time="2026-03-10T02:11:48.934796188Z" level=info msg="StartContainer for \"951a8e924e1803907b3761202ce806085b2982302dbf52e024886d04515c7edf\" returns successfully"
Mar 10 02:11:49.963827 kubelet[2766]: I0310 02:11:49.963207 2766 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 10 02:11:49.967519 kubelet[2766]: I0310 02:11:49.967152 2766 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 10 02:12:10.679610 kubelet[2766]: I0310 02:12:10.677910 2766 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-cb4xk" podStartSLOduration=94.532225617 podStartE2EDuration="1m59.67789141s" podCreationTimestamp="2026-03-10 02:10:11 +0000 UTC" firstStartedPulling="2026-03-10 02:11:23.23039029 +0000 UTC m=+93.030509580" lastFinishedPulling="2026-03-10 02:11:48.376056083 +0000 UTC m=+118.176175373" observedRunningTime="2026-03-10 02:11:49.491310576 +0000 UTC m=+119.291429895" watchObservedRunningTime="2026-03-10 02:12:10.67789141 +0000 UTC m=+140.478010739"
Mar 10 02:12:30.600449 kubelet[2766]: E0310 02:12:30.600247 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:12:32.590014 kubelet[2766]: E0310 02:12:32.589859 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:12:33.592643 kubelet[2766]: E0310 02:12:33.590215 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:12:36.792806 systemd[1]: Started sshd@7-10.0.0.112:22-10.0.0.1:59594.service - OpenSSH per-connection server daemon (10.0.0.1:59594).
Mar 10 02:12:37.247460 sshd[5571]: Accepted publickey for core from 10.0.0.1 port 59594 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:12:37.261509 sshd-session[5571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:12:37.305523 systemd-logind[1539]: New session 8 of user core.
Mar 10 02:12:37.315251 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 10 02:12:38.508648 sshd[5587]: Connection closed by 10.0.0.1 port 59594
Mar 10 02:12:38.509270 sshd-session[5571]: pam_unix(sshd:session): session closed for user core
Mar 10 02:12:38.524360 systemd[1]: sshd@7-10.0.0.112:22-10.0.0.1:59594.service: Deactivated successfully.
Mar 10 02:12:38.528174 systemd[1]: session-8.scope: Deactivated successfully.
Mar 10 02:12:38.535747 systemd-logind[1539]: Session 8 logged out. Waiting for processes to exit.
Mar 10 02:12:38.542707 systemd-logind[1539]: Removed session 8.
Mar 10 02:12:38.592367 kubelet[2766]: E0310 02:12:38.592289 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:12:38.595460 kubelet[2766]: E0310 02:12:38.595346 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:12:40.601692 kubelet[2766]: E0310 02:12:40.598266 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:12:43.541478 systemd[1]: Started sshd@8-10.0.0.112:22-10.0.0.1:60654.service - OpenSSH per-connection server daemon (10.0.0.1:60654).
Mar 10 02:12:43.698874 sshd[5664]: Accepted publickey for core from 10.0.0.1 port 60654 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:12:43.703564 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:12:43.725136 systemd-logind[1539]: New session 9 of user core.
Mar 10 02:12:43.737841 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 10 02:12:44.216247 sshd[5667]: Connection closed by 10.0.0.1 port 60654
Mar 10 02:12:44.217156 sshd-session[5664]: pam_unix(sshd:session): session closed for user core
Mar 10 02:12:44.227777 systemd[1]: sshd@8-10.0.0.112:22-10.0.0.1:60654.service: Deactivated successfully.
Mar 10 02:12:44.233400 systemd[1]: session-9.scope: Deactivated successfully.
Mar 10 02:12:44.237498 systemd-logind[1539]: Session 9 logged out. Waiting for processes to exit.
Mar 10 02:12:44.248091 systemd-logind[1539]: Removed session 9.
Mar 10 02:12:49.238338 systemd[1]: Started sshd@9-10.0.0.112:22-10.0.0.1:60664.service - OpenSSH per-connection server daemon (10.0.0.1:60664).
Mar 10 02:12:49.333966 sshd[5734]: Accepted publickey for core from 10.0.0.1 port 60664 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:12:49.336341 sshd-session[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:12:49.349029 systemd-logind[1539]: New session 10 of user core.
Mar 10 02:12:49.362879 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 10 02:12:49.607076 sshd[5737]: Connection closed by 10.0.0.1 port 60664
Mar 10 02:12:49.609256 sshd-session[5734]: pam_unix(sshd:session): session closed for user core
Mar 10 02:12:49.618218 systemd[1]: sshd@9-10.0.0.112:22-10.0.0.1:60664.service: Deactivated successfully.
Mar 10 02:12:49.622622 systemd[1]: session-10.scope: Deactivated successfully.
Mar 10 02:12:49.624689 systemd-logind[1539]: Session 10 logged out. Waiting for processes to exit.
Mar 10 02:12:49.632003 systemd-logind[1539]: Removed session 10.
Mar 10 02:12:54.669249 systemd[1]: Started sshd@10-10.0.0.112:22-10.0.0.1:60636.service - OpenSSH per-connection server daemon (10.0.0.1:60636).
Mar 10 02:12:54.916516 sshd[5758]: Accepted publickey for core from 10.0.0.1 port 60636 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:12:54.922145 sshd-session[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:12:54.937118 systemd-logind[1539]: New session 11 of user core.
Mar 10 02:12:54.947882 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 10 02:12:55.358745 sshd[5761]: Connection closed by 10.0.0.1 port 60636
Mar 10 02:12:55.362105 sshd-session[5758]: pam_unix(sshd:session): session closed for user core
Mar 10 02:12:55.381372 systemd[1]: sshd@10-10.0.0.112:22-10.0.0.1:60636.service: Deactivated successfully.
Mar 10 02:12:55.388276 systemd[1]: session-11.scope: Deactivated successfully.
Mar 10 02:12:55.395106 systemd-logind[1539]: Session 11 logged out. Waiting for processes to exit.
Mar 10 02:12:55.404077 systemd-logind[1539]: Removed session 11.
Mar 10 02:12:57.588221 kubelet[2766]: E0310 02:12:57.588067 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:13:00.411538 systemd[1]: Started sshd@11-10.0.0.112:22-10.0.0.1:40674.service - OpenSSH per-connection server daemon (10.0.0.1:40674).
Mar 10 02:13:00.579904 sshd[5778]: Accepted publickey for core from 10.0.0.1 port 40674 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:13:00.582344 sshd-session[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:13:00.630145 systemd-logind[1539]: New session 12 of user core.
Mar 10 02:13:00.646285 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 10 02:13:01.168754 sshd[5781]: Connection closed by 10.0.0.1 port 40674
Mar 10 02:13:01.169910 sshd-session[5778]: pam_unix(sshd:session): session closed for user core
Mar 10 02:13:01.201410 systemd[1]: sshd@11-10.0.0.112:22-10.0.0.1:40674.service: Deactivated successfully.
Mar 10 02:13:01.210555 systemd[1]: session-12.scope: Deactivated successfully.
Mar 10 02:13:01.225201 systemd-logind[1539]: Session 12 logged out. Waiting for processes to exit.
Mar 10 02:13:01.236063 systemd-logind[1539]: Removed session 12.
Mar 10 02:13:06.224737 systemd[1]: Started sshd@12-10.0.0.112:22-10.0.0.1:40688.service - OpenSSH per-connection server daemon (10.0.0.1:40688).
Mar 10 02:13:06.401747 sshd[5795]: Accepted publickey for core from 10.0.0.1 port 40688 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:13:06.409628 sshd-session[5795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:13:06.425148 systemd-logind[1539]: New session 13 of user core.
Mar 10 02:13:06.449313 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 10 02:13:07.006016 sshd[5798]: Connection closed by 10.0.0.1 port 40688
Mar 10 02:13:07.006293 sshd-session[5795]: pam_unix(sshd:session): session closed for user core
Mar 10 02:13:07.035850 systemd[1]: sshd@12-10.0.0.112:22-10.0.0.1:40688.service: Deactivated successfully.
Mar 10 02:13:07.058702 systemd[1]: session-13.scope: Deactivated successfully.
Mar 10 02:13:07.066896 systemd-logind[1539]: Session 13 logged out. Waiting for processes to exit.
Mar 10 02:13:07.084621 systemd-logind[1539]: Removed session 13.
Mar 10 02:13:12.030833 systemd[1]: Started sshd@13-10.0.0.112:22-10.0.0.1:50990.service - OpenSSH per-connection server daemon (10.0.0.1:50990).
Mar 10 02:13:12.482245 sshd[5908]: Accepted publickey for core from 10.0.0.1 port 50990 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:13:12.485758 sshd-session[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:13:12.513055 systemd-logind[1539]: New session 14 of user core.
Mar 10 02:13:12.519258 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 10 02:13:13.169232 sshd[5911]: Connection closed by 10.0.0.1 port 50990
Mar 10 02:13:13.172562 sshd-session[5908]: pam_unix(sshd:session): session closed for user core
Mar 10 02:13:13.200524 systemd[1]: sshd@13-10.0.0.112:22-10.0.0.1:50990.service: Deactivated successfully.
Mar 10 02:13:13.208940 systemd[1]: session-14.scope: Deactivated successfully.
Mar 10 02:13:13.212498 systemd-logind[1539]: Session 14 logged out. Waiting for processes to exit.
Mar 10 02:13:13.221663 systemd-logind[1539]: Removed session 14.
Mar 10 02:13:18.209865 systemd[1]: Started sshd@14-10.0.0.112:22-10.0.0.1:51004.service - OpenSSH per-connection server daemon (10.0.0.1:51004).
Mar 10 02:13:18.322769 sshd[5947]: Accepted publickey for core from 10.0.0.1 port 51004 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:13:18.327524 sshd-session[5947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:13:18.352511 systemd-logind[1539]: New session 15 of user core.
Mar 10 02:13:18.363199 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 10 02:13:18.640383 sshd[5950]: Connection closed by 10.0.0.1 port 51004
Mar 10 02:13:18.641263 sshd-session[5947]: pam_unix(sshd:session): session closed for user core
Mar 10 02:13:18.647900 systemd[1]: sshd@14-10.0.0.112:22-10.0.0.1:51004.service: Deactivated successfully.
Mar 10 02:13:18.651683 systemd[1]: session-15.scope: Deactivated successfully.
Mar 10 02:13:18.653619 systemd-logind[1539]: Session 15 logged out. Waiting for processes to exit.
Mar 10 02:13:18.656444 systemd-logind[1539]: Removed session 15.
Mar 10 02:13:23.694445 systemd[1]: Started sshd@15-10.0.0.112:22-10.0.0.1:41350.service - OpenSSH per-connection server daemon (10.0.0.1:41350).
Mar 10 02:13:23.823024 sshd[5965]: Accepted publickey for core from 10.0.0.1 port 41350 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:13:23.825076 sshd-session[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:13:23.841259 systemd-logind[1539]: New session 16 of user core.
Mar 10 02:13:23.848314 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 10 02:13:24.074638 sshd[5968]: Connection closed by 10.0.0.1 port 41350
Mar 10 02:13:24.076441 sshd-session[5965]: pam_unix(sshd:session): session closed for user core
Mar 10 02:13:24.086927 systemd[1]: sshd@15-10.0.0.112:22-10.0.0.1:41350.service: Deactivated successfully.
Mar 10 02:13:24.095779 systemd[1]: session-16.scope: Deactivated successfully.
Mar 10 02:13:24.098518 systemd-logind[1539]: Session 16 logged out. Waiting for processes to exit.
Mar 10 02:13:24.103159 systemd-logind[1539]: Removed session 16.
Mar 10 02:13:29.101542 systemd[1]: Started sshd@16-10.0.0.112:22-10.0.0.1:41360.service - OpenSSH per-connection server daemon (10.0.0.1:41360).
Mar 10 02:13:29.200384 sshd[5984]: Accepted publickey for core from 10.0.0.1 port 41360 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:13:29.203594 sshd-session[5984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:13:29.213098 systemd-logind[1539]: New session 17 of user core.
Mar 10 02:13:29.227330 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 10 02:13:29.450266 sshd[5987]: Connection closed by 10.0.0.1 port 41360
Mar 10 02:13:29.450666 sshd-session[5984]: pam_unix(sshd:session): session closed for user core
Mar 10 02:13:29.463710 systemd[1]: sshd@16-10.0.0.112:22-10.0.0.1:41360.service: Deactivated successfully.
Mar 10 02:13:29.467798 systemd[1]: session-17.scope: Deactivated successfully.
Mar 10 02:13:29.478475 systemd-logind[1539]: Session 17 logged out. Waiting for processes to exit.
Mar 10 02:13:29.483779 systemd-logind[1539]: Removed session 17.
Mar 10 02:13:34.471285 systemd[1]: Started sshd@17-10.0.0.112:22-10.0.0.1:57720.service - OpenSSH per-connection server daemon (10.0.0.1:57720).
Mar 10 02:13:34.644247 sshd[6002]: Accepted publickey for core from 10.0.0.1 port 57720 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:13:34.646514 sshd-session[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:13:34.669518 systemd-logind[1539]: New session 18 of user core.
Mar 10 02:13:34.681425 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 10 02:13:35.097452 sshd[6005]: Connection closed by 10.0.0.1 port 57720
Mar 10 02:13:35.102848 sshd-session[6002]: pam_unix(sshd:session): session closed for user core
Mar 10 02:13:35.117295 systemd[1]: sshd@17-10.0.0.112:22-10.0.0.1:57720.service: Deactivated successfully.
Mar 10 02:13:35.121646 systemd[1]: session-18.scope: Deactivated successfully.
Mar 10 02:13:35.125800 systemd-logind[1539]: Session 18 logged out. Waiting for processes to exit.
Mar 10 02:13:35.135508 systemd-logind[1539]: Removed session 18.
Mar 10 02:13:40.154839 systemd[1]: Started sshd@18-10.0.0.112:22-10.0.0.1:58900.service - OpenSSH per-connection server daemon (10.0.0.1:58900).
Mar 10 02:13:40.451653 sshd[6019]: Accepted publickey for core from 10.0.0.1 port 58900 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:13:40.459055 sshd-session[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:13:40.475457 systemd-logind[1539]: New session 19 of user core.
Mar 10 02:13:40.493832 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 10 02:13:41.383115 sshd[6047]: Connection closed by 10.0.0.1 port 58900
Mar 10 02:13:41.387867 sshd-session[6019]: pam_unix(sshd:session): session closed for user core
Mar 10 02:13:41.416220 systemd[1]: sshd@18-10.0.0.112:22-10.0.0.1:58900.service: Deactivated successfully.
Mar 10 02:13:41.426121 systemd[1]: session-19.scope: Deactivated successfully.
Mar 10 02:13:41.433013 systemd-logind[1539]: Session 19 logged out. Waiting for processes to exit.
Mar 10 02:13:41.445366 systemd-logind[1539]: Removed session 19.
Mar 10 02:13:41.451824 systemd[1]: Started sshd@19-10.0.0.112:22-10.0.0.1:58902.service - OpenSSH per-connection server daemon (10.0.0.1:58902).
Mar 10 02:13:41.591066 kubelet[2766]: E0310 02:13:41.588895 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:13:41.683904 sshd[6101]: Accepted publickey for core from 10.0.0.1 port 58902 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:13:41.687272 sshd-session[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:13:41.716454 systemd-logind[1539]: New session 20 of user core.
Mar 10 02:13:41.725830 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 10 02:13:42.268337 sshd[6106]: Connection closed by 10.0.0.1 port 58902
Mar 10 02:13:42.268876 sshd-session[6101]: pam_unix(sshd:session): session closed for user core
Mar 10 02:13:42.310330 systemd[1]: sshd@19-10.0.0.112:22-10.0.0.1:58902.service: Deactivated successfully.
Mar 10 02:13:42.316867 systemd[1]: session-20.scope: Deactivated successfully.
Mar 10 02:13:42.324262 systemd-logind[1539]: Session 20 logged out. Waiting for processes to exit.
Mar 10 02:13:42.329829 systemd-logind[1539]: Removed session 20.
Mar 10 02:13:42.343660 systemd[1]: Started sshd@20-10.0.0.112:22-10.0.0.1:58906.service - OpenSSH per-connection server daemon (10.0.0.1:58906).
Mar 10 02:13:42.529889 sshd[6118]: Accepted publickey for core from 10.0.0.1 port 58906 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:13:42.537746 sshd-session[6118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:13:42.557378 systemd-logind[1539]: New session 21 of user core.
Mar 10 02:13:42.568703 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 10 02:13:42.888268 sshd[6121]: Connection closed by 10.0.0.1 port 58906
Mar 10 02:13:42.897848 sshd-session[6118]: pam_unix(sshd:session): session closed for user core
Mar 10 02:13:42.909513 systemd-logind[1539]: Session 21 logged out. Waiting for processes to exit.
Mar 10 02:13:42.916138 systemd[1]: sshd@20-10.0.0.112:22-10.0.0.1:58906.service: Deactivated successfully.
Mar 10 02:13:42.924558 systemd[1]: session-21.scope: Deactivated successfully.
Mar 10 02:13:42.938248 systemd-logind[1539]: Removed session 21.
Mar 10 02:13:46.598800 kubelet[2766]: E0310 02:13:46.598681 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:13:47.588248 kubelet[2766]: E0310 02:13:47.588012 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:13:47.939154 systemd[1]: Started sshd@21-10.0.0.112:22-10.0.0.1:58920.service - OpenSSH per-connection server daemon (10.0.0.1:58920).
Mar 10 02:13:48.097846 sshd[6157]: Accepted publickey for core from 10.0.0.1 port 58920 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:13:48.102295 sshd-session[6157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:13:48.117057 systemd-logind[1539]: New session 22 of user core.
Mar 10 02:13:48.129420 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 10 02:13:48.362343 sshd[6160]: Connection closed by 10.0.0.1 port 58920
Mar 10 02:13:48.362812 sshd-session[6157]: pam_unix(sshd:session): session closed for user core
Mar 10 02:13:48.373186 systemd[1]: sshd@21-10.0.0.112:22-10.0.0.1:58920.service: Deactivated successfully.
Mar 10 02:13:48.377058 systemd[1]: session-22.scope: Deactivated successfully.
Mar 10 02:13:48.380895 systemd-logind[1539]: Session 22 logged out. Waiting for processes to exit.
Mar 10 02:13:48.385628 systemd-logind[1539]: Removed session 22.
Mar 10 02:13:52.588802 kubelet[2766]: E0310 02:13:52.588335 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:13:53.390387 systemd[1]: Started sshd@22-10.0.0.112:22-10.0.0.1:49412.service - OpenSSH per-connection server daemon (10.0.0.1:49412).
Mar 10 02:13:53.489967 sshd[6177]: Accepted publickey for core from 10.0.0.1 port 49412 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:13:53.492527 sshd-session[6177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:13:53.505296 systemd-logind[1539]: New session 23 of user core.
Mar 10 02:13:53.525401 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 10 02:13:53.586766 kubelet[2766]: E0310 02:13:53.586175 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:13:53.763305 sshd[6180]: Connection closed by 10.0.0.1 port 49412
Mar 10 02:13:53.762887 sshd-session[6177]: pam_unix(sshd:session): session closed for user core
Mar 10 02:13:53.780116 systemd[1]: sshd@22-10.0.0.112:22-10.0.0.1:49412.service: Deactivated successfully.
Mar 10 02:13:53.783387 systemd[1]: session-23.scope: Deactivated successfully.
Mar 10 02:13:53.786408 systemd-logind[1539]: Session 23 logged out. Waiting for processes to exit.
Mar 10 02:13:53.789750 systemd[1]: Started sshd@23-10.0.0.112:22-10.0.0.1:49414.service - OpenSSH per-connection server daemon (10.0.0.1:49414).
Mar 10 02:13:53.791812 systemd-logind[1539]: Removed session 23.
Mar 10 02:13:53.867996 sshd[6193]: Accepted publickey for core from 10.0.0.1 port 49414 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:13:53.870698 sshd-session[6193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:13:53.881349 systemd-logind[1539]: New session 24 of user core.
Mar 10 02:13:53.891384 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 10 02:13:54.712272 sshd[6196]: Connection closed by 10.0.0.1 port 49414
Mar 10 02:13:54.716472 sshd-session[6193]: pam_unix(sshd:session): session closed for user core
Mar 10 02:13:54.729485 systemd[1]: Started sshd@24-10.0.0.112:22-10.0.0.1:49426.service - OpenSSH per-connection server daemon (10.0.0.1:49426).
Mar 10 02:13:54.736492 systemd[1]: sshd@23-10.0.0.112:22-10.0.0.1:49414.service: Deactivated successfully.
Mar 10 02:13:54.741040 systemd[1]: session-24.scope: Deactivated successfully.
Mar 10 02:13:54.742642 systemd-logind[1539]: Session 24 logged out. Waiting for processes to exit.
Mar 10 02:13:54.745758 systemd-logind[1539]: Removed session 24.
Mar 10 02:13:54.968735 sshd[6206]: Accepted publickey for core from 10.0.0.1 port 49426 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:13:54.975795 sshd-session[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:13:54.998493 systemd-logind[1539]: New session 25 of user core.
Mar 10 02:13:55.015115 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 10 02:13:56.212519 sshd[6212]: Connection closed by 10.0.0.1 port 49426
Mar 10 02:13:56.213412 sshd-session[6206]: pam_unix(sshd:session): session closed for user core
Mar 10 02:13:56.242691 systemd[1]: sshd@24-10.0.0.112:22-10.0.0.1:49426.service: Deactivated successfully.
Mar 10 02:13:56.249433 systemd[1]: session-25.scope: Deactivated successfully.
Mar 10 02:13:56.251355 systemd-logind[1539]: Session 25 logged out. Waiting for processes to exit.
Mar 10 02:13:56.257703 systemd[1]: Started sshd@25-10.0.0.112:22-10.0.0.1:49438.service - OpenSSH per-connection server daemon (10.0.0.1:49438).
Mar 10 02:13:56.264210 systemd-logind[1539]: Removed session 25.
Mar 10 02:13:56.418481 sshd[6237]: Accepted publickey for core from 10.0.0.1 port 49438 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:13:56.421084 sshd-session[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:13:56.438027 systemd-logind[1539]: New session 26 of user core.
Mar 10 02:13:56.445726 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 10 02:13:56.590485 kubelet[2766]: E0310 02:13:56.590140 2766 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 02:13:57.240890 sshd[6242]: Connection closed by 10.0.0.1 port 49438
Mar 10 02:13:57.241327 sshd-session[6237]: pam_unix(sshd:session): session closed for user core
Mar 10 02:13:57.265655 systemd[1]: sshd@25-10.0.0.112:22-10.0.0.1:49438.service: Deactivated successfully.
Mar 10 02:13:57.271773 systemd[1]: session-26.scope: Deactivated successfully.
Mar 10 02:13:57.278240 systemd-logind[1539]: Session 26 logged out. Waiting for processes to exit.
Mar 10 02:13:57.285241 systemd[1]: Started sshd@26-10.0.0.112:22-10.0.0.1:49446.service - OpenSSH per-connection server daemon (10.0.0.1:49446).
Mar 10 02:13:57.290321 systemd-logind[1539]: Removed session 26.
Mar 10 02:13:57.361926 sshd[6266]: Accepted publickey for core from 10.0.0.1 port 49446 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:13:57.366691 sshd-session[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:13:57.385288 systemd-logind[1539]: New session 27 of user core.
Mar 10 02:13:57.394383 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 10 02:13:57.556272 sshd[6275]: Connection closed by 10.0.0.1 port 49446
Mar 10 02:13:57.555700 sshd-session[6266]: pam_unix(sshd:session): session closed for user core
Mar 10 02:13:57.565099 systemd[1]: sshd@26-10.0.0.112:22-10.0.0.1:49446.service: Deactivated successfully.
Mar 10 02:13:57.567919 systemd[1]: session-27.scope: Deactivated successfully.
Mar 10 02:13:57.571295 systemd-logind[1539]: Session 27 logged out. Waiting for processes to exit.
Mar 10 02:13:57.583454 systemd-logind[1539]: Removed session 27.
Mar 10 02:14:02.577998 systemd[1]: Started sshd@27-10.0.0.112:22-10.0.0.1:38486.service - OpenSSH per-connection server daemon (10.0.0.1:38486).
Mar 10 02:14:02.676367 sshd[6288]: Accepted publickey for core from 10.0.0.1 port 38486 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:14:02.679933 sshd-session[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:14:02.694469 systemd-logind[1539]: New session 28 of user core.
Mar 10 02:14:02.707310 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 10 02:14:02.852788 sshd[6291]: Connection closed by 10.0.0.1 port 38486
Mar 10 02:14:02.854457 sshd-session[6288]: pam_unix(sshd:session): session closed for user core
Mar 10 02:14:02.863805 systemd[1]: sshd@27-10.0.0.112:22-10.0.0.1:38486.service: Deactivated successfully.
Mar 10 02:14:02.867288 systemd[1]: session-28.scope: Deactivated successfully.
Mar 10 02:14:02.871810 systemd-logind[1539]: Session 28 logged out. Waiting for processes to exit.
Mar 10 02:14:02.877077 systemd-logind[1539]: Removed session 28.
Mar 10 02:14:07.873574 systemd[1]: Started sshd@28-10.0.0.112:22-10.0.0.1:38498.service - OpenSSH per-connection server daemon (10.0.0.1:38498).
Mar 10 02:14:07.954499 sshd[6313]: Accepted publickey for core from 10.0.0.1 port 38498 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:14:07.955886 sshd-session[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:14:07.966425 systemd-logind[1539]: New session 29 of user core.
Mar 10 02:14:07.980372 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 10 02:14:08.130119 sshd[6316]: Connection closed by 10.0.0.1 port 38498
Mar 10 02:14:08.131276 sshd-session[6313]: pam_unix(sshd:session): session closed for user core
Mar 10 02:14:08.140537 systemd[1]: sshd@28-10.0.0.112:22-10.0.0.1:38498.service: Deactivated successfully.
Mar 10 02:14:08.143360 systemd[1]: session-29.scope: Deactivated successfully.
Mar 10 02:14:08.144718 systemd-logind[1539]: Session 29 logged out. Waiting for processes to exit.
Mar 10 02:14:08.146886 systemd-logind[1539]: Removed session 29.
Mar 10 02:14:13.157321 systemd[1]: Started sshd@29-10.0.0.112:22-10.0.0.1:54338.service - OpenSSH per-connection server daemon (10.0.0.1:54338).
Mar 10 02:14:13.282619 sshd[6424]: Accepted publickey for core from 10.0.0.1 port 54338 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:14:13.285009 sshd-session[6424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:14:13.301045 systemd-logind[1539]: New session 30 of user core.
Mar 10 02:14:13.307801 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 10 02:14:13.706254 sshd[6427]: Connection closed by 10.0.0.1 port 54338
Mar 10 02:14:13.706735 sshd-session[6424]: pam_unix(sshd:session): session closed for user core
Mar 10 02:14:13.712943 systemd[1]: sshd@29-10.0.0.112:22-10.0.0.1:54338.service: Deactivated successfully.
Mar 10 02:14:13.717440 systemd[1]: session-30.scope: Deactivated successfully.
Mar 10 02:14:13.719791 systemd-logind[1539]: Session 30 logged out. Waiting for processes to exit.
Mar 10 02:14:13.725458 systemd-logind[1539]: Removed session 30.
Mar 10 02:14:18.733252 systemd[1]: Started sshd@30-10.0.0.112:22-10.0.0.1:54352.service - OpenSSH per-connection server daemon (10.0.0.1:54352).
Mar 10 02:14:18.838057 sshd[6492]: Accepted publickey for core from 10.0.0.1 port 54352 ssh2: RSA SHA256:d2FUFdel+KP9pqsSrlp8nTsY/4RJTtu7ZDVkbTKQqjY
Mar 10 02:14:18.844589 sshd-session[6492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:14:18.864729 systemd-logind[1539]: New session 31 of user core.
Mar 10 02:14:18.881561 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 10 02:14:19.183099 sshd[6495]: Connection closed by 10.0.0.1 port 54352
Mar 10 02:14:19.185447 sshd-session[6492]: pam_unix(sshd:session): session closed for user core
Mar 10 02:14:19.204607 systemd[1]: sshd@30-10.0.0.112:22-10.0.0.1:54352.service: Deactivated successfully.
Mar 10 02:14:19.211772 systemd[1]: session-31.scope: Deactivated successfully.
Mar 10 02:14:19.215170 systemd-logind[1539]: Session 31 logged out. Waiting for processes to exit.
Mar 10 02:14:19.222674 systemd-logind[1539]: Removed session 31.