May 13 12:54:21.840634 kernel: Linux version 6.12.28-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 11:28:50 -00 2025
May 13 12:54:21.840671 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7099d7ee582d4f3e6d25a3763207cfa25fb4eb117c83034e2c517b959b8370a1
May 13 12:54:21.840683 kernel: BIOS-provided physical RAM map:
May 13 12:54:21.840690 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 13 12:54:21.840696 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 13 12:54:21.840702 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 13 12:54:21.840710 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 13 12:54:21.840716 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 13 12:54:21.840731 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 13 12:54:21.840738 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 13 12:54:21.840744 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 13 12:54:21.840751 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 13 12:54:21.840757 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 13 12:54:21.840763 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 13 12:54:21.840786 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 13 12:54:21.840793 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 13 12:54:21.840800 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 13 12:54:21.840807 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 13 12:54:21.840814 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 13 12:54:21.840820 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 13 12:54:21.840827 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 13 12:54:21.840834 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 13 12:54:21.840840 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 13 12:54:21.840847 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 12:54:21.840854 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 13 12:54:21.840876 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 12:54:21.840884 kernel: NX (Execute Disable) protection: active
May 13 12:54:21.840893 kernel: APIC: Static calls initialized
May 13 12:54:21.840902 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
May 13 12:54:21.840911 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
May 13 12:54:21.840920 kernel: extended physical RAM map:
May 13 12:54:21.840928 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 13 12:54:21.840937 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 13 12:54:21.840947 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 13 12:54:21.840955 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 13 12:54:21.840964 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 13 12:54:21.840977 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 13 12:54:21.840986 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 13 12:54:21.840995 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
May 13 12:54:21.841004 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
May 13 12:54:21.841018 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
May 13 12:54:21.841027 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
May 13 12:54:21.841039 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
May 13 12:54:21.841056 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 13 12:54:21.841067 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 13 12:54:21.841076 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 13 12:54:21.841086 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 13 12:54:21.841096 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 13 12:54:21.841105 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 13 12:54:21.841115 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 13 12:54:21.841125 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 13 12:54:21.841138 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 13 12:54:21.841148 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 13 12:54:21.841158 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 13 12:54:21.841167 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 13 12:54:21.841177 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 12:54:21.841187 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 13 12:54:21.841197 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 12:54:21.841206 kernel: efi: EFI v2.7 by EDK II
May 13 12:54:21.841216 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
May 13 12:54:21.841225 kernel: random: crng init done
May 13 12:54:21.841235 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 13 12:54:21.841244 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 13 12:54:21.841257 kernel: secureboot: Secure boot disabled
May 13 12:54:21.841267 kernel: SMBIOS 2.8 present.
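[Editor's note: the e820 map above is the firmware's view of physical memory, and the zone setup and "reserve RAM buffer" lines later in the log are derived from it. As an illustrative aid (not part of the log; the regex and function names are ours), a few lines of Python can parse such a dump and total the usable ranges:]

    import re

    # Matches lines like: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
    E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w[\w ]*)")

    def parse_e820(dmesg):
        # Yield (start, end, type) for every BIOS-e820 line; bounds are inclusive.
        for line in dmesg.splitlines():
            m = E820_RE.search(line)
            if m:
                yield int(m.group(1), 16), int(m.group(2), 16), m.group(3).strip()

    def usable_kib(dmesg):
        # The end address is inclusive, hence the +1.
        return sum(end - start + 1
                   for start, end, t in parse_e820(dmesg)
                   if t == "usable") // 1024

[Run against the map above, this totals about 2,565,100 KiB of usable RAM, in line with the "Memory: 2422664K/2565800K available" line later in the boot.]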
May 13 12:54:21.841276 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 13 12:54:21.841285 kernel: DMI: Memory slots populated: 1/1
May 13 12:54:21.841295 kernel: Hypervisor detected: KVM
May 13 12:54:21.841304 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 12:54:21.841314 kernel: kvm-clock: using sched offset of 3511837784 cycles
May 13 12:54:21.841324 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 12:54:21.841334 kernel: tsc: Detected 2794.748 MHz processor
May 13 12:54:21.841344 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 12:54:21.841353 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 12:54:21.841364 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 13 12:54:21.841372 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 13 12:54:21.841379 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 12:54:21.841386 kernel: Using GB pages for direct mapping
May 13 12:54:21.841393 kernel: ACPI: Early table checksum verification disabled
May 13 12:54:21.841401 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 13 12:54:21.841408 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 13 12:54:21.841415 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:54:21.841423 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:54:21.841432 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 13 12:54:21.841439 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:54:21.841446 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:54:21.841454 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:54:21.841461 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:54:21.841468 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 13 12:54:21.841475 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 13 12:54:21.841483 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 13 12:54:21.841492 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 13 12:54:21.841499 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 13 12:54:21.841506 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 13 12:54:21.841513 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 13 12:54:21.841520 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 13 12:54:21.841527 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 13 12:54:21.841534 kernel: No NUMA configuration found
May 13 12:54:21.841541 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 13 12:54:21.841548 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
May 13 12:54:21.841558 kernel: Zone ranges:
May 13 12:54:21.841565 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 12:54:21.841573 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 13 12:54:21.841580 kernel: Normal empty
May 13 12:54:21.841588 kernel: Device empty
May 13 12:54:21.841598 kernel: Movable zone start for each node
May 13 12:54:21.841608 kernel: Early memory node ranges
May 13 12:54:21.841617 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 13 12:54:21.841624 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 13 12:54:21.841631 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 13 12:54:21.841640 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 13 12:54:21.841648 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 13 12:54:21.841654 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 13 12:54:21.841661 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
May 13 12:54:21.841669 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
May 13 12:54:21.841676 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 13 12:54:21.841683 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 12:54:21.841690 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 13 12:54:21.841706 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 13 12:54:21.841715 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 12:54:21.841725 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 13 12:54:21.841736 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 13 12:54:21.841749 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 13 12:54:21.841757 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 13 12:54:21.841769 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 13 12:54:21.841787 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 12:54:21.841801 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 12:54:21.841821 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 12:54:21.841829 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 12:54:21.841836 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 12:54:21.841843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 12:54:21.841851 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 12:54:21.841858 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 12:54:21.841887 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 12:54:21.841894 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 12:54:21.841901 kernel: TSC deadline timer available
May 13 12:54:21.841911 kernel: CPU topo: Max. logical packages: 1
May 13 12:54:21.841919 kernel: CPU topo: Max. logical dies: 1
May 13 12:54:21.841926 kernel: CPU topo: Max. dies per package: 1
May 13 12:54:21.841933 kernel: CPU topo: Max. threads per core: 1
May 13 12:54:21.841943 kernel: CPU topo: Num. cores per package: 4
May 13 12:54:21.841953 kernel: CPU topo: Num. threads per package: 4
May 13 12:54:21.841963 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 13 12:54:21.841973 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 12:54:21.841984 kernel: kvm-guest: KVM setup pv remote TLB flush
May 13 12:54:21.841994 kernel: kvm-guest: setup PV sched yield
May 13 12:54:21.842008 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 13 12:54:21.842018 kernel: Booting paravirtualized kernel on KVM
May 13 12:54:21.842029 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 12:54:21.842040 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 13 12:54:21.842059 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 13 12:54:21.842070 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 13 12:54:21.842080 kernel: pcpu-alloc: [0] 0 1 2 3
May 13 12:54:21.842090 kernel: kvm-guest: PV spinlocks enabled
May 13 12:54:21.842101 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 12:54:21.842127 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7099d7ee582d4f3e6d25a3763207cfa25fb4eb117c83034e2c517b959b8370a1
May 13 12:54:21.842146 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 12:54:21.842157 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 12:54:21.842167 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 12:54:21.842177 kernel: Fallback order for Node 0: 0
May 13 12:54:21.842187 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
May 13 12:54:21.842197 kernel: Policy zone: DMA32
May 13 12:54:21.842207 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 12:54:21.842221 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 12:54:21.842232 kernel: ftrace: allocating 40071 entries in 157 pages
May 13 12:54:21.842242 kernel: ftrace: allocated 157 pages with 5 groups
May 13 12:54:21.842251 kernel: Dynamic Preempt: voluntary
May 13 12:54:21.842261 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 12:54:21.842272 kernel: rcu: RCU event tracing is enabled.
May 13 12:54:21.842281 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 12:54:21.842289 kernel: Trampoline variant of Tasks RCU enabled.
May 13 12:54:21.842296 kernel: Rude variant of Tasks RCU enabled.
May 13 12:54:21.842307 kernel: Tracing variant of Tasks RCU enabled.
May 13 12:54:21.842314 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 12:54:21.842322 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 12:54:21.842329 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 12:54:21.842337 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 12:54:21.842344 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 12:54:21.842352 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 13 12:54:21.842359 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 12:54:21.842367 kernel: Console: colour dummy device 80x25
May 13 12:54:21.842376 kernel: printk: legacy console [ttyS0] enabled
May 13 12:54:21.842384 kernel: ACPI: Core revision 20240827
May 13 12:54:21.842392 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 13 12:54:21.842399 kernel: APIC: Switch to symmetric I/O mode setup
May 13 12:54:21.842406 kernel: x2apic enabled
May 13 12:54:21.842414 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 12:54:21.842421 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 13 12:54:21.842429 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 13 12:54:21.842436 kernel: kvm-guest: setup PV IPIs
May 13 12:54:21.842446 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 12:54:21.842454 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 13 12:54:21.842461 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 13 12:54:21.842469 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 13 12:54:21.842477 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 13 12:54:21.842484 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 13 12:54:21.842491 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 12:54:21.842499 kernel: Spectre V2 : Mitigation: Retpolines
May 13 12:54:21.842506 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 12:54:21.842516 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 13 12:54:21.842523 kernel: RETBleed: Mitigation: untrained return thunk
May 13 12:54:21.842531 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 12:54:21.842539 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 13 12:54:21.842546 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 13 12:54:21.842554 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 13 12:54:21.842562 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 13 12:54:21.842569 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 12:54:21.842579 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 12:54:21.842588 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 12:54:21.842598 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 12:54:21.842609 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 13 12:54:21.842618 kernel: Freeing SMP alternatives memory: 32K
May 13 12:54:21.842625 kernel: pid_max: default: 32768 minimum: 301
May 13 12:54:21.842633 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 13 12:54:21.842640 kernel: landlock: Up and running.
May 13 12:54:21.842647 kernel: SELinux: Initializing.
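[Editor's note: the "Calibrating delay loop (skipped)" figure above is arithmetic rather than a measurement: under KVM the kernel derives lpj (loops per jiffy) from the TSC frequency it already knows (2794.748 MHz) and prints BogoMIPS from it with truncating integer division. A worked check of the logged numbers, our sketch; HZ=1000 is an assumption consistent with this lpj value:]

    HZ = 1000                              # assumed tick rate; lpj equals tsc_khz at HZ=1000
    lpj = 2794748                          # from "(lpj=2794748)" in the log line above
    whole = lpj // (500000 // HZ)          # integer part, truncated as the kernel does
    frac = (lpj // (5000 // HZ)) % 100     # two truncated decimal places
    print(f"{whole}.{frac:02d} BogoMIPS")  # -> 5589.49 BogoMIPS, matching the log

[Four CPUs at that rate also account for the "22357.98 BogoMIPS" total that smpboot reports further down.]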
May 13 12:54:21.842657 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 12:54:21.842665 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 12:54:21.842672 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 13 12:54:21.842680 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 13 12:54:21.842687 kernel: ... version:                0
May 13 12:54:21.842694 kernel: ... bit width:              48
May 13 12:54:21.842702 kernel: ... generic registers:      6
May 13 12:54:21.842709 kernel: ... value mask:             0000ffffffffffff
May 13 12:54:21.842717 kernel: ... max period:             00007fffffffffff
May 13 12:54:21.842726 kernel: ... fixed-purpose events:   0
May 13 12:54:21.842733 kernel: ... event mask:             000000000000003f
May 13 12:54:21.842741 kernel: signal: max sigframe size: 1776
May 13 12:54:21.842748 kernel: rcu: Hierarchical SRCU implementation.
May 13 12:54:21.842756 kernel: rcu: Max phase no-delay instances is 400.
May 13 12:54:21.842763 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 13 12:54:21.842770 kernel: smp: Bringing up secondary CPUs ...
May 13 12:54:21.842778 kernel: smpboot: x86: Booting SMP configuration:
May 13 12:54:21.842785 kernel: .... node #0, CPUs: #1 #2 #3
May 13 12:54:21.842793 kernel: smp: Brought up 1 node, 4 CPUs
May 13 12:54:21.842802 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 13 12:54:21.842810 kernel: Memory: 2422664K/2565800K available (14336K kernel code, 2430K rwdata, 9948K rodata, 54420K init, 2548K bss, 137196K reserved, 0K cma-reserved)
May 13 12:54:21.842817 kernel: devtmpfs: initialized
May 13 12:54:21.842825 kernel: x86/mm: Memory block size: 128MB
May 13 12:54:21.842832 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 13 12:54:21.842840 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 13 12:54:21.842847 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 13 12:54:21.842855 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 13 12:54:21.842878 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
May 13 12:54:21.842886 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 13 12:54:21.842894 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 12:54:21.842901 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 12:54:21.842909 kernel: pinctrl core: initialized pinctrl subsystem
May 13 12:54:21.842916 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 12:54:21.842923 kernel: audit: initializing netlink subsys (disabled)
May 13 12:54:21.842931 kernel: audit: type=2000 audit(1747140859.858:1): state=initialized audit_enabled=0 res=1
May 13 12:54:21.842939 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 12:54:21.842949 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 12:54:21.842959 kernel: cpuidle: using governor menu
May 13 12:54:21.842969 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 12:54:21.842980 kernel: dca service started, version 1.12.1
May 13 12:54:21.842990 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
May 13 12:54:21.843000 kernel: PCI: Using configuration type 1 for base access
May 13 12:54:21.843011 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 12:54:21.843021 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 12:54:21.843032 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 13 12:54:21.843045 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 12:54:21.843065 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 12:54:21.843075 kernel: ACPI: Added _OSI(Module Device)
May 13 12:54:21.843085 kernel: ACPI: Added _OSI(Processor Device)
May 13 12:54:21.843095 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 12:54:21.843105 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 12:54:21.843115 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 12:54:21.843125 kernel: ACPI: Interpreter enabled
May 13 12:54:21.843135 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 12:54:21.843148 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 12:54:21.843158 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 12:54:21.843168 kernel: PCI: Using E820 reservations for host bridge windows
May 13 12:54:21.843178 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 13 12:54:21.843188 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 12:54:21.843430 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 12:54:21.843552 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 13 12:54:21.843677 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 13 12:54:21.843692 kernel: PCI host bridge to bus 0000:00
May 13 12:54:21.843810 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 12:54:21.843932 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 12:54:21.844059 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 12:54:21.844220 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 13 12:54:21.844350 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 13 12:54:21.844480 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 13 12:54:21.844617 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 12:54:21.844752 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 13 12:54:21.844917 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 13 12:54:21.845039 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
May 13 12:54:21.845162 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
May 13 12:54:21.845276 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
May 13 12:54:21.845392 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 12:54:21.845541 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 13 12:54:21.845688 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
May 13 12:54:21.845819 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
May 13 12:54:21.845955 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
May 13 12:54:21.846086 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 13 12:54:21.846201 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
May 13 12:54:21.846319 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
May 13 12:54:21.846431 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
May 13 12:54:21.846552 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 13 12:54:21.846676 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
May 13 12:54:21.846791 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
May 13 12:54:21.846936 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
May 13 12:54:21.847059 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
May 13 12:54:21.847193 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 13 12:54:21.847305 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 13 12:54:21.847435 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 13 12:54:21.847561 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
May 13 12:54:21.847706 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
May 13 12:54:21.847882 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 13 12:54:21.848038 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
May 13 12:54:21.848062 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 12:54:21.848073 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 12:54:21.848083 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 12:54:21.848095 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 12:54:21.848106 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 13 12:54:21.848118 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 13 12:54:21.848128 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 13 12:54:21.848138 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 13 12:54:21.848153 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 13 12:54:21.848163 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 13 12:54:21.848173 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 13 12:54:21.848184 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 13 12:54:21.848194 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 13 12:54:21.848204 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 13 12:54:21.848214 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 13 12:54:21.848225 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 13 12:54:21.848235 kernel: iommu: Default domain type: Translated
May 13 12:54:21.848249 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 12:54:21.848259 kernel: efivars: Registered efivars operations
May 13 12:54:21.848269 kernel: PCI: Using ACPI for IRQ routing
May 13 12:54:21.848279 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 12:54:21.848289 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 13 12:54:21.848300 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 13 12:54:21.848310 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
May 13 12:54:21.848320 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
May 13 12:54:21.848330 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 13 12:54:21.848343 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 13 12:54:21.848353 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
May 13 12:54:21.848363 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 13 12:54:21.848515 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 13 12:54:21.848665 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 13 12:54:21.848811 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 12:54:21.848825 kernel: vgaarb: loaded
May 13 12:54:21.848840 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 13 12:54:21.848850 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 13 12:54:21.848877 kernel: clocksource: Switched to clocksource kvm-clock
May 13 12:54:21.848888 kernel: VFS: Disk quotas dquot_6.6.0
May 13 12:54:21.848899 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 12:54:21.848909 kernel: pnp: PnP ACPI init
May 13 12:54:21.849073 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 13 12:54:21.849108 kernel: pnp: PnP ACPI: found 6 devices
May 13 12:54:21.849122 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 12:54:21.849135 kernel: NET: Registered PF_INET protocol family
May 13 12:54:21.849146 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 12:54:21.849156 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 12:54:21.849167 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 12:54:21.849178 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 12:54:21.849189 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 12:54:21.849200 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 12:54:21.849210 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 12:54:21.849224 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 12:54:21.849234 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 12:54:21.849245 kernel: NET: Registered PF_XDP protocol family
May 13 12:54:21.849394 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 13 12:54:21.849542 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 13 12:54:21.849676 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 12:54:21.849808 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 12:54:21.850004 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 12:54:21.850155 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 13 12:54:21.850610 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 13 12:54:21.850749 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 13 12:54:21.850764 kernel: PCI: CLS 0 bytes, default 64
May 13 12:54:21.850776 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 13 12:54:21.850786 kernel: Initialise system trusted keyrings
May 13 12:54:21.850797 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 12:54:21.850808 kernel: Key type asymmetric registered
May 13 12:54:21.850823 kernel: Asymmetric key parser 'x509' registered
May 13 12:54:21.850833 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
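[Editor's note: the network hash-table lines above are internally consistent: for each table, entries multiplied by bytes per bucket gives the table size, and "order" is log2 of that size in 4 KiB pages. A quick illustrative check of the "TCP established" line; reading the 8-byte quotient as one chain-head pointer per bucket is our interpretation:]

    PAGE = 4096
    # "TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)"
    entries, nbytes = 32768, 262144
    print(nbytes // entries)          # 8 bytes per bucket
    pages = nbytes // PAGE            # 64 pages
    print(pages.bit_length() - 1)     # order 6, i.e. 2**6 pages, matching the log

[The same relation holds for the dentry, inode, and UDP tables logged earlier.]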
May 13 12:54:21.850844 kernel: io scheduler mq-deadline registered
May 13 12:54:21.850855 kernel: io scheduler kyber registered
May 13 12:54:21.850886 kernel: io scheduler bfq registered
May 13 12:54:21.850898 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 12:54:21.850909 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 13 12:54:21.850924 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 13 12:54:21.850934 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 13 12:54:21.850945 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 12:54:21.850956 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 12:54:21.850967 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 12:54:21.850977 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 12:54:21.850988 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 12:54:21.851158 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 12:54:21.851179 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 12:54:21.851321 kernel: rtc_cmos 00:04: registered as rtc0
May 13 12:54:21.851460 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T12:54:21 UTC (1747140861)
May 13 12:54:21.851598 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 13 12:54:21.851612 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 13 12:54:21.851623 kernel: efifb: probing for efifb
May 13 12:54:21.851634 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 13 12:54:21.851645 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 13 12:54:21.851659 kernel: efifb: scrolling: redraw
May 13 12:54:21.851670 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 13 12:54:21.851681 kernel: Console: switching to colour frame buffer device 160x50
May 13 12:54:21.851692 kernel: fb0: EFI VGA frame buffer device
May 13 12:54:21.851702 kernel: pstore: Using crash dump compression: deflate
May 13 12:54:21.851713 kernel: pstore: Registered efi_pstore as persistent store backend
May 13 12:54:21.851724 kernel: NET: Registered PF_INET6 protocol family
May 13 12:54:21.851734 kernel: Segment Routing with IPv6
May 13 12:54:21.851745 kernel: In-situ OAM (IOAM) with IPv6
May 13 12:54:21.851756 kernel: NET: Registered PF_PACKET protocol family
May 13 12:54:21.851770 kernel: Key type dns_resolver registered
May 13 12:54:21.851780 kernel: IPI shorthand broadcast: enabled
May 13 12:54:21.851791 kernel: sched_clock: Marking stable (2781003074, 157253366)->(2952787775, -14531335)
May 13 12:54:21.851802 kernel: registered taskstats version 1
May 13 12:54:21.851813 kernel: Loading compiled-in X.509 certificates
May 13 12:54:21.851824 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.28-flatcar: d81efc2839896c91a2830d4cfad7b0572af8b26a'
May 13 12:54:21.851834 kernel: Demotion targets for Node 0: null
May 13 12:54:21.851845 kernel: Key type .fscrypt registered
May 13 12:54:21.851855 kernel: Key type fscrypt-provisioning registered
May 13 12:54:21.851888 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 12:54:21.851899 kernel: ima: Allocated hash algorithm: sha1
May 13 12:54:21.851910 kernel: ima: No architecture policies found
May 13 12:54:21.851920 kernel: clk: Disabling unused clocks
May 13 12:54:21.851931 kernel: Warning: unable to open an initial console.
May 13 12:54:21.851942 kernel: Freeing unused kernel image (initmem) memory: 54420K
May 13 12:54:21.851953 kernel: Write protecting the kernel read-only data: 24576k
May 13 12:54:21.851964 kernel: Freeing unused kernel image (rodata/data gap) memory: 292K
May 13 12:54:21.851978 kernel: Run /init as init process
May 13 12:54:21.851989 kernel:   with arguments:
May 13 12:54:21.851999 kernel:     /init
May 13 12:54:21.852010 kernel:   with environment:
May 13 12:54:21.852020 kernel:     HOME=/
May 13 12:54:21.852030 kernel:     TERM=linux
May 13 12:54:21.852041 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 12:54:21.852060 systemd[1]: Successfully made /usr/ read-only.
May 13 12:54:21.852075 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 12:54:21.852091 systemd[1]: Detected virtualization kvm.
May 13 12:54:21.852102 systemd[1]: Detected architecture x86-64.
May 13 12:54:21.852113 systemd[1]: Running in initrd.
May 13 12:54:21.852124 systemd[1]: No hostname configured, using default hostname.
May 13 12:54:21.852136 systemd[1]: Hostname set to <localhost>.
May 13 12:54:21.852148 systemd[1]: Initializing machine ID from VM UUID.
May 13 12:54:21.852159 systemd[1]: Queued start job for default target initrd.target.
May 13 12:54:21.852173 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:54:21.852185 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:54:21.852197 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 12:54:21.852208 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 12:54:21.852220 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 12:54:21.852233 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 12:54:21.852246 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 12:54:21.852260 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 12:54:21.852272 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:54:21.852283 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 12:54:21.852295 systemd[1]: Reached target paths.target - Path Units.
May 13 12:54:21.852306 systemd[1]: Reached target slices.target - Slice Units.
May 13 12:54:21.852317 systemd[1]: Reached target swap.target - Swaps.
May 13 12:54:21.852329 systemd[1]: Reached target timers.target - Timer Units.
May 13 12:54:21.852340 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 12:54:21.852351 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 12:54:21.852365 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 12:54:21.852377 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 12:54:21.852391 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:54:21.852402 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 12:54:21.852414 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:54:21.852425 systemd[1]: Reached target sockets.target - Socket Units.
May 13 12:54:21.852437 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 12:54:21.852448 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 12:54:21.852462 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 12:54:21.852475 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 13 12:54:21.852487 systemd[1]: Starting systemd-fsck-usr.service...
May 13 12:54:21.852498 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 12:54:21.852509 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 12:54:21.852521 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:54:21.852535 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 12:54:21.852549 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:54:21.852561 systemd[1]: Finished systemd-fsck-usr.service.
May 13 12:54:21.852597 systemd-journald[220]: Collecting audit messages is disabled.
May 13 12:54:21.852629 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 12:54:21.852641 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:54:21.852653 systemd-journald[220]: Journal started
May 13 12:54:21.852678 systemd-journald[220]: Runtime Journal (/run/log/journal/e5f8864271c84efca219c5d415798b09) is 6M, max 48.5M, 42.4M free.
May 13 12:54:21.842152 systemd-modules-load[221]: Inserted module 'overlay'
May 13 12:54:21.856883 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 12:54:21.860968 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 12:54:21.864131 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 12:54:21.866255 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 12:54:21.872480 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 12:54:21.873378 systemd-modules-load[221]: Inserted module 'br_netfilter'
May 13 12:54:21.874352 kernel: Bridge firewalling registered
May 13 12:54:21.876858 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 12:54:21.877238 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 12:54:21.879366 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 12:54:21.889686 systemd-tmpfiles[238]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 13 12:54:21.892094 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:54:21.892365 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 12:54:21.895827 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:54:21.898222 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 12:54:21.901252 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 12:54:21.903345 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 12:54:21.926351 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7099d7ee582d4f3e6d25a3763207cfa25fb4eb117c83034e2c517b959b8370a1
May 13 12:54:21.945541 systemd-resolved[261]: Positive Trust Anchors:
May 13 12:54:21.945555 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 12:54:21.945586 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 12:54:21.948040 systemd-resolved[261]: Defaulting to hostname 'linux'.
May 13 12:54:21.949081 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 12:54:21.955509 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 12:54:22.034902 kernel: SCSI subsystem initialized
May 13 12:54:22.044888 kernel: Loading iSCSI transport class v2.0-870.
May 13 12:54:22.056893 kernel: iscsi: registered transport (tcp)
May 13 12:54:22.078917 kernel: iscsi: registered transport (qla4xxx)
May 13 12:54:22.078972 kernel: QLogic iSCSI HBA Driver
May 13 12:54:22.099606 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 12:54:22.120593 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:54:22.124281 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 12:54:22.180684 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 12:54:22.184323 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 12:54:22.246885 kernel: raid6: avx2x4 gen() 30284 MB/s
May 13 12:54:22.263890 kernel: raid6: avx2x2 gen() 31359 MB/s
May 13 12:54:22.280976 kernel: raid6: avx2x1 gen() 26056 MB/s
May 13 12:54:22.280996 kernel: raid6: using algorithm avx2x2 gen() 31359 MB/s
May 13 12:54:22.298984 kernel: raid6: .... xor() 19992 MB/s, rmw enabled
May 13 12:54:22.299002 kernel: raid6: using avx2x2 recovery algorithm
May 13 12:54:22.319890 kernel: xor: automatically using best checksumming function   avx
May 13 12:54:22.480889 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 12:54:22.489487 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 12:54:22.493164 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 12:54:22.532590 systemd-udevd[473]: Using default interface naming scheme 'v255'.
May 13 12:54:22.538929 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:54:22.542206 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 12:54:22.570870 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
May 13 12:54:22.600841 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 12:54:22.603338 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 12:54:22.690447 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:54:22.695906 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 12:54:22.727880 kernel: cryptd: max_cpu_qlen set to 1000
May 13 12:54:22.729880 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 13 12:54:22.732583 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 12:54:22.734887 kernel: AES CTR mode by8 optimization enabled
May 13 12:54:22.741978 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 12:54:22.742058 kernel: GPT:9289727 != 19775487
May 13 12:54:22.742073 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 12:54:22.742087 kernel: GPT:9289727 != 19775487
May 13 12:54:22.742100 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 12:54:22.742121 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:54:22.766890 kernel: libata version 3.00 loaded.
May 13 12:54:22.769909 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 13 12:54:22.783886 kernel: ahci 0000:00:1f.2: version 3.0
May 13 12:54:22.784094 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 13 12:54:22.787397 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 13 12:54:22.787558 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 13 12:54:22.787696 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 13 12:54:22.790035 kernel: scsi host0: ahci
May 13 12:54:22.791422 kernel: scsi host1: ahci
May 13 12:54:22.792627 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 12:54:22.795254 kernel: scsi host2: ahci
May 13 12:54:22.795425 kernel: scsi host3: ahci
May 13 12:54:22.795561 kernel: scsi host4: ahci
May 13 12:54:22.797238 kernel: scsi host5: ahci
May 13 12:54:22.797399 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0
May 13 12:54:22.797411 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0
May 13 12:54:22.799069 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0
May 13 12:54:22.799082 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0
May 13 12:54:22.801765 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0
May 13 12:54:22.801778 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0
May 13 12:54:22.814385 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 12:54:22.823031 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 12:54:22.823110 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
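[Editor's note: the GPT complaints above indicate a size mismatch, not corruption: the backup GPT header on disk sits at LBA 9289727, while on a disk of 19775488 512-byte logical blocks it belongs at the last LBA, 19775487. In other words, the image was built for a smaller disk and the virtual disk was later grown. The arithmetic, as an illustrative worked example:]

    blocks = 19775488                    # virtio_blk: "[vda] 19775488 512-byte logical blocks"
    print(blocks - 1)                    # 19775487: LBA where GPT expects its backup header
    old_alt = 9289727                    # LBA named in "GPT:9289727 != 19775487"
    print((old_alt + 1) * 512 / 2**30)   # ~4.43 GiB: apparent size of the original image

[The disk-uuid.service run shortly after this ("Primary Header is updated. ... Secondary Header is updated.") rewrites the GPT, which is presumably why the warning does not recur on the later partition rescans.]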
May 13 12:54:22.834395 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 12:54:22.836944 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 12:54:22.838089 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 12:54:22.838140 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:54:22.842673 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:54:22.850456 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:54:22.852787 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 12:54:22.861476 disk-uuid[629]: Primary Header is updated.
May 13 12:54:22.861476 disk-uuid[629]: Secondary Entries is updated.
May 13 12:54:22.861476 disk-uuid[629]: Secondary Header is updated.
May 13 12:54:22.864887 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:54:22.869890 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:54:22.872824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:54:23.113718 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 13 12:54:23.113800 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 13 12:54:23.113814 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 13 12:54:23.113826 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 13 12:54:23.114892 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 13 12:54:23.115906 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 13 12:54:23.117296 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 13 12:54:23.117310 kernel: ata3.00: applying bridge limits
May 13 12:54:23.117897 kernel: ata3.00: configured for UDMA/100
May 13 12:54:23.119893 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 13 12:54:23.164909 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 13 12:54:23.165246 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 13 12:54:23.190979 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 13 12:54:23.639469 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 12:54:23.640073 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 12:54:23.642924 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:54:23.643166 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 12:54:23.648437 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 12:54:23.680173 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 12:54:23.870884 disk-uuid[633]: The operation has completed successfully.
May 13 12:54:23.872277 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:54:23.903173 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 12:54:23.903304 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 12:54:23.935485 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 12:54:23.965715 sh[665]: Success
May 13 12:54:23.986244 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 12:54:23.986326 kernel: device-mapper: uevent: version 1.0.3
May 13 12:54:23.986343 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 13 12:54:23.997912 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 13 12:54:24.029605 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 12:54:24.032724 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 12:54:24.056834 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 12:54:24.064086 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 13 12:54:24.064117 kernel: BTRFS: device fsid 3042589c-b63f-42f0-9a6f-a4369b1889f9 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (677)
May 13 12:54:24.065532 kernel: BTRFS info (device dm-0): first mount of filesystem 3042589c-b63f-42f0-9a6f-a4369b1889f9
May 13 12:54:24.065552 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 12:54:24.067299 kernel: BTRFS info (device dm-0): using free-space-tree
May 13 12:54:24.071472 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 12:54:24.072779 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 13 12:54:24.074459 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 12:54:24.075339 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 12:54:24.076963 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 12:54:24.107915 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (713)
May 13 12:54:24.110429 kernel: BTRFS info (device vda6): first mount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1
May 13 12:54:24.110453 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 12:54:24.110463 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:54:24.116892 kernel: BTRFS info (device vda6): last unmount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1
May 13 12:54:24.118183 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 12:54:24.121378 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 12:54:24.199731 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 12:54:24.204705 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 12:54:24.213045 ignition[759]: Ignition 2.21.0
May 13 12:54:24.213058 ignition[759]: Stage: fetch-offline
May 13 12:54:24.213088 ignition[759]: no configs at "/usr/lib/ignition/base.d"
May 13 12:54:24.213098 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:54:24.213187 ignition[759]: parsed url from cmdline: ""
May 13 12:54:24.213191 ignition[759]: no config URL provided
May 13 12:54:24.213195 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
May 13 12:54:24.213203 ignition[759]: no config at "/usr/lib/ignition/user.ign"
May 13 12:54:24.213225 ignition[759]: op(1): [started] loading QEMU firmware config module
May 13 12:54:24.213230 ignition[759]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 12:54:24.219591 ignition[759]: op(1): [finished] loading QEMU firmware config module
May 13 12:54:24.245276 systemd-networkd[853]: lo: Link UP
May 13 12:54:24.245287 systemd-networkd[853]: lo: Gained carrier
May 13 12:54:24.246757 systemd-networkd[853]: Enumeration completed
May 13 12:54:24.246874 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 12:54:24.247797 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:54:24.247802 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 12:54:24.248329 systemd[1]: Reached target network.target - Network.
May 13 12:54:24.250962 systemd-networkd[853]: eth0: Link UP
May 13 12:54:24.250966 systemd-networkd[853]: eth0: Gained carrier
May 13 12:54:24.250987 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:54:24.272878 ignition[759]: parsing config with SHA512: 4fbbb03e79b8b9760d62fc84315403b4c84e754adf8fcd6ee9275926d151a908f64f7c782055eb986934ce0566432f090f982d3ae928560bd7e79bdc31f13220
May 13 12:54:24.274949 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.90/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 12:54:24.278448 unknown[759]: fetched base config from "system"
May 13 12:54:24.278583 unknown[759]: fetched user config from "qemu"
May 13 12:54:24.278920 ignition[759]: fetch-offline: fetch-offline passed
May 13 12:54:24.278996 ignition[759]: Ignition finished successfully
May 13 12:54:24.282239 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 12:54:24.283620 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 12:54:24.284472 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 12:54:24.319545 ignition[860]: Ignition 2.21.0
May 13 12:54:24.319563 ignition[860]: Stage: kargs
May 13 12:54:24.319686 ignition[860]: no configs at "/usr/lib/ignition/base.d"
May 13 12:54:24.319695 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:54:24.320929 ignition[860]: kargs: kargs passed
May 13 12:54:24.320995 ignition[860]: Ignition finished successfully
May 13 12:54:24.325711 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 12:54:24.326806 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
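The fetch-offline stage above found no config on the kernel command line or at /usr/lib/ignition/user.ign, then loaded the qemu_fw_cfg module and pulled a user config from QEMU's firmware config device ('fetched user config from "qemu"'); the SHA512 line is Ignition logging a digest of the config it parsed. On QEMU that config is typically injected at the well-known opt/com.coreos/config key (file name illustrative):

    qemu-system-x86_64 ... -fw_cfg name=opt/com.coreos/config,file=./config.ign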
May 13 12:54:24.365451 ignition[868]: Ignition 2.21.0
May 13 12:54:24.365465 ignition[868]: Stage: disks
May 13 12:54:24.365620 ignition[868]: no configs at "/usr/lib/ignition/base.d"
May 13 12:54:24.365633 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:54:24.367748 ignition[868]: disks: disks passed
May 13 12:54:24.367805 ignition[868]: Ignition finished successfully
May 13 12:54:24.370772 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 12:54:24.371070 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 12:54:24.373908 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 12:54:24.374121 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 12:54:24.374451 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 12:54:24.374775 systemd[1]: Reached target basic.target - Basic System.
May 13 12:54:24.382627 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 12:54:24.413843 systemd-resolved[261]: Detected conflict on linux IN A 10.0.0.90
May 13 12:54:24.413859 systemd-resolved[261]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
May 13 12:54:24.415713 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 13 12:54:24.424464 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 12:54:24.425614 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 12:54:24.531906 kernel: EXT4-fs (vda9): mounted filesystem ebf7ca75-051f-4154-b098-5ec24084105d r/w with ordered data mode. Quota mode: none.
May 13 12:54:24.532892 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 12:54:24.533567 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 12:54:24.537404 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 12:54:24.539561 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 12:54:24.540711 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 12:54:24.540768 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 12:54:24.540801 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 12:54:24.556478 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 12:54:24.561277 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (886)
May 13 12:54:24.558012 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 12:54:24.565093 kernel: BTRFS info (device vda6): first mount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1
May 13 12:54:24.565124 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 12:54:24.565138 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:54:24.569671 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
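The systemd-resolved lines above are LLMNR conflict handling: another host on 10.0.0.0/16 was already announcing the default hostname 'linux', so resolved republished this machine as 'linux6'. Giving each node a unique name avoids the rename; once booted, that is one command (the name is illustrative):

    hostnamectl set-hostname node1

The fsck and EXT4 lines that follow are the ROOT filesystem (vda9) being checked and mounted read-write at /sysroot.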
May 13 12:54:24.595200 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory
May 13 12:54:24.600163 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory
May 13 12:54:24.603929 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory
May 13 12:54:24.608103 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 12:54:24.687745 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 12:54:24.688746 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 12:54:24.691894 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 12:54:24.718903 kernel: BTRFS info (device vda6): last unmount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1
May 13 12:54:24.732212 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 12:54:24.745894 ignition[1000]: INFO : Ignition 2.21.0
May 13 12:54:24.745894 ignition[1000]: INFO : Stage: mount
May 13 12:54:24.747887 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:54:24.747887 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:54:24.750306 ignition[1000]: INFO : mount: mount passed
May 13 12:54:24.750306 ignition[1000]: INFO : Ignition finished successfully
May 13 12:54:24.754257 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 12:54:24.756351 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 12:54:25.062926 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 12:54:25.064469 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 12:54:25.094887 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1012)
May 13 12:54:25.094935 kernel: BTRFS info (device vda6): first mount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1
May 13 12:54:25.094946 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 12:54:25.096395 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:54:25.099746 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
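The four "cut: ... No such file or directory" lines above are harmless on a first boot: initrd-setup-root inspects /sysroot/etc/passwd, group, shadow, and gshadow before seeding them, and on a pristine ROOT none of them exist yet. A guess at the kind of probe that produces these messages (the actual script may differ):

    for f in passwd group shadow gshadow; do
        cut -d: -f1 "/sysroot/etc/$f"   # lists existing entries; fails if the file is absent
    done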
May 13 12:54:25.136913 ignition[1029]: INFO : Ignition 2.21.0
May 13 12:54:25.136913 ignition[1029]: INFO : Stage: files
May 13 12:54:25.138851 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:54:25.138851 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:54:25.142555 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping
May 13 12:54:25.144528 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 12:54:25.144528 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 12:54:25.148647 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 12:54:25.150037 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 12:54:25.151612 unknown[1029]: wrote ssh authorized keys file for user: core
May 13 12:54:25.152739 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 12:54:25.154434 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 13 12:54:25.154434 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 13 12:54:25.250067 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 12:54:25.458046 systemd-networkd[853]: eth0: Gained IPv6LL
May 13 12:54:25.486841 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 13 12:54:25.489374 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 12:54:25.489374 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 12:54:25.489374 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 12:54:25.489374 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 12:54:25.489374 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 12:54:25.489374 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 12:54:25.489374 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 12:54:25.489374 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 12:54:25.505315 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 12:54:25.505315 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 12:54:25.505315 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 12:54:25.505315 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 12:54:25.505315 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 12:54:25.505315 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 13 12:54:25.771947 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 12:54:26.084616 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 12:54:26.084616 ignition[1029]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 12:54:26.098471 ignition[1029]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 12:54:26.109476 ignition[1029]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 12:54:26.109476 ignition[1029]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 12:54:26.109476 ignition[1029]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 13 12:54:26.114075 ignition[1029]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 12:54:26.114075 ignition[1029]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 12:54:26.114075 ignition[1029]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 13 12:54:26.114075 ignition[1029]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 13 12:54:26.134356 ignition[1029]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 12:54:26.138970 ignition[1029]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 12:54:26.140678 ignition[1029]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 12:54:26.140678 ignition[1029]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 13 12:54:26.140678 ignition[1029]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 13 12:54:26.140678 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 12:54:26.140678 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 12:54:26.140678 ignition[1029]: INFO : files: files passed
May 13 12:54:26.140678 ignition[1029]: INFO : Ignition finished successfully
May 13 12:54:26.145290 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 12:54:26.150959 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
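Everything the files stage did above (the core user's SSH keys, the Helm tarball, the sysext symlink, the two unit presets) is driven by the user config fetched earlier. A Butane sketch of roughly that shape, assuming the original config was equivalent (contents abridged):

    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
        - name: coreos-metadata.service
          enabled: false

Butane compiles this to the Ignition JSON the files stage executes; "enabled: false" is what produces the "setting preset to disabled ... removing enablement symlink(s)" lines.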
May 13 12:54:26.152793 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 12:54:26.166246 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 12:54:26.166516 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 12:54:26.168184 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 12:54:26.170707 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:54:26.170707 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:54:26.174872 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:54:26.178048 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 12:54:26.180673 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 12:54:26.182931 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 12:54:26.253973 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 12:54:26.254112 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 12:54:26.256451 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 12:54:26.257499 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 12:54:26.259433 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 12:54:26.262790 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 12:54:26.294288 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 12:54:26.296028 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 12:54:26.322279 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 12:54:26.323606 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:54:26.325957 systemd[1]: Stopped target timers.target - Timer Units.
May 13 12:54:26.327101 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 12:54:26.327217 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 12:54:26.329126 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 12:54:26.329451 systemd[1]: Stopped target basic.target - Basic System.
May 13 12:54:26.329786 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 12:54:26.330289 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 12:54:26.330630 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 12:54:26.331141 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 13 12:54:26.331498 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 12:54:26.331805 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 12:54:26.332319 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 12:54:26.332656 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 12:54:26.333144 systemd[1]: Stopped target swap.target - Swaps.
May 13 12:54:26.333448 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 12:54:26.333549 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 12:54:26.353801 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 12:54:26.354010 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:54:26.357025 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 12:54:26.358035 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:54:26.359284 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 12:54:26.359435 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 12:54:26.361649 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 12:54:26.361797 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 12:54:26.364720 systemd[1]: Stopped target paths.target - Path Units.
May 13 12:54:26.367392 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 12:54:26.372942 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:54:26.373143 systemd[1]: Stopped target slices.target - Slice Units.
May 13 12:54:26.375725 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 12:54:26.379056 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 12:54:26.379179 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 12:54:26.381858 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 12:54:26.382010 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 12:54:26.384786 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 12:54:26.385096 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 12:54:26.388566 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 12:54:26.388713 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 12:54:26.392299 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 12:54:26.392374 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 12:54:26.392476 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:54:26.394873 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 12:54:26.396208 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 12:54:26.396356 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:54:26.398237 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 12:54:26.398339 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 12:54:26.404660 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 12:54:26.407999 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 12:54:26.422310 ignition[1085]: INFO : Ignition 2.21.0
May 13 12:54:26.423483 ignition[1085]: INFO : Stage: umount
May 13 12:54:26.423483 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:54:26.423483 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:54:26.426493 ignition[1085]: INFO : umount: umount passed
May 13 12:54:26.426493 ignition[1085]: INFO : Ignition finished successfully
May 13 12:54:26.427004 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 12:54:26.427137 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 12:54:26.429411 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 12:54:26.430229 systemd[1]: Stopped target network.target - Network.
May 13 12:54:26.430720 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 12:54:26.430780 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 12:54:26.431588 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 12:54:26.431630 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 12:54:26.432087 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 12:54:26.432133 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 12:54:26.432413 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 12:54:26.432451 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 12:54:26.432946 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 12:54:26.433327 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 12:54:26.451062 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 12:54:26.451204 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 12:54:26.456305 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 12:54:26.456618 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 12:54:26.456765 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 12:54:26.461726 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 12:54:26.462460 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 13 12:54:26.465676 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 12:54:26.465735 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:54:26.469633 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 12:54:26.470709 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 12:54:26.470763 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 12:54:26.472271 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 12:54:26.472319 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 12:54:26.474705 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 12:54:26.474752 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 12:54:26.477153 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 12:54:26.477197 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:54:26.480261 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 12:54:26.483875 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 12:54:26.483953 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 12:54:26.503792 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 12:54:26.503950 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 12:54:26.511726 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 12:54:26.511950 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:54:26.513143 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 12:54:26.513189 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 12:54:26.516221 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 12:54:26.516256 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:54:26.517255 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 12:54:26.517305 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 12:54:26.518082 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 12:54:26.518127 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 12:54:26.518766 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 12:54:26.518809 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 12:54:26.527527 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 12:54:26.528493 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 13 12:54:26.528553 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:54:26.535059 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 12:54:26.535114 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:54:26.538554 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 12:54:26.538610 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:54:26.543101 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 13 12:54:26.543164 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 12:54:26.543210 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 12:54:26.556263 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 12:54:26.556396 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 12:54:26.650463 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 12:54:26.651524 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 12:54:26.653648 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 12:54:26.655712 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 12:54:26.656700 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 12:54:26.659726 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 12:54:26.677669 systemd[1]: Switching root.
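"Switching root." is the hand-off from the initramfs to the real system: PID 1 moves the mount tree at /sysroot to / and re-executes itself, which is why the journal is stopped and restarted just below. What initrd-switch-root.service does is roughly equivalent to this single command (only sensible from an initrd context):

    systemctl switch-root /sysroot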
May 13 12:54:26.704144 systemd-journald[220]: Journal stopped
May 13 12:54:27.954420 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
May 13 12:54:27.954502 kernel: SELinux: policy capability network_peer_controls=1
May 13 12:54:27.954520 kernel: SELinux: policy capability open_perms=1
May 13 12:54:27.954541 kernel: SELinux: policy capability extended_socket_class=1
May 13 12:54:27.954555 kernel: SELinux: policy capability always_check_network=0
May 13 12:54:27.954569 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 12:54:27.954583 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 12:54:27.954597 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 12:54:27.954615 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 12:54:27.954628 kernel: SELinux: policy capability userspace_initial_context=0
May 13 12:54:27.954643 kernel: audit: type=1403 audit(1747140867.156:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 12:54:27.954658 systemd[1]: Successfully loaded SELinux policy in 54.250ms.
May 13 12:54:27.954687 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.354ms.
May 13 12:54:27.954710 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 12:54:27.954726 systemd[1]: Detected virtualization kvm.
May 13 12:54:27.954742 systemd[1]: Detected architecture x86-64.
May 13 12:54:27.954758 systemd[1]: Detected first boot.
May 13 12:54:27.954776 systemd[1]: Initializing machine ID from VM UUID.
May 13 12:54:27.954792 zram_generator::config[1129]: No configuration found.
May 13 12:54:27.954812 kernel: Guest personality initialized and is inactive
May 13 12:54:27.954827 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 13 12:54:27.954841 kernel: Initialized host personality
May 13 12:54:27.954856 kernel: NET: Registered PF_VSOCK protocol family
May 13 12:54:27.954895 systemd[1]: Populated /etc with preset unit settings.
May 13 12:54:27.954912 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 12:54:27.954929 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 12:54:27.954947 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 12:54:27.954963 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 12:54:27.954979 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 12:54:27.954994 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 12:54:27.955009 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 12:54:27.955024 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 12:54:27.955039 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 12:54:27.955057 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 12:54:27.955075 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 12:54:27.955091 systemd[1]: Created slice user.slice - User and Session Slice.
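The SELinux block above records the policy's capability flags as loaded. Once the policy is in place, each flag is exposed read-only through selinuxfs, so the same values can be inspected later, e.g. (assuming selinuxfs is mounted at the usual place):

    cat /sys/fs/selinux/policy_capabilities/network_peer_controls   # prints 1 on this boot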
May 13 12:54:27.955107 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:54:27.955123 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:54:27.955139 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 12:54:27.955155 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 12:54:27.955176 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 12:54:27.955215 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 12:54:27.955231 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 13 12:54:27.955247 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:54:27.955263 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 12:54:27.955279 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 12:54:27.955294 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 12:54:27.955310 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 12:54:27.955325 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 12:54:27.955341 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:54:27.955357 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 12:54:27.955375 systemd[1]: Reached target slices.target - Slice Units.
May 13 12:54:27.955389 systemd[1]: Reached target swap.target - Swaps.
May 13 12:54:27.955404 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 12:54:27.955419 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 12:54:27.955434 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 12:54:27.955449 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:54:27.955465 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 12:54:27.955481 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:54:27.955496 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 12:54:27.955522 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 12:54:27.955540 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 12:54:27.955556 systemd[1]: Mounting media.mount - External Media Directory...
May 13 12:54:27.955571 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:54:27.955587 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 12:54:27.955604 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 12:54:27.955622 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 12:54:27.955640 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 12:54:27.955658 systemd[1]: Reached target machines.target - Containers.
May 13 12:54:27.955674 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 12:54:27.955690 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:54:27.955705 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 12:54:27.955720 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 12:54:27.955736 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:54:27.955751 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 12:54:27.955767 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:54:27.955782 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 12:54:27.955799 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 12:54:27.955815 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 12:54:27.955830 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 12:54:27.955845 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 12:54:27.955861 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 12:54:27.955933 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 12:54:27.955950 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:54:27.955965 kernel: fuse: init (API version 7.41)
May 13 12:54:27.955984 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 12:54:27.955999 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 12:54:27.956014 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 12:54:27.956029 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 12:54:27.956044 kernel: loop: module loaded
May 13 12:54:27.956059 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 12:54:27.956075 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 12:54:27.956093 kernel: ACPI: bus type drm_connector registered
May 13 12:54:27.956108 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 12:54:27.956123 systemd[1]: Stopped verity-setup.service.
May 13 12:54:27.956138 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:54:27.956153 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 12:54:27.956177 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 12:54:27.956192 systemd[1]: Mounted media.mount - External Media Directory.
May 13 12:54:27.956209 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 12:54:27.956225 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 12:54:27.956241 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 12:54:27.956282 systemd-journald[1207]: Collecting audit messages is disabled.
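The modprobe@configfs, modprobe@dm_mod, and similar starts above are all instances of one template unit systemd ships, which is why each module gets its own short-lived service under the system-modprobe.slice created earlier. The upstream template looks roughly like this (reproduced from memory, treat it as a sketch):

    # modprobe@.service (template; %i is the instance name, e.g. "fuse")
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i

The leading "-" on ExecStart makes a missing module non-fatal, matching the clean "Deactivated successfully" results that follow.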
May 13 12:54:27.956313 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 12:54:27.956329 systemd-journald[1207]: Journal started
May 13 12:54:27.956358 systemd-journald[1207]: Runtime Journal (/run/log/journal/e5f8864271c84efca219c5d415798b09) is 6M, max 48.5M, 42.4M free.
May 13 12:54:27.701225 systemd[1]: Queued start job for default target multi-user.target.
May 13 12:54:27.714731 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 12:54:27.715217 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 12:54:27.958040 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 12:54:27.959691 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:54:27.961320 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 12:54:27.961590 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 12:54:27.963183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:54:27.963454 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:54:27.964999 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 12:54:27.965246 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 12:54:27.966650 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:54:27.966934 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:54:27.968511 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 12:54:27.968769 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 12:54:27.970444 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 12:54:27.970719 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 12:54:27.972353 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 12:54:27.974047 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:54:27.975721 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 12:54:27.977359 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 12:54:27.992148 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 12:54:27.995190 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 12:54:27.997511 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 12:54:27.998979 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 12:54:27.999019 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 12:54:28.001396 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 12:54:28.008024 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 12:54:28.009489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:54:28.011092 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 12:54:28.013948 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
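The journald lines above show the runtime journal in /run being capped (6M used, 48.5M max) before it is flushed to persistent storage further down; the out-of-order 12:54:27.70x entries are early messages journald back-fills once it starts. Both caps are tunable with standard journald.conf options; a hypothetical drop-in with illustrative values:

    # /etc/systemd/journald.conf.d/size.conf
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=200M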
May 13 12:54:28.015718 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 12:54:28.016979 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 12:54:28.019090 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 12:54:28.020994 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 12:54:28.025910 systemd-journald[1207]: Time spent on flushing to /var/log/journal/e5f8864271c84efca219c5d415798b09 is 30.508ms for 1062 entries.
May 13 12:54:28.025910 systemd-journald[1207]: System Journal (/var/log/journal/e5f8864271c84efca219c5d415798b09) is 8M, max 195.6M, 187.6M free.
May 13 12:54:28.072265 systemd-journald[1207]: Received client request to flush runtime journal.
May 13 12:54:28.072328 kernel: loop0: detected capacity change from 0 to 218376
May 13 12:54:28.072354 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 12:54:28.024255 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 12:54:28.026884 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 12:54:28.031515 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:54:28.034214 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 12:54:28.035507 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 12:54:28.045233 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 12:54:28.047053 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 12:54:28.051404 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 12:54:28.054257 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 12:54:28.075912 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 12:54:28.092062 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 12:54:28.095266 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 12:54:28.096234 kernel: loop1: detected capacity change from 0 to 113872
May 13 12:54:28.098327 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 12:54:28.124126 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
May 13 12:54:28.124477 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
May 13 12:54:28.129118 kernel: loop2: detected capacity change from 0 to 146240
May 13 12:54:28.130202 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:54:28.173912 kernel: loop3: detected capacity change from 0 to 218376
May 13 12:54:28.183080 kernel: loop4: detected capacity change from 0 to 113872
May 13 12:54:28.191924 kernel: loop5: detected capacity change from 0 to 146240
May 13 12:54:28.203705 (sd-merge)[1270]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 12:54:28.204273 (sd-merge)[1270]: Merged extensions into '/usr'.
May 13 12:54:28.210586 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 12:54:28.210603 systemd[1]: Reloading...
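The loop0 through loop5 capacity changes and the (sd-merge) lines are systemd-sysext at work: each .raw extension image (including the kubernetes image Ignition downloaded) is attached to a loop device and overlaid onto /usr, after which systemd reloads so the merged units become visible. For an image to merge it must carry an extension-release file whose name matches the image; a sketch of the expected layout, with illustrative contents:

    kubernetes-v1.32.0-x86-64.raw
    └── usr/lib/extension-release.d/extension-release.kubernetes
            ID=flatcar        # or ID=_any to match any host
            SYSEXT_LEVEL=1.0

systemd-sysext status and systemd-sysext refresh are the matching inspection and re-merge commands on a running system.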
May 13 12:54:28.269915 zram_generator::config[1296]: No configuration found.
May 13 12:54:28.328482 ldconfig[1243]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 12:54:28.373788 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 12:54:28.454700 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 12:54:28.454812 systemd[1]: Reloading finished in 243 ms.
May 13 12:54:28.495135 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 12:54:28.496708 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 12:54:28.519664 systemd[1]: Starting ensure-sysext.service...
May 13 12:54:28.521754 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 12:54:28.541557 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 13 12:54:28.541595 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 13 12:54:28.541976 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 12:54:28.542227 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 12:54:28.543104 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 12:54:28.543363 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
May 13 12:54:28.543436 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
May 13 12:54:28.553967 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot.
May 13 12:54:28.553979 systemd-tmpfiles[1334]: Skipping /boot
May 13 12:54:28.557035 systemd[1]: Reload requested from client PID 1333 ('systemctl') (unit ensure-sysext.service)...
May 13 12:54:28.557052 systemd[1]: Reloading...
May 13 12:54:28.566097 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot.
May 13 12:54:28.566250 systemd-tmpfiles[1334]: Skipping /boot
May 13 12:54:28.609884 zram_generator::config[1364]: No configuration found.
May 13 12:54:28.693449 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 12:54:28.773667 systemd[1]: Reloading finished in 216 ms.
May 13 12:54:28.800799 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 12:54:28.824097 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:54:28.833700 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 12:54:28.836136 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 12:54:28.838482 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 12:54:28.844415 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 12:54:28.847304 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
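Both reloads print the same warning about docker.socket line 6 using the legacy /var/run/docker.sock path. systemd rewrites it to /run/docker.sock on the fly, but since the vendor unit lives on the read-only /usr, the clean fix on a machine you control would be a drop-in that resets the listener list (a hypothetical override):

    # /etc/systemd/system/docker.socket.d/modern-path.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

The empty ListenStream= clears the inherited value before the corrected path is added, the standard systemd list-reset idiom.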
May 13 12:54:28.853123 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 12:54:28.857436 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:54:28.857609 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:54:28.859612 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:54:28.862613 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:54:28.865855 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 12:54:28.866035 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:54:28.866140 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:54:28.869053 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 12:54:28.870115 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:54:28.871855 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:54:28.872434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:54:28.880014 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 12:54:28.882139 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:54:28.882342 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:54:28.885026 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 12:54:28.885383 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 12:54:28.890962 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:54:28.891218 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:54:28.891956 systemd-udevd[1405]: Using default interface naming scheme 'v255'.
May 13 12:54:28.894770 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:54:28.896150 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:54:28.896453 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:54:28.896675 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 12:54:28.898791 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 12:54:28.900060 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:54:28.904780 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 12:54:28.906751 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:54:28.907117 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:54:28.912586 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:54:28.912987 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:54:28.914671 augenrules[1435]: No rules
May 13 12:54:28.921078 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:54:28.924086 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 12:54:28.928121 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:54:28.930937 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 12:54:28.932752 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:54:28.932901 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:54:28.933048 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:54:28.934037 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:54:28.937993 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 12:54:28.942604 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 12:54:28.944354 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 12:54:28.947684 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 12:54:28.950490 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 12:54:28.957334 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:54:28.965412 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:54:28.972981 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:54:28.973207 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:54:28.974880 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 12:54:28.975192 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 12:54:28.976757 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 12:54:28.977090 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 12:54:28.982455 systemd[1]: Finished ensure-sysext.service.
May 13 12:54:28.995252 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 12:54:28.996937 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 12:54:28.996997 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 12:54:29.000292 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 12:54:29.001891 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 12:54:29.026744 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 13 12:54:29.061739 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 12:54:29.064758 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 12:54:29.073814 systemd-resolved[1403]: Positive Trust Anchors: May 13 12:54:29.074105 systemd-resolved[1403]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 12:54:29.074141 systemd-resolved[1403]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 12:54:29.078188 systemd-resolved[1403]: Defaulting to hostname 'linux'. May 13 12:54:29.079886 kernel: mousedev: PS/2 mouse device common for all mice May 13 12:54:29.080072 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 12:54:29.081572 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 12:54:29.094098 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 12:54:29.101011 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 13 12:54:29.104889 kernel: ACPI: button: Power Button [PWRF] May 13 12:54:29.114265 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 13 12:54:29.114516 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 13 12:54:29.114674 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 13 12:54:29.143948 systemd-networkd[1488]: lo: Link UP May 13 12:54:29.143962 systemd-networkd[1488]: lo: Gained carrier May 13 12:54:29.145997 systemd-networkd[1488]: Enumeration completed May 13 12:54:29.146105 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 12:54:29.148978 systemd[1]: Reached target network.target - Network. May 13 12:54:29.151969 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 12:54:29.151981 systemd-networkd[1488]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 12:54:29.152220 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 12:54:29.153881 systemd-networkd[1488]: eth0: Link UP May 13 12:54:29.154057 systemd-networkd[1488]: eth0: Gained carrier May 13 12:54:29.154079 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 12:54:29.155963 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
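systemd-resolved logged its built-in DNSSEC anchors: the root-zone DS record as the positive trust anchor, plus negative anchors for private and special-use domains that are exempt from validation. Site-local overrides can be dropped into dnssec-trust-anchors.d; the file name and domain below are assumptions for illustration:

    # Hypothetical negative trust anchor: exempt an internal zone from
    # DNSSEC validation (one domain per line in a *.negative file,
    # per dnssec-trust-anchors.d(5)).
    sudo mkdir -p /etc/dnssec-trust-anchors.d
    echo 'example.internal' | \
      sudo tee /etc/dnssec-trust-anchors.d/example.negative
    sudo systemctl restart systemd-resolved
    resolvectl status        # inspect the resolver state after the restart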
May 13 12:54:29.172918 systemd-networkd[1488]: eth0: DHCPv4 address 10.0.0.90/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 12:54:29.176401 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 12:54:30.349308 systemd-resolved[1403]: Clock change detected. Flushing caches. May 13 12:54:30.349345 systemd-timesyncd[1489]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 12:54:30.349383 systemd-timesyncd[1489]: Initial clock synchronization to Tue 2025-05-13 12:54:30.349272 UTC. May 13 12:54:30.349418 systemd[1]: Reached target sysinit.target - System Initialization. May 13 12:54:30.350589 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 12:54:30.351876 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 12:54:30.353231 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 13 12:54:30.354478 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 12:54:30.356194 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 12:54:30.356224 systemd[1]: Reached target paths.target - Path Units. May 13 12:54:30.357164 systemd[1]: Reached target time-set.target - System Time Set. May 13 12:54:30.358348 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 12:54:30.359542 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 12:54:30.360781 systemd[1]: Reached target timers.target - Timer Units. May 13 12:54:30.363841 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 12:54:30.366638 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 12:54:30.370867 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 12:54:30.372353 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 12:54:30.373623 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 12:54:30.385078 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 12:54:30.386715 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 12:54:30.390169 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 12:54:30.391577 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 12:54:30.403504 systemd[1]: Reached target sockets.target - Socket Units. May 13 12:54:30.404641 systemd[1]: Reached target basic.target - Basic System. May 13 12:54:30.405718 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 12:54:30.405857 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 12:54:30.409405 systemd[1]: Starting containerd.service - containerd container runtime... May 13 12:54:30.412165 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 12:54:30.430492 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 12:54:30.433448 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
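As the log notes, eth0 was matched by the catch-all /usr/lib/systemd/network/zz-default.network based on a "potentially unpredictable interface name", and then acquired 10.0.0.90/16 over DHCPv4. A minimal sketch of pinning that behaviour with an explicit profile; the file name is illustrative:

    # Hypothetical explicit profile; a lower-sorting name than
    # zz-default.network wins the match for eth0.
    sudo tee /etc/systemd/network/10-eth0.network <<'EOF'
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    EOF
    sudo networkctl reload
    networkctl status eth0   # should show the DHCPv4 lease details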
May 13 12:54:30.435812 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 12:54:30.436854 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 12:54:30.439717 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 13 12:54:30.447900 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 12:54:30.451442 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 12:54:30.456181 jq[1528]: false May 13 12:54:30.454400 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 12:54:30.457435 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 12:54:30.462167 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Refreshing passwd entry cache May 13 12:54:30.461206 oslogin_cache_refresh[1530]: Refreshing passwd entry cache May 13 12:54:30.467679 extend-filesystems[1529]: Found loop3 May 13 12:54:30.468596 extend-filesystems[1529]: Found loop4 May 13 12:54:30.468596 extend-filesystems[1529]: Found loop5 May 13 12:54:30.468596 extend-filesystems[1529]: Found sr0 May 13 12:54:30.468596 extend-filesystems[1529]: Found vda May 13 12:54:30.468596 extend-filesystems[1529]: Found vda1 May 13 12:54:30.468596 extend-filesystems[1529]: Found vda2 May 13 12:54:30.468596 extend-filesystems[1529]: Found vda3 May 13 12:54:30.468596 extend-filesystems[1529]: Found usr May 13 12:54:30.468596 extend-filesystems[1529]: Found vda4 May 13 12:54:30.499212 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 12:54:30.499238 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Failure getting users, quitting May 13 12:54:30.499238 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 13 12:54:30.499238 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Refreshing group entry cache May 13 12:54:30.499238 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Failure getting groups, quitting May 13 12:54:30.499238 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 13 12:54:30.471809 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 12:54:30.471028 oslogin_cache_refresh[1530]: Failure getting users, quitting May 13 12:54:30.499454 extend-filesystems[1529]: Found vda6 May 13 12:54:30.499454 extend-filesystems[1529]: Found vda7 May 13 12:54:30.499454 extend-filesystems[1529]: Found vda9 May 13 12:54:30.499454 extend-filesystems[1529]: Checking size of /dev/vda9 May 13 12:54:30.499454 extend-filesystems[1529]: Resized partition /dev/vda9 May 13 12:54:30.473787 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 12:54:30.471043 oslogin_cache_refresh[1530]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 13 12:54:30.507422 extend-filesystems[1545]: resize2fs 1.47.2 (1-Jan-2025) May 13 12:54:30.474443 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 13 12:54:30.471090 oslogin_cache_refresh[1530]: Refreshing group entry cache May 13 12:54:30.475520 systemd[1]: Starting update-engine.service - Update Engine... May 13 12:54:30.481299 oslogin_cache_refresh[1530]: Failure getting groups, quitting May 13 12:54:30.482409 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 12:54:30.481313 oslogin_cache_refresh[1530]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 13 12:54:30.492469 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 12:54:30.494808 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 12:54:30.495037 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 12:54:30.495577 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 13 12:54:30.495799 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 13 12:54:30.511635 jq[1543]: true May 13 12:54:30.514728 systemd[1]: motdgen.service: Deactivated successfully. May 13 12:54:30.515000 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 12:54:30.516592 kernel: kvm_amd: TSC scaling supported May 13 12:54:30.516630 kernel: kvm_amd: Nested Virtualization enabled May 13 12:54:30.516643 kernel: kvm_amd: Nested Paging enabled May 13 12:54:30.516655 kernel: kvm_amd: LBR virtualization supported May 13 12:54:30.516669 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 13 12:54:30.517646 kernel: kvm_amd: Virtual GIF supported May 13 12:54:30.519575 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 12:54:30.520666 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 12:54:30.520591 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 12:54:30.530176 update_engine[1541]: I20250513 12:54:30.527889 1541 main.cc:92] Flatcar Update Engine starting May 13 12:54:30.544336 (ntainerd)[1557]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 12:54:30.547921 jq[1556]: true May 13 12:54:30.559726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 12:54:30.562499 extend-filesystems[1545]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 12:54:30.562499 extend-filesystems[1545]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 12:54:30.562499 extend-filesystems[1545]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 12:54:30.568499 extend-filesystems[1529]: Resized filesystem in /dev/vda9 May 13 12:54:30.564985 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 12:54:30.565272 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 12:54:30.570043 tar[1553]: linux-amd64/LICENSE May 13 12:54:30.571159 tar[1553]: linux-amd64/helm May 13 12:54:30.593151 dbus-daemon[1524]: [system] SELinux support is enabled May 13 12:54:30.594912 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 12:54:30.598126 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 12:54:30.598166 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
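extend-filesystems grew the mounted root filesystem on /dev/vda9 online, from 553472 to 1864699 4k blocks, using resize2fs 1.47.2. The equivalent manual steps, assuming an ext4 root on /dev/vda9 as logged:

    # Online ext4 grow: the underlying partition must already span the
    # new space; resize2fs then extends the filesystem while mounted.
    sudo resize2fs /dev/vda9
    df -h /                  # confirm the enlarged root filesystem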
May 13 12:54:30.599493 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 12:54:30.599510 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 12:54:30.610451 systemd[1]: Started update-engine.service - Update Engine. May 13 12:54:30.611480 update_engine[1541]: I20250513 12:54:30.610654 1541 update_check_scheduler.cc:74] Next update check in 10m7s May 13 12:54:30.612863 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 12:54:30.630161 kernel: EDAC MC: Ver: 3.0.0 May 13 12:54:30.635583 bash[1590]: Updated "/home/core/.ssh/authorized_keys" May 13 12:54:30.638816 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 12:54:30.639364 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 12:54:30.699864 systemd-logind[1539]: Watching system buttons on /dev/input/event2 (Power Button) May 13 12:54:30.699895 systemd-logind[1539]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 12:54:30.700359 systemd-logind[1539]: New seat seat0. May 13 12:54:30.701493 systemd[1]: Started systemd-logind.service - User Login Management. May 13 12:54:30.707813 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 12:54:30.712362 locksmithd[1586]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 12:54:30.770669 containerd[1557]: time="2025-05-13T12:54:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 12:54:30.772161 containerd[1557]: time="2025-05-13T12:54:30.772050366Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 13 12:54:30.779751 containerd[1557]: time="2025-05-13T12:54:30.779727877Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.687µs" May 13 12:54:30.779809 containerd[1557]: time="2025-05-13T12:54:30.779795624Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 12:54:30.779863 containerd[1557]: time="2025-05-13T12:54:30.779851960Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 12:54:30.780061 containerd[1557]: time="2025-05-13T12:54:30.780044180Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 12:54:30.780126 containerd[1557]: time="2025-05-13T12:54:30.780113751Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 12:54:30.780196 containerd[1557]: time="2025-05-13T12:54:30.780184724Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 12:54:30.780314 containerd[1557]: time="2025-05-13T12:54:30.780298367Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 12:54:30.780360 containerd[1557]: time="2025-05-13T12:54:30.780349673Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs 
type=io.containerd.snapshotter.v1 May 13 12:54:30.780632 containerd[1557]: time="2025-05-13T12:54:30.780613388Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 12:54:30.780681 containerd[1557]: time="2025-05-13T12:54:30.780670255Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 12:54:30.780732 containerd[1557]: time="2025-05-13T12:54:30.780720569Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 12:54:30.780782 containerd[1557]: time="2025-05-13T12:54:30.780770132Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 12:54:30.780915 containerd[1557]: time="2025-05-13T12:54:30.780901468Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 12:54:30.781213 containerd[1557]: time="2025-05-13T12:54:30.781197173Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 12:54:30.781283 containerd[1557]: time="2025-05-13T12:54:30.781270110Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 12:54:30.781337 containerd[1557]: time="2025-05-13T12:54:30.781325714Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 12:54:30.781418 containerd[1557]: time="2025-05-13T12:54:30.781405874Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 12:54:30.781867 containerd[1557]: time="2025-05-13T12:54:30.781823528Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 12:54:30.781947 containerd[1557]: time="2025-05-13T12:54:30.781927833Z" level=info msg="metadata content store policy set" policy=shared May 13 12:54:30.786835 containerd[1557]: time="2025-05-13T12:54:30.786805262Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 12:54:30.786871 containerd[1557]: time="2025-05-13T12:54:30.786848914Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 12:54:30.786871 containerd[1557]: time="2025-05-13T12:54:30.786864493Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 12:54:30.786963 containerd[1557]: time="2025-05-13T12:54:30.786876265Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 12:54:30.786963 containerd[1557]: time="2025-05-13T12:54:30.786889620Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 12:54:30.786963 containerd[1557]: time="2025-05-13T12:54:30.786899399Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 12:54:30.786963 containerd[1557]: time="2025-05-13T12:54:30.786916811Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service 
type=io.containerd.service.v1 May 13 12:54:30.786963 containerd[1557]: time="2025-05-13T12:54:30.786928423Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 12:54:30.786963 containerd[1557]: time="2025-05-13T12:54:30.786939724Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 12:54:30.786963 containerd[1557]: time="2025-05-13T12:54:30.786949062Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 12:54:30.786963 containerd[1557]: time="2025-05-13T12:54:30.786958109Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 12:54:30.787112 containerd[1557]: time="2025-05-13T12:54:30.786971354Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 12:54:30.787112 containerd[1557]: time="2025-05-13T12:54:30.787094775Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 12:54:30.787160 containerd[1557]: time="2025-05-13T12:54:30.787128188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 12:54:30.787181 containerd[1557]: time="2025-05-13T12:54:30.787162101Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 12:54:30.787476 containerd[1557]: time="2025-05-13T12:54:30.787448158Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 12:54:30.787518 containerd[1557]: time="2025-05-13T12:54:30.787476361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 12:54:30.787518 containerd[1557]: time="2025-05-13T12:54:30.787496198Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 12:54:30.787518 containerd[1557]: time="2025-05-13T12:54:30.787510866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 12:54:30.787577 containerd[1557]: time="2025-05-13T12:54:30.787524070Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 12:54:30.787577 containerd[1557]: time="2025-05-13T12:54:30.787535983Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 12:54:30.787577 containerd[1557]: time="2025-05-13T12:54:30.787549238Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 12:54:30.787577 containerd[1557]: time="2025-05-13T12:54:30.787564556Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 12:54:30.788168 containerd[1557]: time="2025-05-13T12:54:30.787646280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 12:54:30.788168 containerd[1557]: time="2025-05-13T12:54:30.787666568Z" level=info msg="Start snapshots syncer" May 13 12:54:30.788168 containerd[1557]: time="2025-05-13T12:54:30.787733092Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 12:54:30.789123 containerd[1557]: time="2025-05-13T12:54:30.789080710Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 12:54:30.789275 containerd[1557]: time="2025-05-13T12:54:30.789260287Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 12:54:30.790084 containerd[1557]: time="2025-05-13T12:54:30.790063053Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 12:54:30.790282 containerd[1557]: time="2025-05-13T12:54:30.790265372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 12:54:30.790343 containerd[1557]: time="2025-05-13T12:54:30.790331496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 12:54:30.790400 containerd[1557]: time="2025-05-13T12:54:30.790388383Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 12:54:30.790451 containerd[1557]: time="2025-05-13T12:54:30.790440591Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 12:54:30.790499 containerd[1557]: time="2025-05-13T12:54:30.790488360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 12:54:30.790542 containerd[1557]: time="2025-05-13T12:54:30.790532103Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 12:54:30.790588 containerd[1557]: time="2025-05-13T12:54:30.790578049Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 12:54:30.790647 containerd[1557]: time="2025-05-13T12:54:30.790636839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 12:54:30.790697 containerd[1557]: 
time="2025-05-13T12:54:30.790687193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 12:54:30.790751 containerd[1557]: time="2025-05-13T12:54:30.790739482Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 12:54:30.791350 containerd[1557]: time="2025-05-13T12:54:30.791330941Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 12:54:30.791410 containerd[1557]: time="2025-05-13T12:54:30.791397175Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 12:54:30.791453 containerd[1557]: time="2025-05-13T12:54:30.791442440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 12:54:30.791498 containerd[1557]: time="2025-05-13T12:54:30.791487284Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 12:54:30.792742 containerd[1557]: time="2025-05-13T12:54:30.791537078Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 12:54:30.792742 containerd[1557]: time="2025-05-13T12:54:30.791550743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 12:54:30.792742 containerd[1557]: time="2025-05-13T12:54:30.791561233Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 12:54:30.792742 containerd[1557]: time="2025-05-13T12:54:30.791579818Z" level=info msg="runtime interface created" May 13 12:54:30.792742 containerd[1557]: time="2025-05-13T12:54:30.791584917Z" level=info msg="created NRI interface" May 13 12:54:30.792742 containerd[1557]: time="2025-05-13T12:54:30.791593223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 12:54:30.792742 containerd[1557]: time="2025-05-13T12:54:30.791607359Z" level=info msg="Connect containerd service" May 13 12:54:30.792742 containerd[1557]: time="2025-05-13T12:54:30.791633589Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 12:54:30.792742 containerd[1557]: time="2025-05-13T12:54:30.792379498Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 12:54:30.876024 containerd[1557]: time="2025-05-13T12:54:30.875302898Z" level=info msg="Start subscribing containerd event" May 13 12:54:30.876024 containerd[1557]: time="2025-05-13T12:54:30.875396083Z" level=info msg="Start recovering state" May 13 12:54:30.876024 containerd[1557]: time="2025-05-13T12:54:30.875510217Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 12:54:30.876024 containerd[1557]: time="2025-05-13T12:54:30.875534543Z" level=info msg="Start event monitor" May 13 12:54:30.876024 containerd[1557]: time="2025-05-13T12:54:30.875601739Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 13 12:54:30.876024 containerd[1557]: time="2025-05-13T12:54:30.875618190Z" level=info msg="Start cni network conf syncer for default" May 13 12:54:30.876024 containerd[1557]: time="2025-05-13T12:54:30.875627056Z" level=info msg="Start streaming server" May 13 12:54:30.876024 containerd[1557]: time="2025-05-13T12:54:30.875636704Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 12:54:30.876024 containerd[1557]: time="2025-05-13T12:54:30.875643207Z" level=info msg="runtime interface starting up..." May 13 12:54:30.876024 containerd[1557]: time="2025-05-13T12:54:30.875648757Z" level=info msg="starting plugins..." May 13 12:54:30.876024 containerd[1557]: time="2025-05-13T12:54:30.875664236Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 12:54:30.876024 containerd[1557]: time="2025-05-13T12:54:30.875849544Z" level=info msg="containerd successfully booted in 0.105902s" May 13 12:54:30.875960 systemd[1]: Started containerd.service - containerd container runtime. May 13 12:54:30.979937 sshd_keygen[1555]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 12:54:31.004344 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 12:54:31.007388 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 12:54:31.027274 systemd[1]: issuegen.service: Deactivated successfully. May 13 12:54:31.027554 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 12:54:31.030476 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 12:54:31.034291 tar[1553]: linux-amd64/README.md May 13 12:54:31.055545 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 12:54:31.057393 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 12:54:31.060956 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 12:54:31.063238 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 12:54:31.064684 systemd[1]: Reached target getty.target - Login Prompts. May 13 12:54:32.196336 systemd-networkd[1488]: eth0: Gained IPv6LL May 13 12:54:32.199830 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 12:54:32.201682 systemd[1]: Reached target network-online.target - Network is Online. May 13 12:54:32.204258 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 12:54:32.206652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:54:32.220815 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 12:54:32.239064 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 12:54:32.239372 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 12:54:32.240970 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 12:54:32.243625 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 12:54:32.876817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:54:32.878480 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 12:54:32.879754 systemd[1]: Startup finished in 2.858s (kernel) + 5.506s (initrd) + 4.604s (userspace) = 12.970s. 
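containerd booted successfully, but its CRI plugin logged "no network config found in /etc/cni/net.d" (the confDir from the CRI config above, with binDir /opt/cni/bin). Pod sandboxes need a CNI configuration before networking works; a minimal hypothetical bridge conflist, assuming the standard bridge/host-local/portmap plugins are installed under /opt/cni/bin (the name and subnet are placeholders):

    sudo tee /etc/cni/net.d/10-bridge.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
    # The "cni network conf syncer" started above watches this directory,
    # so no containerd restart should be needed.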
May 13 12:54:32.885479 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:54:33.276539 kubelet[1665]: E0513 12:54:33.276426 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:54:33.280836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:54:33.281083 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:54:33.281558 systemd[1]: kubelet.service: Consumed 914ms CPU time, 253M memory peak. May 13 12:54:36.433659 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 12:54:36.434894 systemd[1]: Started sshd@0-10.0.0.90:22-10.0.0.1:40816.service - OpenSSH per-connection server daemon (10.0.0.1:40816). May 13 12:54:36.499732 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 40816 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:54:36.501627 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:54:36.508027 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 12:54:36.509168 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 12:54:36.515893 systemd-logind[1539]: New session 1 of user core. May 13 12:54:36.534337 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 12:54:36.537476 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 12:54:36.565414 (systemd)[1682]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 12:54:36.567751 systemd-logind[1539]: New session c1 of user core. May 13 12:54:36.710572 systemd[1682]: Queued start job for default target default.target. May 13 12:54:36.721333 systemd[1682]: Created slice app.slice - User Application Slice. May 13 12:54:36.721356 systemd[1682]: Reached target paths.target - Paths. May 13 12:54:36.721392 systemd[1682]: Reached target timers.target - Timers. May 13 12:54:36.722911 systemd[1682]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 12:54:36.734062 systemd[1682]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 12:54:36.734199 systemd[1682]: Reached target sockets.target - Sockets. May 13 12:54:36.734237 systemd[1682]: Reached target basic.target - Basic System. May 13 12:54:36.734274 systemd[1682]: Reached target default.target - Main User Target. May 13 12:54:36.734304 systemd[1682]: Startup finished in 160ms. May 13 12:54:36.734673 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 12:54:36.744265 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 12:54:36.805352 systemd[1]: Started sshd@1-10.0.0.90:22-10.0.0.1:40830.service - OpenSSH per-connection server daemon (10.0.0.1:40830). May 13 12:54:36.862987 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 40830 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:54:36.864427 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:54:36.868693 systemd-logind[1539]: New session 2 of user core. 
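The kubelet exited with status 1 because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is written by `kubeadm init` or `kubeadm join`, so this failure is expected until one of those runs. For illustration only, a minimal hypothetical KubeletConfiguration consistent with the SystemdCgroup=true runc option in the containerd CRI config above; a real kubeadm-generated file carries many more fields:

    # Hypothetical stand-in for the file kubeadm would generate.
    sudo mkdir -p /var/lib/kubelet
    sudo tee /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Match containerd's runc SystemdCgroup=true from the CRI config.
    cgroupDriver: systemd
    EOF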
May 13 12:54:36.878270 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 12:54:36.929687 sshd[1695]: Connection closed by 10.0.0.1 port 40830 May 13 12:54:36.930021 sshd-session[1693]: pam_unix(sshd:session): session closed for user core May 13 12:54:36.941825 systemd[1]: sshd@1-10.0.0.90:22-10.0.0.1:40830.service: Deactivated successfully. May 13 12:54:36.943551 systemd[1]: session-2.scope: Deactivated successfully. May 13 12:54:36.944319 systemd-logind[1539]: Session 2 logged out. Waiting for processes to exit. May 13 12:54:36.947033 systemd[1]: Started sshd@2-10.0.0.90:22-10.0.0.1:40836.service - OpenSSH per-connection server daemon (10.0.0.1:40836). May 13 12:54:36.947695 systemd-logind[1539]: Removed session 2. May 13 12:54:36.999386 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 40836 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:54:37.001135 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:54:37.006095 systemd-logind[1539]: New session 3 of user core. May 13 12:54:37.020296 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 12:54:37.071163 sshd[1703]: Connection closed by 10.0.0.1 port 40836 May 13 12:54:37.071555 sshd-session[1701]: pam_unix(sshd:session): session closed for user core May 13 12:54:37.093015 systemd[1]: sshd@2-10.0.0.90:22-10.0.0.1:40836.service: Deactivated successfully. May 13 12:54:37.094642 systemd[1]: session-3.scope: Deactivated successfully. May 13 12:54:37.095337 systemd-logind[1539]: Session 3 logged out. Waiting for processes to exit. May 13 12:54:37.097940 systemd[1]: Started sshd@3-10.0.0.90:22-10.0.0.1:40846.service - OpenSSH per-connection server daemon (10.0.0.1:40846). May 13 12:54:37.098673 systemd-logind[1539]: Removed session 3. May 13 12:54:37.159591 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 40846 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:54:37.161086 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:54:37.165635 systemd-logind[1539]: New session 4 of user core. May 13 12:54:37.175256 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 12:54:37.227855 sshd[1711]: Connection closed by 10.0.0.1 port 40846 May 13 12:54:37.228304 sshd-session[1709]: pam_unix(sshd:session): session closed for user core May 13 12:54:37.236878 systemd[1]: sshd@3-10.0.0.90:22-10.0.0.1:40846.service: Deactivated successfully. May 13 12:54:37.238522 systemd[1]: session-4.scope: Deactivated successfully. May 13 12:54:37.239249 systemd-logind[1539]: Session 4 logged out. Waiting for processes to exit. May 13 12:54:37.241884 systemd[1]: Started sshd@4-10.0.0.90:22-10.0.0.1:40856.service - OpenSSH per-connection server daemon (10.0.0.1:40856). May 13 12:54:37.242702 systemd-logind[1539]: Removed session 4. May 13 12:54:37.290562 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 40856 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:54:37.291962 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:54:37.296422 systemd-logind[1539]: New session 5 of user core. May 13 12:54:37.306326 systemd[1]: Started session-5.scope - Session 5 of User core. 
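Each accepted login above records the client key's SHA256 fingerprint (here KkL3F8ep... for user core). To confirm which authorized key that corresponds to, the fingerprints of the server-side authorized_keys entries can be printed and compared; the path assumes the core user's default location:

    # Print SHA256 fingerprints of the keys sshd will accept for 'core';
    # compare against the fingerprint logged by sshd above.
    ssh-keygen -lf /home/core/.ssh/authorized_keys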
May 13 12:54:37.366081 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 12:54:37.366472 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 12:54:37.383087 sudo[1720]: pam_unix(sudo:session): session closed for user root May 13 12:54:37.385091 sshd[1719]: Connection closed by 10.0.0.1 port 40856 May 13 12:54:37.385453 sshd-session[1717]: pam_unix(sshd:session): session closed for user core May 13 12:54:37.398732 systemd[1]: sshd@4-10.0.0.90:22-10.0.0.1:40856.service: Deactivated successfully. May 13 12:54:37.400673 systemd[1]: session-5.scope: Deactivated successfully. May 13 12:54:37.401416 systemd-logind[1539]: Session 5 logged out. Waiting for processes to exit. May 13 12:54:37.404528 systemd[1]: Started sshd@5-10.0.0.90:22-10.0.0.1:40870.service - OpenSSH per-connection server daemon (10.0.0.1:40870). May 13 12:54:37.405145 systemd-logind[1539]: Removed session 5. May 13 12:54:37.461038 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 40870 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:54:37.462537 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:54:37.466728 systemd-logind[1539]: New session 6 of user core. May 13 12:54:37.476307 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 12:54:37.528450 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 12:54:37.528759 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 12:54:37.855962 sudo[1730]: pam_unix(sudo:session): session closed for user root May 13 12:54:37.862548 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 12:54:37.862847 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 12:54:37.873910 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 12:54:37.924839 augenrules[1752]: No rules May 13 12:54:37.926562 systemd[1]: audit-rules.service: Deactivated successfully. May 13 12:54:37.926836 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 12:54:37.927866 sudo[1729]: pam_unix(sudo:session): session closed for user root May 13 12:54:37.929330 sshd[1728]: Connection closed by 10.0.0.1 port 40870 May 13 12:54:37.929620 sshd-session[1726]: pam_unix(sshd:session): session closed for user core May 13 12:54:37.946702 systemd[1]: sshd@5-10.0.0.90:22-10.0.0.1:40870.service: Deactivated successfully. May 13 12:54:37.948407 systemd[1]: session-6.scope: Deactivated successfully. May 13 12:54:37.949094 systemd-logind[1539]: Session 6 logged out. Waiting for processes to exit. May 13 12:54:37.951787 systemd[1]: Started sshd@6-10.0.0.90:22-10.0.0.1:40886.service - OpenSSH per-connection server daemon (10.0.0.1:40886). May 13 12:54:37.952400 systemd-logind[1539]: Removed session 6. May 13 12:54:38.005319 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 40886 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:54:38.006583 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:54:38.010703 systemd-logind[1539]: New session 7 of user core. May 13 12:54:38.024266 systemd[1]: Started session-7.scope - Session 7 of User core. 
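The session-6 sudo entries show the core user running setenforce, rm, and systemctl restart as root with no password prompt, which on Flatcar normally comes from a shipped sudoers policy. A sketch of such a policy as a drop-in; the path, file name, and exact rule are assumptions, not read from this host:

    # Hypothetical sudoers drop-in; always syntax-check before relying on it.
    echo 'core ALL=(ALL) NOPASSWD: ALL' | \
      sudo tee /etc/sudoers.d/99-core
    sudo visudo -c -f /etc/sudoers.d/99-core   # validate the file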
May 13 12:54:38.078033 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 12:54:38.078455 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 12:54:38.377938 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 12:54:38.397487 (dockerd)[1784]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 12:54:38.618469 dockerd[1784]: time="2025-05-13T12:54:38.618401708Z" level=info msg="Starting up" May 13 12:54:38.619988 dockerd[1784]: time="2025-05-13T12:54:38.619961364Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 12:54:38.693094 dockerd[1784]: time="2025-05-13T12:54:38.692965499Z" level=info msg="Loading containers: start." May 13 12:54:38.703168 kernel: Initializing XFRM netlink socket May 13 12:54:38.932746 systemd-networkd[1488]: docker0: Link UP May 13 12:54:38.936966 dockerd[1784]: time="2025-05-13T12:54:38.936929526Z" level=info msg="Loading containers: done." May 13 12:54:38.950910 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3452994034-merged.mount: Deactivated successfully. May 13 12:54:38.952768 dockerd[1784]: time="2025-05-13T12:54:38.952730353Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 12:54:38.952823 dockerd[1784]: time="2025-05-13T12:54:38.952793452Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 13 12:54:38.952941 dockerd[1784]: time="2025-05-13T12:54:38.952917134Z" level=info msg="Initializing buildkit" May 13 12:54:38.981895 dockerd[1784]: time="2025-05-13T12:54:38.981848521Z" level=info msg="Completed buildkit initialization" May 13 12:54:38.987061 dockerd[1784]: time="2025-05-13T12:54:38.987024770Z" level=info msg="Daemon has completed initialization" May 13 12:54:38.987154 dockerd[1784]: time="2025-05-13T12:54:38.987104439Z" level=info msg="API listen on /run/docker.sock" May 13 12:54:38.987229 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 12:54:39.709045 containerd[1557]: time="2025-05-13T12:54:39.709000841Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 13 12:54:40.245159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2512554321.mount: Deactivated successfully. 
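dockerd came up on overlay2 (with a warning that native diff is unavailable because CONFIG_OVERLAY_FS_REDIRECT_DIR is enabled) and began listening on /run/docker.sock. The reported driver and daemon version can be verified from the CLI:

    # Query the running daemon; expects 'overlay2' and '28.0.1'
    # per the daemon log above.
    docker info --format '{{.Driver}}'
    docker version --format '{{.Server.Version}}'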
May 13 12:54:41.164552 containerd[1557]: time="2025-05-13T12:54:41.164495503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:41.165288 containerd[1557]: time="2025-05-13T12:54:41.165227215Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 13 12:54:41.166406 containerd[1557]: time="2025-05-13T12:54:41.166372032Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:41.168955 containerd[1557]: time="2025-05-13T12:54:41.168920473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:41.169753 containerd[1557]: time="2025-05-13T12:54:41.169722757Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 1.460683415s" May 13 12:54:41.169753 containerd[1557]: time="2025-05-13T12:54:41.169753836Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 13 12:54:41.170312 containerd[1557]: time="2025-05-13T12:54:41.170283018Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 13 12:54:42.356103 containerd[1557]: time="2025-05-13T12:54:42.356024711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:42.357053 containerd[1557]: time="2025-05-13T12:54:42.356989210Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 13 12:54:42.358146 containerd[1557]: time="2025-05-13T12:54:42.358112367Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:42.360880 containerd[1557]: time="2025-05-13T12:54:42.360836416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:42.361688 containerd[1557]: time="2025-05-13T12:54:42.361652978Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.191343029s" May 13 12:54:42.361688 containerd[1557]: time="2025-05-13T12:54:42.361683756Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 13 12:54:42.362159 
containerd[1557]: time="2025-05-13T12:54:42.362116838Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 13 12:54:43.365822 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 12:54:43.367359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:54:43.936918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:54:43.941494 (kubelet)[2063]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:54:43.980816 kubelet[2063]: E0513 12:54:43.980766 2063 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:54:43.987628 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:54:43.987824 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:54:43.988210 systemd[1]: kubelet.service: Consumed 203ms CPU time, 104.7M memory peak. May 13 12:54:44.083898 containerd[1557]: time="2025-05-13T12:54:44.083848284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:44.084797 containerd[1557]: time="2025-05-13T12:54:44.084765484Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 13 12:54:44.085919 containerd[1557]: time="2025-05-13T12:54:44.085884814Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:44.088486 containerd[1557]: time="2025-05-13T12:54:44.088455065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:44.089326 containerd[1557]: time="2025-05-13T12:54:44.089273310Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.727111698s" May 13 12:54:44.089326 containerd[1557]: time="2025-05-13T12:54:44.089312133Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 13 12:54:44.089724 containerd[1557]: time="2025-05-13T12:54:44.089686946Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 12:54:44.955845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount909929840.mount: Deactivated successfully. 
May 13 12:54:45.580039 containerd[1557]: time="2025-05-13T12:54:45.579963989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:45.580881 containerd[1557]: time="2025-05-13T12:54:45.580815356Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 13 12:54:45.582109 containerd[1557]: time="2025-05-13T12:54:45.582076352Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:45.583916 containerd[1557]: time="2025-05-13T12:54:45.583883151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:45.584372 containerd[1557]: time="2025-05-13T12:54:45.584331161Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.494587669s" May 13 12:54:45.584404 containerd[1557]: time="2025-05-13T12:54:45.584373130Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 13 12:54:45.584841 containerd[1557]: time="2025-05-13T12:54:45.584807044Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 13 12:54:46.120180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1154707972.mount: Deactivated successfully. 
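The kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy images pulled above go through containerd's CRI plugin, which stores them in the k8s.io containerd namespace (registered with NRI earlier in the log). A quick way to list what landed in that store:

    # containerd-level view of the image store used by Kubernetes;
    # crictl shows the same images through the CRI API.
    sudo ctr -n k8s.io images ls
    sudo crictl images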
May 13 12:54:46.800505 containerd[1557]: time="2025-05-13T12:54:46.800444622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:46.801165 containerd[1557]: time="2025-05-13T12:54:46.801120009Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 13 12:54:46.802309 containerd[1557]: time="2025-05-13T12:54:46.802283852Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:46.804748 containerd[1557]: time="2025-05-13T12:54:46.804705364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:46.805516 containerd[1557]: time="2025-05-13T12:54:46.805490106Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.220640904s" May 13 12:54:46.805555 containerd[1557]: time="2025-05-13T12:54:46.805519171Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 13 12:54:46.805987 containerd[1557]: time="2025-05-13T12:54:46.805935902Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 12:54:47.213084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3717570906.mount: Deactivated successfully. 
May 13 12:54:47.218975 containerd[1557]: time="2025-05-13T12:54:47.218932426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:54:47.219676 containerd[1557]: time="2025-05-13T12:54:47.219631798Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 13 12:54:47.220813 containerd[1557]: time="2025-05-13T12:54:47.220784209Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:54:47.222640 containerd[1557]: time="2025-05-13T12:54:47.222603211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:54:47.223209 containerd[1557]: time="2025-05-13T12:54:47.223178029Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 417.219294ms" May 13 12:54:47.223245 containerd[1557]: time="2025-05-13T12:54:47.223211953Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 12:54:47.223688 containerd[1557]: time="2025-05-13T12:54:47.223649604Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 13 12:54:47.698463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount307626760.mount: Deactivated successfully. 
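[editor's note] Each "Pulled image" line carries the image size in bytes and the wall-clock pull time, so effective throughput is a simple division. A quick check against the three pulls logged above, with the byte counts and durations copied verbatim from the log ("MB" here means 10^6 bytes):

```python
# (image, bytes, seconds) taken verbatim from the "Pulled image" lines above.
pulls = [
    ("registry.k8s.io/kube-proxy:v1.32.4",      30_916_875, 1.494587669),
    ("registry.k8s.io/coredns/coredns:v1.11.3", 18_562_039, 1.220640904),
    ("registry.k8s.io/pause:3.10",                 320_368, 0.417219294),
]
for name, size, secs in pulls:
    print(f"{name}: {size / secs / 1e6:.1f} MB/s")
# kube-proxy ~20.7 MB/s, coredns ~15.2 MB/s; the tiny pause image comes out
# near 0.8 MB/s because round-trip overhead dominates, not bandwidth.
```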
May 13 12:54:49.365850 containerd[1557]: time="2025-05-13T12:54:49.365787222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:49.366516 containerd[1557]: time="2025-05-13T12:54:49.366466265Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 13 12:54:49.367625 containerd[1557]: time="2025-05-13T12:54:49.367590975Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:49.370067 containerd[1557]: time="2025-05-13T12:54:49.370006275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:49.370934 containerd[1557]: time="2025-05-13T12:54:49.370906284Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.147233346s" May 13 12:54:49.370969 containerd[1557]: time="2025-05-13T12:54:49.370935428Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 13 12:54:51.567388 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:54:51.567578 systemd[1]: kubelet.service: Consumed 203ms CPU time, 104.7M memory peak. May 13 12:54:51.569796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:54:51.597100 systemd[1]: Reload requested from client PID 2222 ('systemctl') (unit session-7.scope)... May 13 12:54:51.597120 systemd[1]: Reloading... May 13 12:54:51.687179 zram_generator::config[2267]: No configuration found. May 13 12:54:51.903971 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 12:54:52.018011 systemd[1]: Reloading finished in 420 ms. May 13 12:54:52.086773 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 12:54:52.086870 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 12:54:52.087163 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:54:52.087214 systemd[1]: kubelet.service: Consumed 146ms CPU time, 91.8M memory peak. May 13 12:54:52.088704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:54:52.256343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:54:52.262039 (kubelet)[2313]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 12:54:52.309862 kubelet[2313]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:54:52.309862 kubelet[2313]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. May 13 12:54:52.309862 kubelet[2313]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:54:52.310300 kubelet[2313]: I0513 12:54:52.309913 2313 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 12:54:52.491352 kubelet[2313]: I0513 12:54:52.491300 2313 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 12:54:52.491352 kubelet[2313]: I0513 12:54:52.491334 2313 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 12:54:52.491637 kubelet[2313]: I0513 12:54:52.491614 2313 server.go:954] "Client rotation is on, will bootstrap in background" May 13 12:54:52.510907 kubelet[2313]: E0513 12:54:52.510823 2313 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:52.511050 kubelet[2313]: I0513 12:54:52.511021 2313 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 12:54:52.518868 kubelet[2313]: I0513 12:54:52.518834 2313 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 12:54:52.524912 kubelet[2313]: I0513 12:54:52.524876 2313 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 12:54:52.525152 kubelet[2313]: I0513 12:54:52.525102 2313 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 12:54:52.525323 kubelet[2313]: I0513 12:54:52.525128 2313 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 12:54:52.525323 kubelet[2313]: I0513 12:54:52.525322 2313 topology_manager.go:138] "Creating topology manager with none policy" May 13 12:54:52.525449 kubelet[2313]: I0513 12:54:52.525331 2313 container_manager_linux.go:304] "Creating device plugin manager" May 13 12:54:52.525897 kubelet[2313]: I0513 12:54:52.525874 2313 state_mem.go:36] "Initialized new in-memory state store" May 13 12:54:52.528226 kubelet[2313]: I0513 12:54:52.528205 2313 kubelet.go:446] "Attempting to sync node with API server" May 13 12:54:52.528226 kubelet[2313]: I0513 12:54:52.528220 2313 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 12:54:52.528280 kubelet[2313]: I0513 12:54:52.528241 2313 kubelet.go:352] "Adding apiserver pod source" May 13 12:54:52.528280 kubelet[2313]: I0513 12:54:52.528252 2313 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 12:54:52.531173 kubelet[2313]: I0513 12:54:52.530755 2313 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 12:54:52.531235 kubelet[2313]: I0513 12:54:52.531215 2313 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 12:54:52.531303 kubelet[2313]: W0513 12:54:52.531280 2313 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 13 12:54:52.531846 kubelet[2313]: W0513 12:54:52.531719 2313 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 13 12:54:52.531846 kubelet[2313]: E0513 12:54:52.531781 2313 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:52.531846 kubelet[2313]: W0513 12:54:52.531719 2313 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 13 12:54:52.531846 kubelet[2313]: E0513 12:54:52.531806 2313 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:52.532889 kubelet[2313]: I0513 12:54:52.532863 2313 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 12:54:52.532937 kubelet[2313]: I0513 12:54:52.532899 2313 server.go:1287] "Started kubelet" May 13 12:54:52.533054 kubelet[2313]: I0513 12:54:52.533029 2313 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 12:54:52.533939 kubelet[2313]: I0513 12:54:52.533924 2313 server.go:490] "Adding debug handlers to kubelet server" May 13 12:54:52.535854 kubelet[2313]: I0513 12:54:52.535812 2313 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 12:54:52.536108 kubelet[2313]: I0513 12:54:52.536092 2313 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 12:54:52.536508 kubelet[2313]: I0513 12:54:52.536497 2313 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 12:54:52.536644 kubelet[2313]: I0513 12:54:52.536606 2313 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 12:54:52.536686 kubelet[2313]: I0513 12:54:52.536528 2313 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 12:54:52.536814 kubelet[2313]: I0513 12:54:52.536776 2313 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 12:54:52.536892 kubelet[2313]: I0513 12:54:52.536854 2313 reconciler.go:26] "Reconciler: start to sync state" May 13 12:54:52.537834 kubelet[2313]: E0513 12:54:52.536931 2313 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.90:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.90:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f17628ffb29aa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 12:54:52.532877738 +0000 UTC m=+0.266302979,LastTimestamp:2025-05-13 12:54:52.532877738 +0000 UTC m=+0.266302979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 12:54:52.538177 kubelet[2313]: E0513 12:54:52.538035 2313 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 12:54:52.538330 kubelet[2313]: W0513 12:54:52.538287 2313 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 13 12:54:52.538360 kubelet[2313]: E0513 12:54:52.538338 2313 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:52.538360 kubelet[2313]: E0513 12:54:52.538349 2313 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:54:52.538419 kubelet[2313]: E0513 12:54:52.538405 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="200ms" May 13 12:54:52.538753 kubelet[2313]: I0513 12:54:52.538717 2313 factory.go:221] Registration of the systemd container factory successfully May 13 12:54:52.538821 kubelet[2313]: I0513 12:54:52.538802 2313 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 12:54:52.540299 kubelet[2313]: I0513 12:54:52.540276 2313 factory.go:221] Registration of the containerd container factory successfully May 13 12:54:52.553930 kubelet[2313]: I0513 12:54:52.553887 2313 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 12:54:52.553930 kubelet[2313]: I0513 12:54:52.553903 2313 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 12:54:52.553930 kubelet[2313]: I0513 12:54:52.553919 2313 state_mem.go:36] "Initialized new in-memory state store" May 13 12:54:52.555516 kubelet[2313]: I0513 12:54:52.555487 2313 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 12:54:52.556872 kubelet[2313]: I0513 12:54:52.556854 2313 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 12:54:52.556926 kubelet[2313]: I0513 12:54:52.556876 2313 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 12:54:52.556926 kubelet[2313]: I0513 12:54:52.556893 2313 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
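[editor's note] Every "connection refused" above is the same story: the kubelet is up before the API server at 10.0.0.90:6443, so the CSR post, the informers' list calls, the event post and the lease request all fail until the static kube-apiserver pod it is about to create starts answering. A throwaway probe for watching that endpoint come up; the address is taken from the log, and this only exercises the TCP connect that is being refused, not TLS or auth.

```python
import socket, time

def wait_for(host: str, port: int, timeout: float = 120.0) -> bool:
    """Poll until a TCP connect to host:port succeeds, roughly what the
    kubelet's clients are doing via their retry loops."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:  # connection refused / host not yet reachable
            time.sleep(1)
    return False

print(wait_for("10.0.0.90", 6443))
```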
May 13 12:54:52.556926 kubelet[2313]: I0513 12:54:52.556900 2313 kubelet.go:2388] "Starting kubelet main sync loop" May 13 12:54:52.556984 kubelet[2313]: E0513 12:54:52.556941 2313 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 12:54:52.557728 kubelet[2313]: W0513 12:54:52.557611 2313 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 13 12:54:52.557728 kubelet[2313]: E0513 12:54:52.557643 2313 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:52.639308 kubelet[2313]: E0513 12:54:52.639264 2313 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:54:52.657451 kubelet[2313]: E0513 12:54:52.657417 2313 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 12:54:52.739264 kubelet[2313]: E0513 12:54:52.739215 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="400ms" May 13 12:54:52.740377 kubelet[2313]: E0513 12:54:52.740337 2313 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:54:52.841089 kubelet[2313]: E0513 12:54:52.840968 2313 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:54:52.858188 kubelet[2313]: E0513 12:54:52.858126 2313 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 12:54:52.924344 kubelet[2313]: I0513 12:54:52.924281 2313 policy_none.go:49] "None policy: Start" May 13 12:54:52.924344 kubelet[2313]: I0513 12:54:52.924311 2313 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 12:54:52.924344 kubelet[2313]: I0513 12:54:52.924324 2313 state_mem.go:35] "Initializing new in-memory state store" May 13 12:54:52.932382 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 12:54:52.941521 kubelet[2313]: E0513 12:54:52.941487 2313 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:54:52.946633 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 12:54:52.949745 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
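[editor's note] The three slices systemd just created mirror the kubelet's pod QoS classes under the systemd cgroup driver: Guaranteed pods land under kubepods.slice directly, the rest under kubepods-burstable.slice and kubepods-besteffort.slice (the burstable pod slices created later in this log follow that naming). A sketch of the standard QoS classification with the slice names taken from the log; the resource dicts are illustrative only.

```python
def qos_class(containers: list[dict]) -> str:
    """Standard Kubernetes QoS rules: BestEffort when nothing is set,
    Guaranteed when every container sets limits equal to requests for
    both cpu and memory, Burstable otherwise. (Simplified: ignores the
    defaulting of requests from limits.)"""
    requests = [c.get("requests", {}) for c in containers]
    limits = [c.get("limits", {}) for c in containers]
    if not any(requests) and not any(limits):
        return "BestEffort"
    guaranteed = all(
        l.get(res) and l.get(res) == r.get(res)
        for r, l in zip(requests, limits)
        for res in ("cpu", "memory")
    )
    return "Guaranteed" if guaranteed else "Burstable"

# Slice names as created by systemd in the log above.
SLICES = {
    "Guaranteed": "kubepods.slice",
    "Burstable": "kubepods-burstable.slice",
    "BestEffort": "kubepods-besteffort.slice",
}
pod = [{"requests": {"cpu": "100m"}, "limits": {"cpu": "200m"}}]
print(SLICES[qos_class(pod)])  # kubepods-burstable.slice
```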
May 13 12:54:52.957120 kubelet[2313]: I0513 12:54:52.957088 2313 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 12:54:52.957381 kubelet[2313]: I0513 12:54:52.957364 2313 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 12:54:52.957420 kubelet[2313]: I0513 12:54:52.957381 2313 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 12:54:52.957677 kubelet[2313]: I0513 12:54:52.957654 2313 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 12:54:52.958243 kubelet[2313]: E0513 12:54:52.958220 2313 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 12:54:52.958280 kubelet[2313]: E0513 12:54:52.958258 2313 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 12:54:53.059327 kubelet[2313]: I0513 12:54:53.059277 2313 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 12:54:53.059772 kubelet[2313]: E0513 12:54:53.059733 2313 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" May 13 12:54:53.140566 kubelet[2313]: E0513 12:54:53.140525 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="800ms" May 13 12:54:53.260795 kubelet[2313]: I0513 12:54:53.260658 2313 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 12:54:53.261250 kubelet[2313]: E0513 12:54:53.261218 2313 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" May 13 12:54:53.266940 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 13 12:54:53.291380 kubelet[2313]: E0513 12:54:53.291338 2313 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:54:53.293649 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. May 13 12:54:53.307770 kubelet[2313]: E0513 12:54:53.307724 2313 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:54:53.310863 systemd[1]: Created slice kubepods-burstable-pod0b98a6c50ddfc6b7e6f87bbdbd36c82e.slice - libcontainer container kubepods-burstable-pod0b98a6c50ddfc6b7e6f87bbdbd36c82e.slice. 
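[editor's note] Note the lease-controller retry interval doubling across the failures above: 200ms at 12:54:52.538, 400ms at 12:54:52.739, 800ms at 12:54:53.140. A sketch of that doubling pattern; only the 200/400/800ms progression comes from the log, the 7s ceiling is an assumption.

```python
def backoff(base: float = 0.2, cap: float = 7.0):
    """Doubling retry interval, matching the controller.go messages above
    (200ms -> 400ms -> 800ms). The cap is a hypothetical ceiling."""
    interval = base
    while True:
        yield min(interval, cap)
        interval *= 2

gen = backoff()
print([round(next(gen), 1) for _ in range(6)])  # [0.2, 0.4, 0.8, 1.6, 3.2, 6.4]
```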
May 13 12:54:53.312619 kubelet[2313]: E0513 12:54:53.312584 2313 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:54:53.341101 kubelet[2313]: I0513 12:54:53.341027 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b98a6c50ddfc6b7e6f87bbdbd36c82e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0b98a6c50ddfc6b7e6f87bbdbd36c82e\") " pod="kube-system/kube-apiserver-localhost" May 13 12:54:53.341101 kubelet[2313]: I0513 12:54:53.341108 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:53.341284 kubelet[2313]: I0513 12:54:53.341135 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:53.341284 kubelet[2313]: I0513 12:54:53.341182 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:53.341284 kubelet[2313]: I0513 12:54:53.341205 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 12:54:53.341284 kubelet[2313]: I0513 12:54:53.341224 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b98a6c50ddfc6b7e6f87bbdbd36c82e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b98a6c50ddfc6b7e6f87bbdbd36c82e\") " pod="kube-system/kube-apiserver-localhost" May 13 12:54:53.341284 kubelet[2313]: I0513 12:54:53.341241 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b98a6c50ddfc6b7e6f87bbdbd36c82e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b98a6c50ddfc6b7e6f87bbdbd36c82e\") " pod="kube-system/kube-apiserver-localhost" May 13 12:54:53.341399 kubelet[2313]: I0513 12:54:53.341259 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:53.341399 kubelet[2313]: I0513 12:54:53.341279 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:53.362747 kubelet[2313]: W0513 12:54:53.362687 2313 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 13 12:54:53.362747 kubelet[2313]: E0513 12:54:53.362752 2313 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:53.410724 kubelet[2313]: W0513 12:54:53.410556 2313 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 13 12:54:53.410724 kubelet[2313]: E0513 12:54:53.410628 2313 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:53.592409 kubelet[2313]: E0513 12:54:53.592370 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:54:53.593063 containerd[1557]: time="2025-05-13T12:54:53.593010594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 13 12:54:53.608257 kubelet[2313]: E0513 12:54:53.608219 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:54:53.608705 containerd[1557]: time="2025-05-13T12:54:53.608668695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 13 12:54:53.613545 kubelet[2313]: E0513 12:54:53.613232 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:54:53.614336 containerd[1557]: time="2025-05-13T12:54:53.614272696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0b98a6c50ddfc6b7e6f87bbdbd36c82e,Namespace:kube-system,Attempt:0,}" May 13 12:54:53.617411 containerd[1557]: time="2025-05-13T12:54:53.617371909Z" level=info msg="connecting to shim 6522f3fa99653699333f462a5e03f460442c47a3e9f0cd7ac421ad0526d2bba6" address="unix:///run/containerd/s/14c9ef957a1b977b6270c94e18fbbea593f3bf1421b7b4912babd61a021676d9" namespace=k8s.io protocol=ttrpc version=3 May 13 12:54:53.643391 systemd[1]: Started 
cri-containerd-6522f3fa99653699333f462a5e03f460442c47a3e9f0cd7ac421ad0526d2bba6.scope - libcontainer container 6522f3fa99653699333f462a5e03f460442c47a3e9f0cd7ac421ad0526d2bba6. May 13 12:54:53.647268 containerd[1557]: time="2025-05-13T12:54:53.647187565Z" level=info msg="connecting to shim a51006d851744fd5e78453e6959564b643346613352302adceff927e733d7737" address="unix:///run/containerd/s/408e24afb93d33690cdda62c3d724f2ffd4c9885db2f19937771ae3606e5889f" namespace=k8s.io protocol=ttrpc version=3 May 13 12:54:53.649843 containerd[1557]: time="2025-05-13T12:54:53.649792882Z" level=info msg="connecting to shim 31213d61861eb953edf115ff4c455ab8ceace9b1bde17d850c0d175c56ea237d" address="unix:///run/containerd/s/b4d7c87beecc554ae1049a279b50b63e0af716cb359df9a1a647aaeaccc47161" namespace=k8s.io protocol=ttrpc version=3 May 13 12:54:53.663892 kubelet[2313]: I0513 12:54:53.663762 2313 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 12:54:53.664617 kubelet[2313]: E0513 12:54:53.664591 2313 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" May 13 12:54:53.680316 systemd[1]: Started cri-containerd-31213d61861eb953edf115ff4c455ab8ceace9b1bde17d850c0d175c56ea237d.scope - libcontainer container 31213d61861eb953edf115ff4c455ab8ceace9b1bde17d850c0d175c56ea237d. May 13 12:54:53.682389 systemd[1]: Started cri-containerd-a51006d851744fd5e78453e6959564b643346613352302adceff927e733d7737.scope - libcontainer container a51006d851744fd5e78453e6959564b643346613352302adceff927e733d7737. May 13 12:54:53.711546 containerd[1557]: time="2025-05-13T12:54:53.711426298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"6522f3fa99653699333f462a5e03f460442c47a3e9f0cd7ac421ad0526d2bba6\"" May 13 12:54:53.713019 kubelet[2313]: E0513 12:54:53.712984 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:54:53.715155 containerd[1557]: time="2025-05-13T12:54:53.715116550Z" level=info msg="CreateContainer within sandbox \"6522f3fa99653699333f462a5e03f460442c47a3e9f0cd7ac421ad0526d2bba6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 12:54:53.725119 containerd[1557]: time="2025-05-13T12:54:53.725072315Z" level=info msg="Container c0d95a5ebc6e1e8d70c2512e44601efb2283c5e736d93a489f5c4f5bd1c75397: CDI devices from CRI Config.CDIDevices: []" May 13 12:54:53.736161 containerd[1557]: time="2025-05-13T12:54:53.736046960Z" level=info msg="CreateContainer within sandbox \"6522f3fa99653699333f462a5e03f460442c47a3e9f0cd7ac421ad0526d2bba6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c0d95a5ebc6e1e8d70c2512e44601efb2283c5e736d93a489f5c4f5bd1c75397\"" May 13 12:54:53.736400 containerd[1557]: time="2025-05-13T12:54:53.736348626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"31213d61861eb953edf115ff4c455ab8ceace9b1bde17d850c0d175c56ea237d\"" May 13 12:54:53.737283 kubelet[2313]: E0513 12:54:53.737125 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:54:53.737526 containerd[1557]: time="2025-05-13T12:54:53.737375071Z" level=info msg="StartContainer for \"c0d95a5ebc6e1e8d70c2512e44601efb2283c5e736d93a489f5c4f5bd1c75397\"" May 13 12:54:53.739626 containerd[1557]: time="2025-05-13T12:54:53.739593713Z" level=info msg="connecting to shim c0d95a5ebc6e1e8d70c2512e44601efb2283c5e736d93a489f5c4f5bd1c75397" address="unix:///run/containerd/s/14c9ef957a1b977b6270c94e18fbbea593f3bf1421b7b4912babd61a021676d9" protocol=ttrpc version=3 May 13 12:54:53.739980 containerd[1557]: time="2025-05-13T12:54:53.739948999Z" level=info msg="CreateContainer within sandbox \"31213d61861eb953edf115ff4c455ab8ceace9b1bde17d850c0d175c56ea237d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 12:54:53.741532 containerd[1557]: time="2025-05-13T12:54:53.741499317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0b98a6c50ddfc6b7e6f87bbdbd36c82e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a51006d851744fd5e78453e6959564b643346613352302adceff927e733d7737\"" May 13 12:54:53.742364 kubelet[2313]: E0513 12:54:53.742333 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:54:53.743817 containerd[1557]: time="2025-05-13T12:54:53.743791396Z" level=info msg="CreateContainer within sandbox \"a51006d851744fd5e78453e6959564b643346613352302adceff927e733d7737\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 12:54:53.750528 containerd[1557]: time="2025-05-13T12:54:53.750477477Z" level=info msg="Container 158b6583c93d7704990f5845789387a2323bf6c148979fe8f151b34c5c2329ad: CDI devices from CRI Config.CDIDevices: []" May 13 12:54:53.760373 containerd[1557]: time="2025-05-13T12:54:53.760254616Z" level=info msg="Container 0bbd861d25ef64061499838000344d4435227e8e35147f2884094e3fe3c81036: CDI devices from CRI Config.CDIDevices: []" May 13 12:54:53.762710 containerd[1557]: time="2025-05-13T12:54:53.762670538Z" level=info msg="CreateContainer within sandbox \"31213d61861eb953edf115ff4c455ab8ceace9b1bde17d850c0d175c56ea237d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"158b6583c93d7704990f5845789387a2323bf6c148979fe8f151b34c5c2329ad\"" May 13 12:54:53.763064 containerd[1557]: time="2025-05-13T12:54:53.763035603Z" level=info msg="StartContainer for \"158b6583c93d7704990f5845789387a2323bf6c148979fe8f151b34c5c2329ad\"" May 13 12:54:53.764338 containerd[1557]: time="2025-05-13T12:54:53.764305254Z" level=info msg="connecting to shim 158b6583c93d7704990f5845789387a2323bf6c148979fe8f151b34c5c2329ad" address="unix:///run/containerd/s/b4d7c87beecc554ae1049a279b50b63e0af716cb359df9a1a647aaeaccc47161" protocol=ttrpc version=3 May 13 12:54:53.765301 systemd[1]: Started cri-containerd-c0d95a5ebc6e1e8d70c2512e44601efb2283c5e736d93a489f5c4f5bd1c75397.scope - libcontainer container c0d95a5ebc6e1e8d70c2512e44601efb2283c5e736d93a489f5c4f5bd1c75397. 
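[editor's note] The recurring dns.go warning is the kubelet noticing more nameservers in the node's resolv.conf than it will propagate to pods; it keeps the first entries up to the classic resolver limit of three, which matches the applied line "1.1.1.1 1.0.0.1 8.8.8.8" in the log. A sketch of that truncation; the four-server input is hypothetical, chosen to reproduce the logged result.

```python
import warnings

MAX_NAMESERVERS = 3  # classic glibc resolver limit the warning refers to

def applied_nameservers(servers: list[str]) -> list[str]:
    if len(servers) > MAX_NAMESERVERS:
        warnings.warn(
            "Nameserver limits were exceeded, some nameservers have been "
            "omitted, the applied nameserver line is: "
            + " ".join(servers[:MAX_NAMESERVERS])
        )
    return servers[:MAX_NAMESERVERS]

# Hypothetical host resolv.conf with one server too many:
print(applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]))
# -> ['1.1.1.1', '1.0.0.1', '8.8.8.8'], mirroring the applied line in the log
```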
May 13 12:54:53.767787 containerd[1557]: time="2025-05-13T12:54:53.767724999Z" level=info msg="CreateContainer within sandbox \"a51006d851744fd5e78453e6959564b643346613352302adceff927e733d7737\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0bbd861d25ef64061499838000344d4435227e8e35147f2884094e3fe3c81036\"" May 13 12:54:53.768228 containerd[1557]: time="2025-05-13T12:54:53.768185823Z" level=info msg="StartContainer for \"0bbd861d25ef64061499838000344d4435227e8e35147f2884094e3fe3c81036\"" May 13 12:54:53.769599 containerd[1557]: time="2025-05-13T12:54:53.769570871Z" level=info msg="connecting to shim 0bbd861d25ef64061499838000344d4435227e8e35147f2884094e3fe3c81036" address="unix:///run/containerd/s/408e24afb93d33690cdda62c3d724f2ffd4c9885db2f19937771ae3606e5889f" protocol=ttrpc version=3 May 13 12:54:53.790501 systemd[1]: Started cri-containerd-158b6583c93d7704990f5845789387a2323bf6c148979fe8f151b34c5c2329ad.scope - libcontainer container 158b6583c93d7704990f5845789387a2323bf6c148979fe8f151b34c5c2329ad. May 13 12:54:53.794461 systemd[1]: Started cri-containerd-0bbd861d25ef64061499838000344d4435227e8e35147f2884094e3fe3c81036.scope - libcontainer container 0bbd861d25ef64061499838000344d4435227e8e35147f2884094e3fe3c81036. May 13 12:54:53.829878 kubelet[2313]: W0513 12:54:53.829810 2313 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 13 12:54:53.830101 kubelet[2313]: E0513 12:54:53.830044 2313 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:53.833661 kubelet[2313]: W0513 12:54:53.833609 2313 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 13 12:54:53.833661 kubelet[2313]: E0513 12:54:53.833639 2313 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:53.834709 containerd[1557]: time="2025-05-13T12:54:53.834671491Z" level=info msg="StartContainer for \"c0d95a5ebc6e1e8d70c2512e44601efb2283c5e736d93a489f5c4f5bd1c75397\" returns successfully" May 13 12:54:53.857171 containerd[1557]: time="2025-05-13T12:54:53.855991281Z" level=info msg="StartContainer for \"158b6583c93d7704990f5845789387a2323bf6c148979fe8f151b34c5c2329ad\" returns successfully" May 13 12:54:53.862885 containerd[1557]: time="2025-05-13T12:54:53.862846970Z" level=info msg="StartContainer for \"0bbd861d25ef64061499838000344d4435227e8e35147f2884094e3fe3c81036\" returns successfully" May 13 12:54:54.466936 kubelet[2313]: I0513 12:54:54.466393 2313 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 12:54:54.563478 kubelet[2313]: E0513 12:54:54.563420 2313 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"localhost\" not found" node="localhost" May 13 12:54:54.563715 kubelet[2313]: E0513 12:54:54.563703 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:54:54.566433 kubelet[2313]: E0513 12:54:54.566298 2313 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:54:54.566433 kubelet[2313]: E0513 12:54:54.566373 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:54:54.568353 kubelet[2313]: E0513 12:54:54.568339 2313 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:54:54.568501 kubelet[2313]: E0513 12:54:54.568491 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:54:54.940337 kubelet[2313]: E0513 12:54:54.940302 2313 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 12:54:55.039115 kubelet[2313]: I0513 12:54:55.039066 2313 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 12:54:55.039115 kubelet[2313]: E0513 12:54:55.039104 2313 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 13 12:54:55.042357 kubelet[2313]: E0513 12:54:55.042334 2313 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:54:55.138971 kubelet[2313]: I0513 12:54:55.138945 2313 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 12:54:55.142908 kubelet[2313]: E0513 12:54:55.142873 2313 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 13 12:54:55.142908 kubelet[2313]: I0513 12:54:55.142905 2313 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 12:54:55.144107 kubelet[2313]: E0513 12:54:55.144072 2313 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 12:54:55.144107 kubelet[2313]: I0513 12:54:55.144092 2313 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 12:54:55.145331 kubelet[2313]: E0513 12:54:55.145308 2313 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 12:54:55.533469 kubelet[2313]: I0513 12:54:55.533419 2313 apiserver.go:52] "Watching apiserver" May 13 12:54:55.537668 kubelet[2313]: I0513 12:54:55.537627 2313 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 12:54:55.569445 kubelet[2313]: I0513 
12:54:55.569413 2313 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 12:54:55.569594 kubelet[2313]: I0513 12:54:55.569562 2313 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 12:54:55.571528 kubelet[2313]: E0513 12:54:55.571477 2313 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 12:54:55.571679 kubelet[2313]: E0513 12:54:55.571546 2313 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 12:54:55.571725 kubelet[2313]: E0513 12:54:55.571699 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:54:55.571773 kubelet[2313]: E0513 12:54:55.571754 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:54:57.203576 systemd[1]: Reload requested from client PID 2594 ('systemctl') (unit session-7.scope)... May 13 12:54:57.203595 systemd[1]: Reloading... May 13 12:54:57.276253 zram_generator::config[2640]: No configuration found. May 13 12:54:57.874635 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 12:54:58.007829 systemd[1]: Reloading finished in 803 ms. May 13 12:54:58.041651 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:54:58.063446 systemd[1]: kubelet.service: Deactivated successfully. May 13 12:54:58.063755 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:54:58.063804 systemd[1]: kubelet.service: Consumed 712ms CPU time, 125.4M memory peak. May 13 12:54:58.065700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:54:58.263594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:54:58.279594 (kubelet)[2682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 12:54:58.319871 kubelet[2682]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:54:58.319871 kubelet[2682]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 12:54:58.319871 kubelet[2682]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
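[editor's note] Both kubelet generations (PID 2313 and PID 2682) start with the same trio of deprecation warnings: --container-runtime-endpoint and --volume-plugin-dir belong in the file passed via --config, and --pod-infra-container-image is going away in 1.35 with no config-file equivalent. A sketch of the corresponding KubeletConfiguration stanza, emitted as JSON (which a kubelet accepts as a config file, JSON being a YAML subset); the runtime endpoint value is an assumption, and the plugin dir reuses the Flexvolume path the kubelet recreated earlier in this log.

```python
import json

# containerRuntimeEndpoint and volumePluginDir are KubeletConfiguration
# fields; the endpoint value below is assumed, since the log prints the
# deprecation warnings but not the flags' actual values.
config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
}
print(json.dumps(config, indent=2))
```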
May 13 12:54:58.320387 kubelet[2682]: I0513 12:54:58.319944 2682 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 12:54:58.328411 kubelet[2682]: I0513 12:54:58.328366 2682 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 12:54:58.328411 kubelet[2682]: I0513 12:54:58.328400 2682 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 12:54:58.328759 kubelet[2682]: I0513 12:54:58.328740 2682 server.go:954] "Client rotation is on, will bootstrap in background" May 13 12:54:58.330300 kubelet[2682]: I0513 12:54:58.330280 2682 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 12:54:58.333119 kubelet[2682]: I0513 12:54:58.332816 2682 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 12:54:58.336827 kubelet[2682]: I0513 12:54:58.336773 2682 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 12:54:58.343544 kubelet[2682]: I0513 12:54:58.343499 2682 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 12:54:58.343806 kubelet[2682]: I0513 12:54:58.343756 2682 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 12:54:58.344007 kubelet[2682]: I0513 12:54:58.343797 2682 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 12:54:58.344084 kubelet[2682]: I0513 12:54:58.344015 2682 topology_manager.go:138] "Creating topology manager with none policy" May 13 12:54:58.344084 kubelet[2682]: I0513 12:54:58.344026 2682 container_manager_linux.go:304] "Creating device plugin manager" May 13 12:54:58.344084 kubelet[2682]: I0513 12:54:58.344073 2682 state_mem.go:36] "Initialized new in-memory state store" May 13 12:54:58.344281 kubelet[2682]: I0513 
12:54:58.344269 2682 kubelet.go:446] "Attempting to sync node with API server" May 13 12:54:58.344331 kubelet[2682]: I0513 12:54:58.344284 2682 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 12:54:58.344331 kubelet[2682]: I0513 12:54:58.344322 2682 kubelet.go:352] "Adding apiserver pod source" May 13 12:54:58.344374 kubelet[2682]: I0513 12:54:58.344340 2682 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 12:54:58.345451 kubelet[2682]: I0513 12:54:58.345423 2682 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 12:54:58.346476 kubelet[2682]: I0513 12:54:58.346438 2682 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 12:54:58.348124 kubelet[2682]: I0513 12:54:58.348098 2682 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 12:54:58.348337 kubelet[2682]: I0513 12:54:58.348274 2682 server.go:1287] "Started kubelet" May 13 12:54:58.353532 kubelet[2682]: I0513 12:54:58.353475 2682 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 12:54:58.353934 kubelet[2682]: I0513 12:54:58.353769 2682 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 12:54:58.353934 kubelet[2682]: I0513 12:54:58.353814 2682 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 12:54:58.354713 kubelet[2682]: E0513 12:54:58.354687 2682 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 12:54:58.355183 kubelet[2682]: I0513 12:54:58.354727 2682 server.go:490] "Adding debug handlers to kubelet server" May 13 12:54:58.356102 kubelet[2682]: I0513 12:54:58.356088 2682 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 12:54:58.357288 kubelet[2682]: I0513 12:54:58.357270 2682 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 12:54:58.359071 kubelet[2682]: E0513 12:54:58.359045 2682 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:54:58.359118 kubelet[2682]: I0513 12:54:58.359075 2682 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 12:54:58.359288 kubelet[2682]: I0513 12:54:58.359261 2682 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 12:54:58.359441 kubelet[2682]: I0513 12:54:58.359422 2682 reconciler.go:26] "Reconciler: start to sync state" May 13 12:54:58.360439 kubelet[2682]: I0513 12:54:58.360411 2682 factory.go:221] Registration of the systemd container factory successfully May 13 12:54:58.360866 kubelet[2682]: I0513 12:54:58.360525 2682 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 12:54:58.362647 kubelet[2682]: I0513 12:54:58.361943 2682 factory.go:221] Registration of the containerd container factory successfully May 13 12:54:58.375636 kubelet[2682]: I0513 12:54:58.375599 2682 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 12:54:58.378534 kubelet[2682]: I0513 12:54:58.378495 2682 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 12:54:58.378627 kubelet[2682]: I0513 12:54:58.378542 2682 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 12:54:58.378781 kubelet[2682]: I0513 12:54:58.378715 2682 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 12:54:58.378781 kubelet[2682]: I0513 12:54:58.378732 2682 kubelet.go:2388] "Starting kubelet main sync loop" May 13 12:54:58.378838 kubelet[2682]: E0513 12:54:58.378782 2682 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 12:54:58.399709 kubelet[2682]: I0513 12:54:58.399683 2682 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 12:54:58.399847 kubelet[2682]: I0513 12:54:58.399700 2682 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 12:54:58.399847 kubelet[2682]: I0513 12:54:58.399742 2682 state_mem.go:36] "Initialized new in-memory state store" May 13 12:54:58.399902 kubelet[2682]: I0513 12:54:58.399882 2682 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 12:54:58.399922 kubelet[2682]: I0513 12:54:58.399891 2682 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 12:54:58.399922 kubelet[2682]: I0513 12:54:58.399910 2682 policy_none.go:49] "None policy: Start" May 13 12:54:58.399922 kubelet[2682]: I0513 12:54:58.399920 2682 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 12:54:58.399980 kubelet[2682]: I0513 12:54:58.399929 2682 state_mem.go:35] "Initializing new in-memory state store" May 13 12:54:58.400029 kubelet[2682]: I0513 12:54:58.400017 2682 state_mem.go:75] "Updated machine memory state" May 13 12:54:58.404279 kubelet[2682]: I0513 12:54:58.404218 2682 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 12:54:58.404761 kubelet[2682]: I0513 12:54:58.404382 2682 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 12:54:58.404761 kubelet[2682]: I0513 12:54:58.404392 2682 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 12:54:58.404761 kubelet[2682]: I0513 12:54:58.404591 2682 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 12:54:58.405583 kubelet[2682]: E0513 12:54:58.405562 2682 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 12:54:58.479568 kubelet[2682]: I0513 12:54:58.479522 2682 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 12:54:58.479711 kubelet[2682]: I0513 12:54:58.479522 2682 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 12:54:58.479863 kubelet[2682]: I0513 12:54:58.479523 2682 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 12:54:58.509723 kubelet[2682]: I0513 12:54:58.509682 2682 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 12:54:58.516824 kubelet[2682]: I0513 12:54:58.516477 2682 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 13 12:54:58.516824 kubelet[2682]: I0513 12:54:58.516555 2682 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 12:54:58.559973 kubelet[2682]: I0513 12:54:58.559937 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:58.559973 kubelet[2682]: I0513 12:54:58.559969 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:58.559973 kubelet[2682]: I0513 12:54:58.559985 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b98a6c50ddfc6b7e6f87bbdbd36c82e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b98a6c50ddfc6b7e6f87bbdbd36c82e\") " pod="kube-system/kube-apiserver-localhost" May 13 12:54:58.560577 kubelet[2682]: I0513 12:54:58.560001 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b98a6c50ddfc6b7e6f87bbdbd36c82e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0b98a6c50ddfc6b7e6f87bbdbd36c82e\") " pod="kube-system/kube-apiserver-localhost" May 13 12:54:58.560577 kubelet[2682]: I0513 12:54:58.560024 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:58.560577 kubelet[2682]: I0513 12:54:58.560037 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 12:54:58.560577 kubelet[2682]: I0513 12:54:58.560061 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/0b98a6c50ddfc6b7e6f87bbdbd36c82e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b98a6c50ddfc6b7e6f87bbdbd36c82e\") " pod="kube-system/kube-apiserver-localhost" May 13 12:54:58.560577 kubelet[2682]: I0513 12:54:58.560075 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:58.560689 kubelet[2682]: I0513 12:54:58.560088 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:58.784907 kubelet[2682]: E0513 12:54:58.784763 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:54:58.785828 kubelet[2682]: E0513 12:54:58.785763 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:54:58.785828 kubelet[2682]: E0513 12:54:58.785768 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:54:59.345130 kubelet[2682]: I0513 12:54:59.345099 2682 apiserver.go:52] "Watching apiserver" May 13 12:54:59.360369 kubelet[2682]: I0513 12:54:59.360330 2682 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 12:54:59.393397 kubelet[2682]: I0513 12:54:59.392918 2682 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 12:54:59.393767 kubelet[2682]: I0513 12:54:59.393685 2682 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 12:54:59.393931 kubelet[2682]: I0513 12:54:59.393909 2682 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 12:54:59.400795 kubelet[2682]: E0513 12:54:59.400748 2682 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 13 12:54:59.400962 kubelet[2682]: E0513 12:54:59.400827 2682 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 13 12:54:59.400962 kubelet[2682]: E0513 12:54:59.400951 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:54:59.401212 kubelet[2682]: E0513 12:54:59.401189 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:54:59.403156 kubelet[2682]: E0513 12:54:59.403104 2682 kubelet.go:3202] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 12:54:59.404222 kubelet[2682]: E0513 12:54:59.403241 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:54:59.441170 kubelet[2682]: I0513 12:54:59.439667 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.439646001 podStartE2EDuration="1.439646001s" podCreationTimestamp="2025-05-13 12:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:54:59.439406542 +0000 UTC m=+1.156060279" watchObservedRunningTime="2025-05-13 12:54:59.439646001 +0000 UTC m=+1.156299738" May 13 12:54:59.441170 kubelet[2682]: I0513 12:54:59.439789 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.439784341 podStartE2EDuration="1.439784341s" podCreationTimestamp="2025-05-13 12:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:54:59.423919112 +0000 UTC m=+1.140572849" watchObservedRunningTime="2025-05-13 12:54:59.439784341 +0000 UTC m=+1.156438068" May 13 12:54:59.476573 kubelet[2682]: I0513 12:54:59.476358 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.4763357209999999 podStartE2EDuration="1.476335721s" podCreationTimestamp="2025-05-13 12:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:54:59.458444302 +0000 UTC m=+1.175098039" watchObservedRunningTime="2025-05-13 12:54:59.476335721 +0000 UTC m=+1.192989458" May 13 12:55:00.394350 kubelet[2682]: E0513 12:55:00.394298 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:00.394893 kubelet[2682]: E0513 12:55:00.394371 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:00.394893 kubelet[2682]: E0513 12:55:00.394528 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:01.395549 kubelet[2682]: E0513 12:55:01.395515 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:02.795381 sudo[1764]: pam_unix(sudo:session): session closed for user root May 13 12:55:02.796987 sshd[1763]: Connection closed by 10.0.0.1 port 40886 May 13 12:55:02.797427 sshd-session[1761]: pam_unix(sshd:session): session closed for user core May 13 12:55:02.803522 systemd[1]: sshd@6-10.0.0.90:22-10.0.0.1:40886.service: Deactivated successfully. May 13 12:55:02.805919 systemd[1]: session-7.scope: Deactivated successfully. May 13 12:55:02.806260 systemd[1]: session-7.scope: Consumed 4.082s CPU time, 218.7M memory peak. 
May 13 12:55:02.808206 systemd-logind[1539]: Session 7 logged out. Waiting for processes to exit. May 13 12:55:02.810035 systemd-logind[1539]: Removed session 7. May 13 12:55:02.834737 kubelet[2682]: I0513 12:55:02.834705 2682 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 12:55:02.835120 containerd[1557]: time="2025-05-13T12:55:02.834934262Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 12:55:02.835383 kubelet[2682]: I0513 12:55:02.835124 2682 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 12:55:03.500761 systemd[1]: Created slice kubepods-besteffort-podb053765d_eede_4120_b4c7_b0aa53f3b48a.slice - libcontainer container kubepods-besteffort-podb053765d_eede_4120_b4c7_b0aa53f3b48a.slice. May 13 12:55:03.591941 kubelet[2682]: I0513 12:55:03.591892 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b053765d-eede-4120-b4c7-b0aa53f3b48a-xtables-lock\") pod \"kube-proxy-4f8q6\" (UID: \"b053765d-eede-4120-b4c7-b0aa53f3b48a\") " pod="kube-system/kube-proxy-4f8q6" May 13 12:55:03.591941 kubelet[2682]: I0513 12:55:03.591924 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b053765d-eede-4120-b4c7-b0aa53f3b48a-kube-proxy\") pod \"kube-proxy-4f8q6\" (UID: \"b053765d-eede-4120-b4c7-b0aa53f3b48a\") " pod="kube-system/kube-proxy-4f8q6" May 13 12:55:03.591941 kubelet[2682]: I0513 12:55:03.591941 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b053765d-eede-4120-b4c7-b0aa53f3b48a-lib-modules\") pod \"kube-proxy-4f8q6\" (UID: \"b053765d-eede-4120-b4c7-b0aa53f3b48a\") " pod="kube-system/kube-proxy-4f8q6" May 13 12:55:03.592195 kubelet[2682]: I0513 12:55:03.591966 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swcgt\" (UniqueName: \"kubernetes.io/projected/b053765d-eede-4120-b4c7-b0aa53f3b48a-kube-api-access-swcgt\") pod \"kube-proxy-4f8q6\" (UID: \"b053765d-eede-4120-b4c7-b0aa53f3b48a\") " pod="kube-system/kube-proxy-4f8q6" May 13 12:55:03.808539 kubelet[2682]: E0513 12:55:03.808392 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:03.809416 containerd[1557]: time="2025-05-13T12:55:03.809366602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4f8q6,Uid:b053765d-eede-4120-b4c7-b0aa53f3b48a,Namespace:kube-system,Attempt:0,}" May 13 12:55:03.851095 containerd[1557]: time="2025-05-13T12:55:03.851049550Z" level=info msg="connecting to shim ece0d7512cde8921e17e7bf55260c437845fc03a6439793a74775c9d40931a40" address="unix:///run/containerd/s/d91d27610ff2f6acc6a15984f5d2ff2e43ec7e00c7bee07d58edbb68ba19cd86" namespace=k8s.io protocol=ttrpc version=3 May 13 12:55:03.917333 systemd[1]: Started cri-containerd-ece0d7512cde8921e17e7bf55260c437845fc03a6439793a74775c9d40931a40.scope - libcontainer container ece0d7512cde8921e17e7bf55260c437845fc03a6439793a74775c9d40931a40. 
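
The "Created slice kubepods-besteffort-pod…" entries show the naming scheme the kubelet's systemd cgroup driver uses for pod cgroups: QoS class plus pod UID, with the UID's dashes rewritten to underscores because systemd reserves "-" as the hierarchy separator in unit names. The sketch below reconstructs the slice name for the kube-proxy pod straight from the UID in this log; the helper name is illustrative, not the kubelet's.

    // slice_name.go — rebuilds the systemd slice name logged for a pod cgroup.
    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName mirrors the pattern visible in the log: dashes in the pod
    // UID become underscores inside the unit name.
    func podSliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // UID taken from the kube-proxy-4f8q6 volume entries above.
        fmt.Println(podSliceName("besteffort", "b053765d-eede-4120-b4c7-b0aa53f3b48a"))
        // -> kubepods-besteffort-podb053765d_eede_4120_b4c7_b0aa53f3b48a.slice
    }
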
May 13 12:55:03.924838 systemd[1]: Created slice kubepods-besteffort-pod415833d1_4338_4b84_939b_476610a0b609.slice - libcontainer container kubepods-besteffort-pod415833d1_4338_4b84_939b_476610a0b609.slice. May 13 12:55:03.946272 containerd[1557]: time="2025-05-13T12:55:03.946235917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4f8q6,Uid:b053765d-eede-4120-b4c7-b0aa53f3b48a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ece0d7512cde8921e17e7bf55260c437845fc03a6439793a74775c9d40931a40\"" May 13 12:55:03.946996 kubelet[2682]: E0513 12:55:03.946971 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:03.953534 containerd[1557]: time="2025-05-13T12:55:03.953485664Z" level=info msg="CreateContainer within sandbox \"ece0d7512cde8921e17e7bf55260c437845fc03a6439793a74775c9d40931a40\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 12:55:03.965385 containerd[1557]: time="2025-05-13T12:55:03.965342384Z" level=info msg="Container 0e63638bf3a57e876b038a86835204682227fbf3875ee579a85ebbb082434e9a: CDI devices from CRI Config.CDIDevices: []" May 13 12:55:03.974027 containerd[1557]: time="2025-05-13T12:55:03.973967349Z" level=info msg="CreateContainer within sandbox \"ece0d7512cde8921e17e7bf55260c437845fc03a6439793a74775c9d40931a40\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0e63638bf3a57e876b038a86835204682227fbf3875ee579a85ebbb082434e9a\"" May 13 12:55:03.974670 containerd[1557]: time="2025-05-13T12:55:03.974609720Z" level=info msg="StartContainer for \"0e63638bf3a57e876b038a86835204682227fbf3875ee579a85ebbb082434e9a\"" May 13 12:55:03.976299 containerd[1557]: time="2025-05-13T12:55:03.976260449Z" level=info msg="connecting to shim 0e63638bf3a57e876b038a86835204682227fbf3875ee579a85ebbb082434e9a" address="unix:///run/containerd/s/d91d27610ff2f6acc6a15984f5d2ff2e43ec7e00c7bee07d58edbb68ba19cd86" protocol=ttrpc version=3 May 13 12:55:03.995023 kubelet[2682]: I0513 12:55:03.994981 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fb7k\" (UniqueName: \"kubernetes.io/projected/415833d1-4338-4b84-939b-476610a0b609-kube-api-access-5fb7k\") pod \"tigera-operator-789496d6f5-xvhdn\" (UID: \"415833d1-4338-4b84-939b-476610a0b609\") " pod="tigera-operator/tigera-operator-789496d6f5-xvhdn" May 13 12:55:03.995153 kubelet[2682]: I0513 12:55:03.995025 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/415833d1-4338-4b84-939b-476610a0b609-var-lib-calico\") pod \"tigera-operator-789496d6f5-xvhdn\" (UID: \"415833d1-4338-4b84-939b-476610a0b609\") " pod="tigera-operator/tigera-operator-789496d6f5-xvhdn" May 13 12:55:04.004298 systemd[1]: Started cri-containerd-0e63638bf3a57e876b038a86835204682227fbf3875ee579a85ebbb082434e9a.scope - libcontainer container 0e63638bf3a57e876b038a86835204682227fbf3875ee579a85ebbb082434e9a. 
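
The RunPodSandbox → CreateContainer → StartContainer sequence here is the kubelet driving containerd over the CRI gRPC surface (the "connecting to shim … protocol=ttrpc version=3" lines are containerd in turn talking to its per-container shim). A hedged sketch of querying that same API from Go follows, using the published k8s.io/cri-api client; the socket path is containerd's default and grpc.NewClient assumes a recent grpc-go, both of which may differ on another setup.

    // cri_list.go — connect to containerd's CRI endpoint and list pod
    // sandboxes, the same RPC surface behind the RunPodSandbox entries above.
    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := rt.ListPodSandbox(context.Background(), &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, s := range resp.Items {
            fmt.Printf("%.12s  %s/%s  %s\n", s.Id, s.Metadata.Namespace, s.Metadata.Name, s.State)
        }
    }
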
May 13 12:55:04.048783 containerd[1557]: time="2025-05-13T12:55:04.048748310Z" level=info msg="StartContainer for \"0e63638bf3a57e876b038a86835204682227fbf3875ee579a85ebbb082434e9a\" returns successfully" May 13 12:55:04.228415 containerd[1557]: time="2025-05-13T12:55:04.228364295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-xvhdn,Uid:415833d1-4338-4b84-939b-476610a0b609,Namespace:tigera-operator,Attempt:0,}" May 13 12:55:04.250283 containerd[1557]: time="2025-05-13T12:55:04.250223053Z" level=info msg="connecting to shim 8e763960a3653198a44bee92884f2c37ffc014bcd3b202a73cd52d629594e5d1" address="unix:///run/containerd/s/2264adebc6188cb0ab66e48122a2a59389d7a337a69730a5ed8b182d9c8b23cc" namespace=k8s.io protocol=ttrpc version=3 May 13 12:55:04.274298 systemd[1]: Started cri-containerd-8e763960a3653198a44bee92884f2c37ffc014bcd3b202a73cd52d629594e5d1.scope - libcontainer container 8e763960a3653198a44bee92884f2c37ffc014bcd3b202a73cd52d629594e5d1. May 13 12:55:04.289234 kubelet[2682]: E0513 12:55:04.289169 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:04.323671 containerd[1557]: time="2025-05-13T12:55:04.323607735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-xvhdn,Uid:415833d1-4338-4b84-939b-476610a0b609,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8e763960a3653198a44bee92884f2c37ffc014bcd3b202a73cd52d629594e5d1\"" May 13 12:55:04.325247 containerd[1557]: time="2025-05-13T12:55:04.325133077Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 12:55:04.404686 kubelet[2682]: E0513 12:55:04.404462 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:04.406116 kubelet[2682]: E0513 12:55:04.406081 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:04.417862 kubelet[2682]: I0513 12:55:04.417807 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4f8q6" podStartSLOduration=1.417791392 podStartE2EDuration="1.417791392s" podCreationTimestamp="2025-05-13 12:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:55:04.417357344 +0000 UTC m=+6.134011082" watchObservedRunningTime="2025-05-13 12:55:04.417791392 +0000 UTC m=+6.134445129" May 13 12:55:06.112794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4273849625.mount: Deactivated successfully. 
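
The pod_startup_latency_tracker entry above reports podStartSLOduration=1.417791392s for kube-proxy with zero-value pull timestamps (the image was already present, so there is no pull time to exclude). The value is plainly the gap between podCreationTimestamp and the observed running time; the quick check below recomputes it from the two timestamps in the entry, assuming that subtraction is all the tracker does in the no-pull case.

    // slo_duration.go — recompute kube-proxy's podStartSLOduration from the
    // timestamps printed above. The layout string is Go's time.Time default
    // String() format, which these log fields use.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-05-13 12:55:03 +0000 UTC")
        observed, _ := time.Parse(layout, "2025-05-13 12:55:04.417791392 +0000 UTC")
        fmt.Println(observed.Sub(created)) // 1.417791392s — matches the log
    }
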
May 13 12:55:06.426521 containerd[1557]: time="2025-05-13T12:55:06.426452581Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:06.427262 containerd[1557]: time="2025-05-13T12:55:06.427228242Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 13 12:55:06.428405 containerd[1557]: time="2025-05-13T12:55:06.428373012Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:06.430834 containerd[1557]: time="2025-05-13T12:55:06.430803635Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:06.431529 containerd[1557]: time="2025-05-13T12:55:06.431473383Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.106265941s" May 13 12:55:06.431529 containerd[1557]: time="2025-05-13T12:55:06.431508539Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 13 12:55:06.433379 containerd[1557]: time="2025-05-13T12:55:06.433341584Z" level=info msg="CreateContainer within sandbox \"8e763960a3653198a44bee92884f2c37ffc014bcd3b202a73cd52d629594e5d1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 12:55:06.442830 containerd[1557]: time="2025-05-13T12:55:06.442776280Z" level=info msg="Container 4fd6751333504e09f8fe36340d09832db688df9b3a0d5488da35854cfd0f3f88: CDI devices from CRI Config.CDIDevices: []" May 13 12:55:06.449980 containerd[1557]: time="2025-05-13T12:55:06.449937494Z" level=info msg="CreateContainer within sandbox \"8e763960a3653198a44bee92884f2c37ffc014bcd3b202a73cd52d629594e5d1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4fd6751333504e09f8fe36340d09832db688df9b3a0d5488da35854cfd0f3f88\"" May 13 12:55:06.450507 containerd[1557]: time="2025-05-13T12:55:06.450472954Z" level=info msg="StartContainer for \"4fd6751333504e09f8fe36340d09832db688df9b3a0d5488da35854cfd0f3f88\"" May 13 12:55:06.451530 containerd[1557]: time="2025-05-13T12:55:06.451480571Z" level=info msg="connecting to shim 4fd6751333504e09f8fe36340d09832db688df9b3a0d5488da35854cfd0f3f88" address="unix:///run/containerd/s/2264adebc6188cb0ab66e48122a2a59389d7a337a69730a5ed8b182d9c8b23cc" protocol=ttrpc version=3 May 13 12:55:06.486314 systemd[1]: Started cri-containerd-4fd6751333504e09f8fe36340d09832db688df9b3a0d5488da35854cfd0f3f88.scope - libcontainer container 4fd6751333504e09f8fe36340d09832db688df9b3a0d5488da35854cfd0f3f88. 
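
The ImageCreate/Pulled events above are containerd resolving quay.io/tigera/operator:v1.36.7 and recording both the repo tag and the immutable digest reference. A hedged sketch of issuing the same pull through containerd's Go client follows; the import path shown is the pre-2.0 client (containerd 2.x, as running here, moved it to github.com/containerd/containerd/v2/client), and the socket path is the stock default.

    // pull_image.go — pull the tigera-operator image via containerd's Go
    // client, in the k8s.io namespace that CRI pods live in.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.36.7", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
    }
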
May 13 12:55:06.517514 containerd[1557]: time="2025-05-13T12:55:06.517474006Z" level=info msg="StartContainer for \"4fd6751333504e09f8fe36340d09832db688df9b3a0d5488da35854cfd0f3f88\" returns successfully" May 13 12:55:07.972810 kubelet[2682]: E0513 12:55:07.972755 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:08.059818 kubelet[2682]: I0513 12:55:08.059751 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-xvhdn" podStartSLOduration=2.952172671 podStartE2EDuration="5.059733084s" podCreationTimestamp="2025-05-13 12:55:03 +0000 UTC" firstStartedPulling="2025-05-13 12:55:04.324715953 +0000 UTC m=+6.041369690" lastFinishedPulling="2025-05-13 12:55:06.432276366 +0000 UTC m=+8.148930103" observedRunningTime="2025-05-13 12:55:07.507589914 +0000 UTC m=+9.224243651" watchObservedRunningTime="2025-05-13 12:55:08.059733084 +0000 UTC m=+9.776386821" May 13 12:55:08.414019 kubelet[2682]: E0513 12:55:08.413985 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:09.447356 systemd[1]: Created slice kubepods-besteffort-pod41101600_76dc_4e45_a085_834d20adad33.slice - libcontainer container kubepods-besteffort-pod41101600_76dc_4e45_a085_834d20adad33.slice. May 13 12:55:09.457091 systemd[1]: Created slice kubepods-besteffort-pod31f9c550_0d42_4e05_9662_72cf1b1971e6.slice - libcontainer container kubepods-besteffort-pod31f9c550_0d42_4e05_9662_72cf1b1971e6.slice. May 13 12:55:09.502200 kubelet[2682]: E0513 12:55:09.501898 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct2sc" podUID="99af3312-c9d6-477a-83b3-e903dd409646" May 13 12:55:09.528809 kubelet[2682]: I0513 12:55:09.528768 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/41101600-76dc-4e45-a085-834d20adad33-typha-certs\") pod \"calico-typha-6765fcb49c-ngh69\" (UID: \"41101600-76dc-4e45-a085-834d20adad33\") " pod="calico-system/calico-typha-6765fcb49c-ngh69" May 13 12:55:09.528809 kubelet[2682]: I0513 12:55:09.528811 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-lib-modules\") pod \"calico-node-87fvd\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " pod="calico-system/calico-node-87fvd" May 13 12:55:09.529081 kubelet[2682]: I0513 12:55:09.528827 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-cni-net-dir\") pod \"calico-node-87fvd\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " pod="calico-system/calico-node-87fvd" May 13 12:55:09.529081 kubelet[2682]: I0513 12:55:09.528887 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-flexvol-driver-host\") pod 
\"calico-node-87fvd\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " pod="calico-system/calico-node-87fvd" May 13 12:55:09.529081 kubelet[2682]: I0513 12:55:09.528904 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/99af3312-c9d6-477a-83b3-e903dd409646-varrun\") pod \"csi-node-driver-ct2sc\" (UID: \"99af3312-c9d6-477a-83b3-e903dd409646\") " pod="calico-system/csi-node-driver-ct2sc" May 13 12:55:09.529081 kubelet[2682]: I0513 12:55:09.528919 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-xtables-lock\") pod \"calico-node-87fvd\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " pod="calico-system/calico-node-87fvd" May 13 12:55:09.529081 kubelet[2682]: I0513 12:55:09.529019 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31f9c550-0d42-4e05-9662-72cf1b1971e6-tigera-ca-bundle\") pod \"calico-node-87fvd\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " pod="calico-system/calico-node-87fvd" May 13 12:55:09.529312 kubelet[2682]: I0513 12:55:09.529060 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-var-lib-calico\") pod \"calico-node-87fvd\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " pod="calico-system/calico-node-87fvd" May 13 12:55:09.529312 kubelet[2682]: I0513 12:55:09.529075 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/99af3312-c9d6-477a-83b3-e903dd409646-registration-dir\") pod \"csi-node-driver-ct2sc\" (UID: \"99af3312-c9d6-477a-83b3-e903dd409646\") " pod="calico-system/csi-node-driver-ct2sc" May 13 12:55:09.529312 kubelet[2682]: I0513 12:55:09.529102 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-var-run-calico\") pod \"calico-node-87fvd\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " pod="calico-system/calico-node-87fvd" May 13 12:55:09.529312 kubelet[2682]: I0513 12:55:09.529119 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvn76\" (UniqueName: \"kubernetes.io/projected/31f9c550-0d42-4e05-9662-72cf1b1971e6-kube-api-access-xvn76\") pod \"calico-node-87fvd\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " pod="calico-system/calico-node-87fvd" May 13 12:55:09.529312 kubelet[2682]: I0513 12:55:09.529167 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99af3312-c9d6-477a-83b3-e903dd409646-kubelet-dir\") pod \"csi-node-driver-ct2sc\" (UID: \"99af3312-c9d6-477a-83b3-e903dd409646\") " pod="calico-system/csi-node-driver-ct2sc" May 13 12:55:09.529549 kubelet[2682]: I0513 12:55:09.529201 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-cni-bin-dir\") pod \"calico-node-87fvd\" (UID: 
\"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " pod="calico-system/calico-node-87fvd" May 13 12:55:09.529549 kubelet[2682]: I0513 12:55:09.529242 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-cni-log-dir\") pod \"calico-node-87fvd\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " pod="calico-system/calico-node-87fvd" May 13 12:55:09.529549 kubelet[2682]: I0513 12:55:09.529262 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m27np\" (UniqueName: \"kubernetes.io/projected/99af3312-c9d6-477a-83b3-e903dd409646-kube-api-access-m27np\") pod \"csi-node-driver-ct2sc\" (UID: \"99af3312-c9d6-477a-83b3-e903dd409646\") " pod="calico-system/csi-node-driver-ct2sc" May 13 12:55:09.529549 kubelet[2682]: I0513 12:55:09.529325 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/99af3312-c9d6-477a-83b3-e903dd409646-socket-dir\") pod \"csi-node-driver-ct2sc\" (UID: \"99af3312-c9d6-477a-83b3-e903dd409646\") " pod="calico-system/csi-node-driver-ct2sc" May 13 12:55:09.529549 kubelet[2682]: I0513 12:55:09.529340 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41101600-76dc-4e45-a085-834d20adad33-tigera-ca-bundle\") pod \"calico-typha-6765fcb49c-ngh69\" (UID: \"41101600-76dc-4e45-a085-834d20adad33\") " pod="calico-system/calico-typha-6765fcb49c-ngh69" May 13 12:55:09.529710 kubelet[2682]: I0513 12:55:09.529353 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-policysync\") pod \"calico-node-87fvd\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " pod="calico-system/calico-node-87fvd" May 13 12:55:09.529710 kubelet[2682]: I0513 12:55:09.529396 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66979\" (UniqueName: \"kubernetes.io/projected/41101600-76dc-4e45-a085-834d20adad33-kube-api-access-66979\") pod \"calico-typha-6765fcb49c-ngh69\" (UID: \"41101600-76dc-4e45-a085-834d20adad33\") " pod="calico-system/calico-typha-6765fcb49c-ngh69" May 13 12:55:09.529710 kubelet[2682]: I0513 12:55:09.529410 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/31f9c550-0d42-4e05-9662-72cf1b1971e6-node-certs\") pod \"calico-node-87fvd\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " pod="calico-system/calico-node-87fvd" May 13 12:55:09.632631 kubelet[2682]: E0513 12:55:09.632486 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.632631 kubelet[2682]: W0513 12:55:09.632508 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.632631 kubelet[2682]: E0513 12:55:09.632540 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:55:09.632837 kubelet[2682]: E0513 12:55:09.632717 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.632837 kubelet[2682]: W0513 12:55:09.632724 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.632837 kubelet[2682]: E0513 12:55:09.632790 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:09.632910 kubelet[2682]: E0513 12:55:09.632882 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.632910 kubelet[2682]: W0513 12:55:09.632889 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.632955 kubelet[2682]: E0513 12:55:09.632919 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:09.633380 kubelet[2682]: E0513 12:55:09.633199 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.633380 kubelet[2682]: W0513 12:55:09.633211 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.633380 kubelet[2682]: E0513 12:55:09.633255 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:09.633837 kubelet[2682]: E0513 12:55:09.633820 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.633837 kubelet[2682]: W0513 12:55:09.633833 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.635931 kubelet[2682]: E0513 12:55:09.635814 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.635931 kubelet[2682]: W0513 12:55:09.635830 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.635931 kubelet[2682]: E0513 12:55:09.635926 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:09.636022 kubelet[2682]: E0513 12:55:09.635936 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:55:09.637085 kubelet[2682]: E0513 12:55:09.637073 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.637735 kubelet[2682]: W0513 12:55:09.637622 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.638193 kubelet[2682]: E0513 12:55:09.638009 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:09.638193 kubelet[2682]: E0513 12:55:09.638061 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.638193 kubelet[2682]: W0513 12:55:09.638071 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.638433 kubelet[2682]: E0513 12:55:09.638419 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:09.640414 kubelet[2682]: E0513 12:55:09.640371 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.640414 kubelet[2682]: W0513 12:55:09.640410 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.641027 kubelet[2682]: E0513 12:55:09.640504 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:09.641068 kubelet[2682]: E0513 12:55:09.641059 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.641097 kubelet[2682]: W0513 12:55:09.641068 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.641244 kubelet[2682]: E0513 12:55:09.641195 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:09.641341 kubelet[2682]: E0513 12:55:09.641303 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.641341 kubelet[2682]: W0513 12:55:09.641316 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.641514 kubelet[2682]: E0513 12:55:09.641503 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:55:09.641902 kubelet[2682]: E0513 12:55:09.641885 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.641981 kubelet[2682]: W0513 12:55:09.641960 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.642069 kubelet[2682]: E0513 12:55:09.642060 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:09.645457 kubelet[2682]: E0513 12:55:09.645444 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.645588 kubelet[2682]: W0513 12:55:09.645505 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.645588 kubelet[2682]: E0513 12:55:09.645529 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:09.645741 kubelet[2682]: E0513 12:55:09.645731 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.645801 kubelet[2682]: W0513 12:55:09.645791 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.645882 kubelet[2682]: E0513 12:55:09.645872 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:09.646765 kubelet[2682]: E0513 12:55:09.646439 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.646824 kubelet[2682]: W0513 12:55:09.646811 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.646890 kubelet[2682]: E0513 12:55:09.646879 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:09.647094 kubelet[2682]: E0513 12:55:09.647084 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.647174 kubelet[2682]: W0513 12:55:09.647163 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.647236 kubelet[2682]: E0513 12:55:09.647226 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:55:09.647413 kubelet[2682]: E0513 12:55:09.647400 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.647413 kubelet[2682]: W0513 12:55:09.647411 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.647466 kubelet[2682]: E0513 12:55:09.647425 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:09.647599 kubelet[2682]: E0513 12:55:09.647588 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.647599 kubelet[2682]: W0513 12:55:09.647597 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.647648 kubelet[2682]: E0513 12:55:09.647608 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:09.649466 kubelet[2682]: E0513 12:55:09.649285 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.649466 kubelet[2682]: W0513 12:55:09.649310 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.649466 kubelet[2682]: E0513 12:55:09.649326 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:09.649566 kubelet[2682]: E0513 12:55:09.649561 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.649597 kubelet[2682]: W0513 12:55:09.649570 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.649597 kubelet[2682]: E0513 12:55:09.649580 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:09.650317 kubelet[2682]: E0513 12:55:09.650283 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.650317 kubelet[2682]: W0513 12:55:09.650302 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.650317 kubelet[2682]: E0513 12:55:09.650311 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:55:09.651189 kubelet[2682]: E0513 12:55:09.651104 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:09.651189 kubelet[2682]: W0513 12:55:09.651119 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:09.651189 kubelet[2682]: E0513 12:55:09.651170 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:09.753383 kubelet[2682]: E0513 12:55:09.753271 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:09.754548 containerd[1557]: time="2025-05-13T12:55:09.754509387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6765fcb49c-ngh69,Uid:41101600-76dc-4e45-a085-834d20adad33,Namespace:calico-system,Attempt:0,}" May 13 12:55:09.759942 kubelet[2682]: E0513 12:55:09.759911 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:09.760474 containerd[1557]: time="2025-05-13T12:55:09.760442884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-87fvd,Uid:31f9c550-0d42-4e05-9662-72cf1b1971e6,Namespace:calico-system,Attempt:0,}" May 13 12:55:09.780718 containerd[1557]: time="2025-05-13T12:55:09.780660041Z" level=info msg="connecting to shim 12f12437df2e1ab03efc310ccc28ab51ec139b1ab4d53b16c744cf35b95354d5" address="unix:///run/containerd/s/1b990346ec76d2e439961c422cddbf8c7f001f7ed45081afd6477233b48e7d7e" namespace=k8s.io protocol=ttrpc version=3 May 13 12:55:09.791197 containerd[1557]: time="2025-05-13T12:55:09.791156601Z" level=info msg="connecting to shim f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116" address="unix:///run/containerd/s/5adccc43415e65af493194268ea3c18184a6f53ebec82c0ff497cf0bcf361db6" namespace=k8s.io protocol=ttrpc version=3 May 13 12:55:09.810380 systemd[1]: Started cri-containerd-12f12437df2e1ab03efc310ccc28ab51ec139b1ab4d53b16c744cf35b95354d5.scope - libcontainer container 12f12437df2e1ab03efc310ccc28ab51ec139b1ab4d53b16c744cf35b95354d5. May 13 12:55:09.814522 systemd[1]: Started cri-containerd-f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116.scope - libcontainer container f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116. 
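
The driver-call.go error bursts above all share one root cause: the kubelet probes every directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ by executing the driver binary with an `init` argument, the nodeagent~uds/uds binary is absent, so the call yields empty output and the mandatory JSON reply fails to parse ("unexpected end of JSON input"). Under the FlexVolume spec the driver must print a JSON status object; the stub below is a minimal stand-in that would satisfy the `init` probe — it is not the real Calico/Istio uds driver, whose other verbs are out of scope here.

    // uds_stub.go — minimal FlexVolume driver stub answering the kubelet's
    // `init` probe with the JSON reply whose absence causes the repeated
    // "unexpected end of JSON input" errors above.
    package main

    import (
        "encoding/json"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        out := json.NewEncoder(os.Stdout)
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // attach=false: this driver has no separate attach/detach phase.
            out.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
            return
        }
        // Every other verb must still answer with valid JSON per the spec.
        out.Encode(driverStatus{Status: "Not supported"})
    }
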
May 13 12:55:09.870112 containerd[1557]: time="2025-05-13T12:55:09.870059645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-87fvd,Uid:31f9c550-0d42-4e05-9662-72cf1b1971e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\"" May 13 12:55:09.870736 kubelet[2682]: E0513 12:55:09.870710 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:09.871795 containerd[1557]: time="2025-05-13T12:55:09.871759499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 12:55:09.875114 containerd[1557]: time="2025-05-13T12:55:09.875085595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6765fcb49c-ngh69,Uid:41101600-76dc-4e45-a085-834d20adad33,Namespace:calico-system,Attempt:0,} returns sandbox id \"12f12437df2e1ab03efc310ccc28ab51ec139b1ab4d53b16c744cf35b95354d5\"" May 13 12:55:09.875900 kubelet[2682]: E0513 12:55:09.875883 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:10.092481 kubelet[2682]: E0513 12:55:10.092259 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:10.123662 kubelet[2682]: E0513 12:55:10.123622 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.123662 kubelet[2682]: W0513 12:55:10.123652 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.123834 kubelet[2682]: E0513 12:55:10.123677 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.123871 kubelet[2682]: E0513 12:55:10.123862 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.123896 kubelet[2682]: W0513 12:55:10.123871 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.123896 kubelet[2682]: E0513 12:55:10.123881 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.124110 kubelet[2682]: E0513 12:55:10.124083 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.124110 kubelet[2682]: W0513 12:55:10.124096 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.124110 kubelet[2682]: E0513 12:55:10.124105 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:55:10.124390 kubelet[2682]: E0513 12:55:10.124370 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.124390 kubelet[2682]: W0513 12:55:10.124383 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.124444 kubelet[2682]: E0513 12:55:10.124393 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.124786 kubelet[2682]: E0513 12:55:10.124736 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.124786 kubelet[2682]: W0513 12:55:10.124760 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.124786 kubelet[2682]: E0513 12:55:10.124784 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.125028 kubelet[2682]: E0513 12:55:10.125011 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.125028 kubelet[2682]: W0513 12:55:10.125018 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.125028 kubelet[2682]: E0513 12:55:10.125026 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.125220 kubelet[2682]: E0513 12:55:10.125190 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.125220 kubelet[2682]: W0513 12:55:10.125207 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.125220 kubelet[2682]: E0513 12:55:10.125215 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.125462 kubelet[2682]: E0513 12:55:10.125444 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.125462 kubelet[2682]: W0513 12:55:10.125458 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.125544 kubelet[2682]: E0513 12:55:10.125470 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:55:10.125684 kubelet[2682]: E0513 12:55:10.125661 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.125684 kubelet[2682]: W0513 12:55:10.125674 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.125748 kubelet[2682]: E0513 12:55:10.125683 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.125838 kubelet[2682]: E0513 12:55:10.125826 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.125838 kubelet[2682]: W0513 12:55:10.125834 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.125971 kubelet[2682]: E0513 12:55:10.125841 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.126005 kubelet[2682]: E0513 12:55:10.125992 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.126005 kubelet[2682]: W0513 12:55:10.125998 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.126064 kubelet[2682]: E0513 12:55:10.126005 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.126178 kubelet[2682]: E0513 12:55:10.126164 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.126178 kubelet[2682]: W0513 12:55:10.126173 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.126250 kubelet[2682]: E0513 12:55:10.126179 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.126357 kubelet[2682]: E0513 12:55:10.126345 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.126357 kubelet[2682]: W0513 12:55:10.126355 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.126417 kubelet[2682]: E0513 12:55:10.126362 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:55:10.126517 kubelet[2682]: E0513 12:55:10.126505 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.126517 kubelet[2682]: W0513 12:55:10.126513 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.126583 kubelet[2682]: E0513 12:55:10.126520 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.126682 kubelet[2682]: E0513 12:55:10.126670 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.126682 kubelet[2682]: W0513 12:55:10.126678 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.126748 kubelet[2682]: E0513 12:55:10.126685 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.417903 kubelet[2682]: E0513 12:55:10.417866 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:10.430019 kubelet[2682]: E0513 12:55:10.429980 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.430019 kubelet[2682]: W0513 12:55:10.430007 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.430177 kubelet[2682]: E0513 12:55:10.430036 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.430323 kubelet[2682]: E0513 12:55:10.430310 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.430323 kubelet[2682]: W0513 12:55:10.430321 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.430373 kubelet[2682]: E0513 12:55:10.430332 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:55:10.430540 kubelet[2682]: E0513 12:55:10.430527 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.430540 kubelet[2682]: W0513 12:55:10.430536 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.430596 kubelet[2682]: E0513 12:55:10.430544 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.430727 kubelet[2682]: E0513 12:55:10.430715 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.430727 kubelet[2682]: W0513 12:55:10.430724 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.430776 kubelet[2682]: E0513 12:55:10.430733 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.430923 kubelet[2682]: E0513 12:55:10.430911 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.430923 kubelet[2682]: W0513 12:55:10.430921 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.430965 kubelet[2682]: E0513 12:55:10.430928 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.431097 kubelet[2682]: E0513 12:55:10.431085 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.431123 kubelet[2682]: W0513 12:55:10.431095 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.431123 kubelet[2682]: E0513 12:55:10.431104 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.431321 kubelet[2682]: E0513 12:55:10.431309 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.431321 kubelet[2682]: W0513 12:55:10.431318 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.431366 kubelet[2682]: E0513 12:55:10.431326 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:55:10.431502 kubelet[2682]: E0513 12:55:10.431491 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.431502 kubelet[2682]: W0513 12:55:10.431500 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.431549 kubelet[2682]: E0513 12:55:10.431509 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.431689 kubelet[2682]: E0513 12:55:10.431677 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.431689 kubelet[2682]: W0513 12:55:10.431686 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.431731 kubelet[2682]: E0513 12:55:10.431695 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.431864 kubelet[2682]: E0513 12:55:10.431852 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.431864 kubelet[2682]: W0513 12:55:10.431862 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.431905 kubelet[2682]: E0513 12:55:10.431869 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.432077 kubelet[2682]: E0513 12:55:10.432066 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.432077 kubelet[2682]: W0513 12:55:10.432075 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.432122 kubelet[2682]: E0513 12:55:10.432083 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.432282 kubelet[2682]: E0513 12:55:10.432271 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.432282 kubelet[2682]: W0513 12:55:10.432279 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.432335 kubelet[2682]: E0513 12:55:10.432287 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:55:10.432452 kubelet[2682]: E0513 12:55:10.432442 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.432452 kubelet[2682]: W0513 12:55:10.432450 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.432494 kubelet[2682]: E0513 12:55:10.432456 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.432616 kubelet[2682]: E0513 12:55:10.432606 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.432616 kubelet[2682]: W0513 12:55:10.432614 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.432657 kubelet[2682]: E0513 12:55:10.432620 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:55:10.432778 kubelet[2682]: E0513 12:55:10.432766 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:55:10.432778 kubelet[2682]: W0513 12:55:10.432776 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:55:10.432825 kubelet[2682]: E0513 12:55:10.432784 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:55:11.379993 kubelet[2682]: E0513 12:55:11.379931 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct2sc" podUID="99af3312-c9d6-477a-83b3-e903dd409646" May 13 12:55:11.930333 containerd[1557]: time="2025-05-13T12:55:11.930247194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:11.931041 containerd[1557]: time="2025-05-13T12:55:11.931000684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 13 12:55:11.932276 containerd[1557]: time="2025-05-13T12:55:11.932236323Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:11.934423 containerd[1557]: time="2025-05-13T12:55:11.934383132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:11.935038 containerd[1557]: time="2025-05-13T12:55:11.934996874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.063204562s" May 13 12:55:11.935089 containerd[1557]: time="2025-05-13T12:55:11.935039144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 13 12:55:11.937157 containerd[1557]: time="2025-05-13T12:55:11.937005490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 12:55:11.938302 containerd[1557]: time="2025-05-13T12:55:11.938269272Z" level=info msg="CreateContainer within sandbox \"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 12:55:11.947740 containerd[1557]: time="2025-05-13T12:55:11.947689239Z" level=info msg="Container 51d8de80c24a4165b9f870a7fc4435fc9a3cc1f01ae38aca45cb798cadfaab31: CDI devices from CRI Config.CDIDevices: []" May 13 12:55:11.956191 containerd[1557]: time="2025-05-13T12:55:11.956130416Z" level=info msg="CreateContainer within sandbox \"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"51d8de80c24a4165b9f870a7fc4435fc9a3cc1f01ae38aca45cb798cadfaab31\"" May 13 12:55:11.956777 containerd[1557]: time="2025-05-13T12:55:11.956707258Z" level=info msg="StartContainer for \"51d8de80c24a4165b9f870a7fc4435fc9a3cc1f01ae38aca45cb798cadfaab31\"" May 13 12:55:11.958390 containerd[1557]: time="2025-05-13T12:55:11.958364562Z" level=info msg="connecting to shim 51d8de80c24a4165b9f870a7fc4435fc9a3cc1f01ae38aca45cb798cadfaab31" 
address="unix:///run/containerd/s/5adccc43415e65af493194268ea3c18184a6f53ebec82c0ff497cf0bcf361db6" protocol=ttrpc version=3 May 13 12:55:11.984304 systemd[1]: Started cri-containerd-51d8de80c24a4165b9f870a7fc4435fc9a3cc1f01ae38aca45cb798cadfaab31.scope - libcontainer container 51d8de80c24a4165b9f870a7fc4435fc9a3cc1f01ae38aca45cb798cadfaab31. May 13 12:55:12.028470 containerd[1557]: time="2025-05-13T12:55:12.028366838Z" level=info msg="StartContainer for \"51d8de80c24a4165b9f870a7fc4435fc9a3cc1f01ae38aca45cb798cadfaab31\" returns successfully" May 13 12:55:12.038945 systemd[1]: cri-containerd-51d8de80c24a4165b9f870a7fc4435fc9a3cc1f01ae38aca45cb798cadfaab31.scope: Deactivated successfully. May 13 12:55:12.041061 containerd[1557]: time="2025-05-13T12:55:12.041009221Z" level=info msg="received exit event container_id:\"51d8de80c24a4165b9f870a7fc4435fc9a3cc1f01ae38aca45cb798cadfaab31\" id:\"51d8de80c24a4165b9f870a7fc4435fc9a3cc1f01ae38aca45cb798cadfaab31\" pid:3252 exited_at:{seconds:1747140912 nanos:40667639}" May 13 12:55:12.041220 containerd[1557]: time="2025-05-13T12:55:12.041104632Z" level=info msg="TaskExit event in podsandbox handler container_id:\"51d8de80c24a4165b9f870a7fc4435fc9a3cc1f01ae38aca45cb798cadfaab31\" id:\"51d8de80c24a4165b9f870a7fc4435fc9a3cc1f01ae38aca45cb798cadfaab31\" pid:3252 exited_at:{seconds:1747140912 nanos:40667639}" May 13 12:55:12.068469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51d8de80c24a4165b9f870a7fc4435fc9a3cc1f01ae38aca45cb798cadfaab31-rootfs.mount: Deactivated successfully. May 13 12:55:12.422818 kubelet[2682]: E0513 12:55:12.422780 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:13.379306 kubelet[2682]: E0513 12:55:13.379270 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct2sc" podUID="99af3312-c9d6-477a-83b3-e903dd409646" May 13 12:55:15.376572 containerd[1557]: time="2025-05-13T12:55:15.376524882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:15.377398 containerd[1557]: time="2025-05-13T12:55:15.377356493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 13 12:55:15.378548 containerd[1557]: time="2025-05-13T12:55:15.378506731Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:15.379257 kubelet[2682]: E0513 12:55:15.379203 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct2sc" podUID="99af3312-c9d6-477a-83b3-e903dd409646" May 13 12:55:15.380672 containerd[1557]: time="2025-05-13T12:55:15.380627464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:15.381670 containerd[1557]: 
time="2025-05-13T12:55:15.381594153Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.444551153s" May 13 12:55:15.381670 containerd[1557]: time="2025-05-13T12:55:15.381646742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 13 12:55:15.382577 containerd[1557]: time="2025-05-13T12:55:15.382477963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 12:55:15.391373 containerd[1557]: time="2025-05-13T12:55:15.391318295Z" level=info msg="CreateContainer within sandbox \"12f12437df2e1ab03efc310ccc28ab51ec139b1ab4d53b16c744cf35b95354d5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 12:55:15.400247 containerd[1557]: time="2025-05-13T12:55:15.400214243Z" level=info msg="Container d951754426990a498276e450ca6c88f3546c61dea5cba3d8cabdfa159473afd2: CDI devices from CRI Config.CDIDevices: []" May 13 12:55:15.407264 containerd[1557]: time="2025-05-13T12:55:15.407230536Z" level=info msg="CreateContainer within sandbox \"12f12437df2e1ab03efc310ccc28ab51ec139b1ab4d53b16c744cf35b95354d5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d951754426990a498276e450ca6c88f3546c61dea5cba3d8cabdfa159473afd2\"" May 13 12:55:15.407748 containerd[1557]: time="2025-05-13T12:55:15.407690230Z" level=info msg="StartContainer for \"d951754426990a498276e450ca6c88f3546c61dea5cba3d8cabdfa159473afd2\"" May 13 12:55:15.408687 containerd[1557]: time="2025-05-13T12:55:15.408596734Z" level=info msg="connecting to shim d951754426990a498276e450ca6c88f3546c61dea5cba3d8cabdfa159473afd2" address="unix:///run/containerd/s/1b990346ec76d2e439961c422cddbf8c7f001f7ed45081afd6477233b48e7d7e" protocol=ttrpc version=3 May 13 12:55:15.438250 systemd[1]: Started cri-containerd-d951754426990a498276e450ca6c88f3546c61dea5cba3d8cabdfa159473afd2.scope - libcontainer container d951754426990a498276e450ca6c88f3546c61dea5cba3d8cabdfa159473afd2. May 13 12:55:15.484288 containerd[1557]: time="2025-05-13T12:55:15.484242803Z" level=info msg="StartContainer for \"d951754426990a498276e450ca6c88f3546c61dea5cba3d8cabdfa159473afd2\" returns successfully" May 13 12:55:15.909673 update_engine[1541]: I20250513 12:55:15.909616 1541 update_attempter.cc:509] Updating boot flags... 
May 13 12:55:16.433022 kubelet[2682]: E0513 12:55:16.432990 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:16.443718 kubelet[2682]: I0513 12:55:16.443483 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6765fcb49c-ngh69" podStartSLOduration=1.93754431 podStartE2EDuration="7.443469712s" podCreationTimestamp="2025-05-13 12:55:09 +0000 UTC" firstStartedPulling="2025-05-13 12:55:09.876432343 +0000 UTC m=+11.593086080" lastFinishedPulling="2025-05-13 12:55:15.382357745 +0000 UTC m=+17.099011482" observedRunningTime="2025-05-13 12:55:16.443122252 +0000 UTC m=+18.159775989" watchObservedRunningTime="2025-05-13 12:55:16.443469712 +0000 UTC m=+18.160123449" May 13 12:55:17.379308 kubelet[2682]: E0513 12:55:17.379261 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct2sc" podUID="99af3312-c9d6-477a-83b3-e903dd409646" May 13 12:55:17.433881 kubelet[2682]: I0513 12:55:17.433843 2682 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 12:55:17.434278 kubelet[2682]: E0513 12:55:17.434077 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:19.380036 kubelet[2682]: E0513 12:55:19.379983 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct2sc" podUID="99af3312-c9d6-477a-83b3-e903dd409646" May 13 12:55:20.884074 kubelet[2682]: I0513 12:55:20.884001 2682 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 12:55:20.885030 kubelet[2682]: E0513 12:55:20.884482 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:21.379892 kubelet[2682]: E0513 12:55:21.379849 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct2sc" podUID="99af3312-c9d6-477a-83b3-e903dd409646" May 13 12:55:21.453031 kubelet[2682]: E0513 12:55:21.452993 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:23.110127 containerd[1557]: time="2025-05-13T12:55:23.109995386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:23.111044 containerd[1557]: time="2025-05-13T12:55:23.111013321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 13 12:55:23.112188 containerd[1557]: time="2025-05-13T12:55:23.112153839Z" level=info msg="ImageCreate event 
name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:23.114272 containerd[1557]: time="2025-05-13T12:55:23.114227210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:23.114825 containerd[1557]: time="2025-05-13T12:55:23.114786097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 7.73227953s" May 13 12:55:23.114825 containerd[1557]: time="2025-05-13T12:55:23.114817667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 13 12:55:23.116876 containerd[1557]: time="2025-05-13T12:55:23.116829131Z" level=info msg="CreateContainer within sandbox \"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 12:55:23.124481 containerd[1557]: time="2025-05-13T12:55:23.124444904Z" level=info msg="Container db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae: CDI devices from CRI Config.CDIDevices: []" May 13 12:55:23.135190 containerd[1557]: time="2025-05-13T12:55:23.135131736Z" level=info msg="CreateContainer within sandbox \"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae\"" May 13 12:55:23.135644 containerd[1557]: time="2025-05-13T12:55:23.135615050Z" level=info msg="StartContainer for \"db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae\"" May 13 12:55:23.136934 containerd[1557]: time="2025-05-13T12:55:23.136905470Z" level=info msg="connecting to shim db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae" address="unix:///run/containerd/s/5adccc43415e65af493194268ea3c18184a6f53ebec82c0ff497cf0bcf361db6" protocol=ttrpc version=3 May 13 12:55:23.157293 systemd[1]: Started cri-containerd-db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae.scope - libcontainer container db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae. 
May 13 12:55:23.271584 containerd[1557]: time="2025-05-13T12:55:23.271544525Z" level=info msg="StartContainer for \"db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae\" returns successfully" May 13 12:55:23.379128 kubelet[2682]: E0513 12:55:23.379068 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct2sc" podUID="99af3312-c9d6-477a-83b3-e903dd409646" May 13 12:55:23.459700 kubelet[2682]: E0513 12:55:23.459645 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:24.461121 kubelet[2682]: E0513 12:55:24.461078 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:25.082415 systemd[1]: cri-containerd-db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae.scope: Deactivated successfully. May 13 12:55:25.082800 systemd[1]: cri-containerd-db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae.scope: Consumed 505ms CPU time, 159.8M memory peak, 16K read from disk, 154M written to disk. May 13 12:55:25.083810 containerd[1557]: time="2025-05-13T12:55:25.083776381Z" level=info msg="received exit event container_id:\"db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae\" id:\"db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae\" pid:3376 exited_at:{seconds:1747140925 nanos:83180395}" May 13 12:55:25.084182 containerd[1557]: time="2025-05-13T12:55:25.083900285Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae\" id:\"db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae\" pid:3376 exited_at:{seconds:1747140925 nanos:83180395}" May 13 12:55:25.105821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae-rootfs.mount: Deactivated successfully. May 13 12:55:25.121665 kubelet[2682]: I0513 12:55:25.121626 2682 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 12:55:25.253321 systemd[1]: Created slice kubepods-besteffort-pod69edeedd_4476_4240_999e_ba555f61eb5e.slice - libcontainer container kubepods-besteffort-pod69edeedd_4476_4240_999e_ba555f61eb5e.slice. May 13 12:55:25.260360 systemd[1]: Created slice kubepods-besteffort-podc197e0bf_0648_47d6_b266_361e6fefface.slice - libcontainer container kubepods-besteffort-podc197e0bf_0648_47d6_b266_361e6fefface.slice. May 13 12:55:25.266708 systemd[1]: Created slice kubepods-burstable-pod4faa16ac_8041_4063_89da_2ef0847f8c7d.slice - libcontainer container kubepods-burstable-pod4faa16ac_8041_4063_89da_2ef0847f8c7d.slice. May 13 12:55:25.275609 systemd[1]: Created slice kubepods-besteffort-pod1223f4b3_ae3d_43b8_824a_6a7efb5e24c8.slice - libcontainer container kubepods-besteffort-pod1223f4b3_ae3d_43b8_824a_6a7efb5e24c8.slice. May 13 12:55:25.281582 systemd[1]: Created slice kubepods-burstable-pod15b8d047_8ef6_4678_b676_93259a433fcd.slice - libcontainer container kubepods-burstable-pod15b8d047_8ef6_4678_b676_93259a433fcd.slice. 
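After the install-cni container exits, the kubelet reports the node Ready ("Fast updating node status as it just became ready") and the pending workloads get placed; systemd then creates one cgroup slice per pod, visible in the Created slice records above. The slice names encode the pod's QoS class and UID, with dashes escaped to underscores because "-" is the hierarchy separator in systemd slice names. A one-liner reproducing the mapping visible in the log (illustrative; the real escaping lives in the kubelet's systemd cgroup driver):

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    # systemd uses "-" as the slice hierarchy separator, so the kubelet's
    # systemd cgroup driver escapes each "-" in the pod UID to "_".
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

# Matches "Created slice kubepods-besteffort-pod69edeedd_...slice" above.
assert pod_slice_name("besteffort", "69edeedd-4476-4240-999e-ba555f61eb5e") == \
    "kubepods-besteffort-pod69edeedd_4476_4240_999e_ba555f61eb5e.slice"
```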
May 13 12:55:25.343499 kubelet[2682]: I0513 12:55:25.343321 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9xs7\" (UniqueName: \"kubernetes.io/projected/4faa16ac-8041-4063-89da-2ef0847f8c7d-kube-api-access-m9xs7\") pod \"coredns-668d6bf9bc-fvsq7\" (UID: \"4faa16ac-8041-4063-89da-2ef0847f8c7d\") " pod="kube-system/coredns-668d6bf9bc-fvsq7" May 13 12:55:25.343499 kubelet[2682]: I0513 12:55:25.343371 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc4mr\" (UniqueName: \"kubernetes.io/projected/c197e0bf-0648-47d6-b266-361e6fefface-kube-api-access-dc4mr\") pod \"calico-kube-controllers-857fbf49df-bgllm\" (UID: \"c197e0bf-0648-47d6-b266-361e6fefface\") " pod="calico-system/calico-kube-controllers-857fbf49df-bgllm" May 13 12:55:25.343499 kubelet[2682]: I0513 12:55:25.343394 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvxtn\" (UniqueName: \"kubernetes.io/projected/69edeedd-4476-4240-999e-ba555f61eb5e-kube-api-access-mvxtn\") pod \"calico-apiserver-5559745f68-7jjmz\" (UID: \"69edeedd-4476-4240-999e-ba555f61eb5e\") " pod="calico-apiserver/calico-apiserver-5559745f68-7jjmz" May 13 12:55:25.343499 kubelet[2682]: I0513 12:55:25.343433 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4faa16ac-8041-4063-89da-2ef0847f8c7d-config-volume\") pod \"coredns-668d6bf9bc-fvsq7\" (UID: \"4faa16ac-8041-4063-89da-2ef0847f8c7d\") " pod="kube-system/coredns-668d6bf9bc-fvsq7" May 13 12:55:25.343691 kubelet[2682]: I0513 12:55:25.343517 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c197e0bf-0648-47d6-b266-361e6fefface-tigera-ca-bundle\") pod \"calico-kube-controllers-857fbf49df-bgllm\" (UID: \"c197e0bf-0648-47d6-b266-361e6fefface\") " pod="calico-system/calico-kube-controllers-857fbf49df-bgllm" May 13 12:55:25.343691 kubelet[2682]: I0513 12:55:25.343582 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15b8d047-8ef6-4678-b676-93259a433fcd-config-volume\") pod \"coredns-668d6bf9bc-xlq4k\" (UID: \"15b8d047-8ef6-4678-b676-93259a433fcd\") " pod="kube-system/coredns-668d6bf9bc-xlq4k" May 13 12:55:25.343691 kubelet[2682]: I0513 12:55:25.343612 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdq2l\" (UniqueName: \"kubernetes.io/projected/1223f4b3-ae3d-43b8-824a-6a7efb5e24c8-kube-api-access-kdq2l\") pod \"calico-apiserver-5559745f68-rjh79\" (UID: \"1223f4b3-ae3d-43b8-824a-6a7efb5e24c8\") " pod="calico-apiserver/calico-apiserver-5559745f68-rjh79" May 13 12:55:25.343691 kubelet[2682]: I0513 12:55:25.343643 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqsmx\" (UniqueName: \"kubernetes.io/projected/15b8d047-8ef6-4678-b676-93259a433fcd-kube-api-access-tqsmx\") pod \"coredns-668d6bf9bc-xlq4k\" (UID: \"15b8d047-8ef6-4678-b676-93259a433fcd\") " pod="kube-system/coredns-668d6bf9bc-xlq4k" May 13 12:55:25.343691 kubelet[2682]: I0513 12:55:25.343661 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/69edeedd-4476-4240-999e-ba555f61eb5e-calico-apiserver-certs\") pod \"calico-apiserver-5559745f68-7jjmz\" (UID: \"69edeedd-4476-4240-999e-ba555f61eb5e\") " pod="calico-apiserver/calico-apiserver-5559745f68-7jjmz" May 13 12:55:25.343810 kubelet[2682]: I0513 12:55:25.343683 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1223f4b3-ae3d-43b8-824a-6a7efb5e24c8-calico-apiserver-certs\") pod \"calico-apiserver-5559745f68-rjh79\" (UID: \"1223f4b3-ae3d-43b8-824a-6a7efb5e24c8\") " pod="calico-apiserver/calico-apiserver-5559745f68-rjh79" May 13 12:55:25.385895 systemd[1]: Created slice kubepods-besteffort-pod99af3312_c9d6_477a_83b3_e903dd409646.slice - libcontainer container kubepods-besteffort-pod99af3312_c9d6_477a_83b3_e903dd409646.slice. May 13 12:55:25.388287 containerd[1557]: time="2025-05-13T12:55:25.388198354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct2sc,Uid:99af3312-c9d6-477a-83b3-e903dd409646,Namespace:calico-system,Attempt:0,}" May 13 12:55:25.473004 kubelet[2682]: E0513 12:55:25.472975 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:25.477180 containerd[1557]: time="2025-05-13T12:55:25.476576112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 12:55:25.506983 containerd[1557]: time="2025-05-13T12:55:25.506929220Z" level=error msg="Failed to destroy network for sandbox \"1da1846099dd03bf1d482295f6c2ce351245846b5aa0fcc428e6fdc5675dbb32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.540232 containerd[1557]: time="2025-05-13T12:55:25.540111643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct2sc,Uid:99af3312-c9d6-477a-83b3-e903dd409646,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1da1846099dd03bf1d482295f6c2ce351245846b5aa0fcc428e6fdc5675dbb32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.540479 kubelet[2682]: E0513 12:55:25.540429 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1da1846099dd03bf1d482295f6c2ce351245846b5aa0fcc428e6fdc5675dbb32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.540550 kubelet[2682]: E0513 12:55:25.540511 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1da1846099dd03bf1d482295f6c2ce351245846b5aa0fcc428e6fdc5675dbb32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct2sc" May 13 12:55:25.540550 kubelet[2682]: E0513 12:55:25.540538 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1da1846099dd03bf1d482295f6c2ce351245846b5aa0fcc428e6fdc5675dbb32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct2sc" May 13 12:55:25.540635 kubelet[2682]: E0513 12:55:25.540603 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ct2sc_calico-system(99af3312-c9d6-477a-83b3-e903dd409646)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ct2sc_calico-system(99af3312-c9d6-477a-83b3-e903dd409646)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1da1846099dd03bf1d482295f6c2ce351245846b5aa0fcc428e6fdc5675dbb32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ct2sc" podUID="99af3312-c9d6-477a-83b3-e903dd409646" May 13 12:55:25.557367 containerd[1557]: time="2025-05-13T12:55:25.557332080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-7jjmz,Uid:69edeedd-4476-4240-999e-ba555f61eb5e,Namespace:calico-apiserver,Attempt:0,}" May 13 12:55:25.564034 containerd[1557]: time="2025-05-13T12:55:25.563990830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-857fbf49df-bgllm,Uid:c197e0bf-0648-47d6-b266-361e6fefface,Namespace:calico-system,Attempt:0,}" May 13 12:55:25.572323 kubelet[2682]: E0513 12:55:25.572257 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:25.572610 containerd[1557]: time="2025-05-13T12:55:25.572580238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fvsq7,Uid:4faa16ac-8041-4063-89da-2ef0847f8c7d,Namespace:kube-system,Attempt:0,}" May 13 12:55:25.579413 containerd[1557]: time="2025-05-13T12:55:25.579361080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-rjh79,Uid:1223f4b3-ae3d-43b8-824a-6a7efb5e24c8,Namespace:calico-apiserver,Attempt:0,}" May 13 12:55:25.584769 kubelet[2682]: E0513 12:55:25.584714 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:25.585251 containerd[1557]: time="2025-05-13T12:55:25.585194762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xlq4k,Uid:15b8d047-8ef6-4678-b676-93259a433fcd,Namespace:kube-system,Attempt:0,}" May 13 12:55:25.849020 containerd[1557]: time="2025-05-13T12:55:25.848961433Z" level=error msg="Failed to destroy network for sandbox \"ef418fe9482d35b172e6a729e9d1f9bafc666e180f16274b10c82cc53e4e7eeb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.860597 containerd[1557]: time="2025-05-13T12:55:25.860532455Z" level=error msg="Failed to destroy network for sandbox \"1314dcc7781fa5fa9e9b449b0d6fd5ee7797c7749e20c1c813019dc493c13939\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.864824 containerd[1557]: time="2025-05-13T12:55:25.864696332Z" level=error msg="Failed to destroy network for sandbox \"90cc53ff1a8e0beeb49947b37cb2f59f269a57f3c3ec0cb93e7af22899224445\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.865597 containerd[1557]: time="2025-05-13T12:55:25.865566076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-7jjmz,Uid:69edeedd-4476-4240-999e-ba555f61eb5e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef418fe9482d35b172e6a729e9d1f9bafc666e180f16274b10c82cc53e4e7eeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.866254 kubelet[2682]: E0513 12:55:25.865897 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef418fe9482d35b172e6a729e9d1f9bafc666e180f16274b10c82cc53e4e7eeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.866254 kubelet[2682]: E0513 12:55:25.865952 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef418fe9482d35b172e6a729e9d1f9bafc666e180f16274b10c82cc53e4e7eeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5559745f68-7jjmz" May 13 12:55:25.866254 kubelet[2682]: E0513 12:55:25.865971 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef418fe9482d35b172e6a729e9d1f9bafc666e180f16274b10c82cc53e4e7eeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5559745f68-7jjmz" May 13 12:55:25.866376 kubelet[2682]: E0513 12:55:25.866013 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5559745f68-7jjmz_calico-apiserver(69edeedd-4476-4240-999e-ba555f61eb5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5559745f68-7jjmz_calico-apiserver(69edeedd-4476-4240-999e-ba555f61eb5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef418fe9482d35b172e6a729e9d1f9bafc666e180f16274b10c82cc53e4e7eeb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5559745f68-7jjmz" podUID="69edeedd-4476-4240-999e-ba555f61eb5e" May 13 12:55:25.866902 containerd[1557]: time="2025-05-13T12:55:25.866852245Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-857fbf49df-bgllm,Uid:c197e0bf-0648-47d6-b266-361e6fefface,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1314dcc7781fa5fa9e9b449b0d6fd5ee7797c7749e20c1c813019dc493c13939\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.867055 kubelet[2682]: E0513 12:55:25.867030 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1314dcc7781fa5fa9e9b449b0d6fd5ee7797c7749e20c1c813019dc493c13939\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.867200 kubelet[2682]: E0513 12:55:25.867164 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1314dcc7781fa5fa9e9b449b0d6fd5ee7797c7749e20c1c813019dc493c13939\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-857fbf49df-bgllm" May 13 12:55:25.867281 kubelet[2682]: E0513 12:55:25.867266 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1314dcc7781fa5fa9e9b449b0d6fd5ee7797c7749e20c1c813019dc493c13939\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-857fbf49df-bgllm" May 13 12:55:25.867400 kubelet[2682]: E0513 12:55:25.867369 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-857fbf49df-bgllm_calico-system(c197e0bf-0648-47d6-b266-361e6fefface)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-857fbf49df-bgllm_calico-system(c197e0bf-0648-47d6-b266-361e6fefface)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1314dcc7781fa5fa9e9b449b0d6fd5ee7797c7749e20c1c813019dc493c13939\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-857fbf49df-bgllm" podUID="c197e0bf-0648-47d6-b266-361e6fefface" May 13 12:55:25.868494 containerd[1557]: time="2025-05-13T12:55:25.868460363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fvsq7,Uid:4faa16ac-8041-4063-89da-2ef0847f8c7d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90cc53ff1a8e0beeb49947b37cb2f59f269a57f3c3ec0cb93e7af22899224445\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.868910 kubelet[2682]: E0513 12:55:25.868862 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"90cc53ff1a8e0beeb49947b37cb2f59f269a57f3c3ec0cb93e7af22899224445\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.868975 kubelet[2682]: E0513 12:55:25.868936 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90cc53ff1a8e0beeb49947b37cb2f59f269a57f3c3ec0cb93e7af22899224445\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fvsq7" May 13 12:55:25.868975 kubelet[2682]: E0513 12:55:25.868960 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90cc53ff1a8e0beeb49947b37cb2f59f269a57f3c3ec0cb93e7af22899224445\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fvsq7" May 13 12:55:25.869048 kubelet[2682]: E0513 12:55:25.869014 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fvsq7_kube-system(4faa16ac-8041-4063-89da-2ef0847f8c7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fvsq7_kube-system(4faa16ac-8041-4063-89da-2ef0847f8c7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90cc53ff1a8e0beeb49947b37cb2f59f269a57f3c3ec0cb93e7af22899224445\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fvsq7" podUID="4faa16ac-8041-4063-89da-2ef0847f8c7d" May 13 12:55:25.869988 containerd[1557]: time="2025-05-13T12:55:25.869936071Z" level=error msg="Failed to destroy network for sandbox \"68f909f6559cdbbafed2a7d620058437679aeb889ecf39b44c2fb5498e7537bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.871349 containerd[1557]: time="2025-05-13T12:55:25.871305439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-rjh79,Uid:1223f4b3-ae3d-43b8-824a-6a7efb5e24c8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f909f6559cdbbafed2a7d620058437679aeb889ecf39b44c2fb5498e7537bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.871493 kubelet[2682]: E0513 12:55:25.871468 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f909f6559cdbbafed2a7d620058437679aeb889ecf39b44c2fb5498e7537bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.871529 kubelet[2682]: E0513 12:55:25.871509 2682 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f909f6559cdbbafed2a7d620058437679aeb889ecf39b44c2fb5498e7537bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5559745f68-rjh79" May 13 12:55:25.871552 kubelet[2682]: E0513 12:55:25.871528 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f909f6559cdbbafed2a7d620058437679aeb889ecf39b44c2fb5498e7537bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5559745f68-rjh79" May 13 12:55:25.871603 kubelet[2682]: E0513 12:55:25.871575 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5559745f68-rjh79_calico-apiserver(1223f4b3-ae3d-43b8-824a-6a7efb5e24c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5559745f68-rjh79_calico-apiserver(1223f4b3-ae3d-43b8-824a-6a7efb5e24c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68f909f6559cdbbafed2a7d620058437679aeb889ecf39b44c2fb5498e7537bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5559745f68-rjh79" podUID="1223f4b3-ae3d-43b8-824a-6a7efb5e24c8" May 13 12:55:25.883219 containerd[1557]: time="2025-05-13T12:55:25.883162491Z" level=error msg="Failed to destroy network for sandbox \"cdb4c65b372ee180328927f3a2670d0af6962ec3e87ba39c7adfcc9e2d499017\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.907492 containerd[1557]: time="2025-05-13T12:55:25.907449528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xlq4k,Uid:15b8d047-8ef6-4678-b676-93259a433fcd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdb4c65b372ee180328927f3a2670d0af6962ec3e87ba39c7adfcc9e2d499017\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.907674 kubelet[2682]: E0513 12:55:25.907642 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdb4c65b372ee180328927f3a2670d0af6962ec3e87ba39c7adfcc9e2d499017\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:25.907735 kubelet[2682]: E0513 12:55:25.907689 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdb4c65b372ee180328927f3a2670d0af6962ec3e87ba39c7adfcc9e2d499017\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xlq4k" May 13 12:55:25.907735 kubelet[2682]: E0513 12:55:25.907705 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdb4c65b372ee180328927f3a2670d0af6962ec3e87ba39c7adfcc9e2d499017\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xlq4k" May 13 12:55:25.907794 kubelet[2682]: E0513 12:55:25.907741 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xlq4k_kube-system(15b8d047-8ef6-4678-b676-93259a433fcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xlq4k_kube-system(15b8d047-8ef6-4678-b676-93259a433fcd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdb4c65b372ee180328927f3a2670d0af6962ec3e87ba39c7adfcc9e2d499017\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xlq4k" podUID="15b8d047-8ef6-4678-b676-93259a433fcd" May 13 12:55:26.106373 systemd[1]: run-netns-cni\x2ddb8157d7\x2d21b5\x2d2bff\x2ddf5b\x2d018aec8badc3.mount: Deactivated successfully. May 13 12:55:27.708565 systemd[1]: Started sshd@7-10.0.0.90:22-10.0.0.1:50136.service - OpenSSH per-connection server daemon (10.0.0.1:50136). May 13 12:55:27.757486 sshd[3646]: Accepted publickey for core from 10.0.0.1 port 50136 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:55:27.759058 sshd-session[3646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:27.763351 systemd-logind[1539]: New session 8 of user core. May 13 12:55:27.773268 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 12:55:27.889040 sshd[3649]: Connection closed by 10.0.0.1 port 50136 May 13 12:55:27.889325 sshd-session[3646]: pam_unix(sshd:session): session closed for user core May 13 12:55:27.893204 systemd[1]: sshd@7-10.0.0.90:22-10.0.0.1:50136.service: Deactivated successfully. May 13 12:55:27.895465 systemd[1]: session-8.scope: Deactivated successfully. May 13 12:55:27.896407 systemd-logind[1539]: Session 8 logged out. Waiting for processes to exit. May 13 12:55:27.897655 systemd-logind[1539]: Removed session 8. May 13 12:55:32.876541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1274861571.mount: Deactivated successfully. May 13 12:55:32.901949 systemd[1]: Started sshd@8-10.0.0.90:22-10.0.0.1:41640.service - OpenSSH per-connection server daemon (10.0.0.1:41640). 
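Every sandbox create and destroy above fails on the same check: the Calico CNI plugin refuses to handle an ADD or DEL until calico/node has written the node's name to /var/lib/calico/nodename. Below is a minimal Go sketch of that gate, reconstructed from the error text itself; the function name and error wrapping are illustrative, not Calico's actual code.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodenameFile is the marker the log keeps pointing at: calico/node
    // writes the node's name here once it is up and has the host path mounted.
    const nodenameFile = "/var/lib/calico/nodename"

    // readNodename mirrors the failing check: a missing file aborts the CNI
    // operation with the same hint the log shows.
    func readNodename() (string, error) {
        if _, err := os.Stat(nodenameFile); err != nil {
            // e.g. "stat /var/lib/calico/nodename: no such file or directory:
            // check that the calico/node container is running ..."
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := readNodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("nodename:", name)
    }

Until calico-node starts and the host path is mounted, every stat fails, so the kubelet keeps recreating and tearing down sandboxes, which is the retry loop this log records.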
May 13 12:55:33.681977 containerd[1557]: time="2025-05-13T12:55:33.681923176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:33.682788 containerd[1557]: time="2025-05-13T12:55:33.682755043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 13 12:55:33.683976 containerd[1557]: time="2025-05-13T12:55:33.683944855Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:33.686269 containerd[1557]: time="2025-05-13T12:55:33.686215261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:33.686825 containerd[1557]: time="2025-05-13T12:55:33.686788101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 8.210174728s" May 13 12:55:33.686825 containerd[1557]: time="2025-05-13T12:55:33.686819480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 13 12:55:33.697729 containerd[1557]: time="2025-05-13T12:55:33.697683586Z" level=info msg="CreateContainer within sandbox \"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 12:55:33.706894 containerd[1557]: time="2025-05-13T12:55:33.706841298Z" level=info msg="Container 409f6e1784131ed4bdfbcb232fe7ac17369cb740f2a8032a67e204b065e77ee2: CDI devices from CRI Config.CDIDevices: []" May 13 12:55:33.712332 sshd[3676]: Accepted publickey for core from 10.0.0.1 port 41640 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:55:33.714025 sshd-session[3676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:33.718631 systemd-logind[1539]: New session 9 of user core. May 13 12:55:33.727345 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 13 12:55:33.728425 containerd[1557]: time="2025-05-13T12:55:33.728382018Z" level=info msg="CreateContainer within sandbox \"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"409f6e1784131ed4bdfbcb232fe7ac17369cb740f2a8032a67e204b065e77ee2\"" May 13 12:55:33.729113 containerd[1557]: time="2025-05-13T12:55:33.729089721Z" level=info msg="StartContainer for \"409f6e1784131ed4bdfbcb232fe7ac17369cb740f2a8032a67e204b065e77ee2\"" May 13 12:55:33.735210 containerd[1557]: time="2025-05-13T12:55:33.735119380Z" level=info msg="connecting to shim 409f6e1784131ed4bdfbcb232fe7ac17369cb740f2a8032a67e204b065e77ee2" address="unix:///run/containerd/s/5adccc43415e65af493194268ea3c18184a6f53ebec82c0ff497cf0bcf361db6" protocol=ttrpc version=3 May 13 12:55:33.763370 systemd[1]: Started cri-containerd-409f6e1784131ed4bdfbcb232fe7ac17369cb740f2a8032a67e204b065e77ee2.scope - libcontainer container 409f6e1784131ed4bdfbcb232fe7ac17369cb740f2a8032a67e204b065e77ee2. May 13 12:55:33.813495 containerd[1557]: time="2025-05-13T12:55:33.813451156Z" level=info msg="StartContainer for \"409f6e1784131ed4bdfbcb232fe7ac17369cb740f2a8032a67e204b065e77ee2\" returns successfully" May 13 12:55:33.853494 sshd[3680]: Connection closed by 10.0.0.1 port 41640 May 13 12:55:33.854705 sshd-session[3676]: pam_unix(sshd:session): session closed for user core May 13 12:55:33.859253 systemd[1]: sshd@8-10.0.0.90:22-10.0.0.1:41640.service: Deactivated successfully. May 13 12:55:33.862313 systemd[1]: session-9.scope: Deactivated successfully. May 13 12:55:33.864942 systemd-logind[1539]: Session 9 logged out. Waiting for processes to exit. May 13 12:55:33.866497 systemd-logind[1539]: Removed session 9. May 13 12:55:33.880588 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 12:55:33.880649 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 13 12:55:33.904206 systemd[1]: cri-containerd-409f6e1784131ed4bdfbcb232fe7ac17369cb740f2a8032a67e204b065e77ee2.scope: Deactivated successfully. May 13 12:55:33.906279 containerd[1557]: time="2025-05-13T12:55:33.906239506Z" level=info msg="received exit event container_id:\"409f6e1784131ed4bdfbcb232fe7ac17369cb740f2a8032a67e204b065e77ee2\" id:\"409f6e1784131ed4bdfbcb232fe7ac17369cb740f2a8032a67e204b065e77ee2\" pid:3692 exit_status:1 exited_at:{seconds:1747140933 nanos:905836958}" May 13 12:55:33.906531 containerd[1557]: time="2025-05-13T12:55:33.906452057Z" level=info msg="TaskExit event in podsandbox handler container_id:\"409f6e1784131ed4bdfbcb232fe7ac17369cb740f2a8032a67e204b065e77ee2\" id:\"409f6e1784131ed4bdfbcb232fe7ac17369cb740f2a8032a67e204b065e77ee2\" pid:3692 exit_status:1 exited_at:{seconds:1747140933 nanos:905836958}" May 13 12:55:33.928694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-409f6e1784131ed4bdfbcb232fe7ac17369cb740f2a8032a67e204b065e77ee2-rootfs.mount: Deactivated successfully.
May 13 12:55:34.492858 kubelet[2682]: I0513 12:55:34.492826 2682 scope.go:117] "RemoveContainer" containerID="409f6e1784131ed4bdfbcb232fe7ac17369cb740f2a8032a67e204b065e77ee2" May 13 12:55:34.493389 kubelet[2682]: E0513 12:55:34.492903 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:34.495350 containerd[1557]: time="2025-05-13T12:55:34.495302537Z" level=info msg="CreateContainer within sandbox \"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" May 13 12:55:34.506932 containerd[1557]: time="2025-05-13T12:55:34.506716984Z" level=info msg="Container af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff: CDI devices from CRI Config.CDIDevices: []" May 13 12:55:34.550879 containerd[1557]: time="2025-05-13T12:55:34.550824804Z" level=info msg="CreateContainer within sandbox \"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff\"" May 13 12:55:34.551658 containerd[1557]: time="2025-05-13T12:55:34.551611495Z" level=info msg="StartContainer for \"af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff\"" May 13 12:55:34.553167 containerd[1557]: time="2025-05-13T12:55:34.553099669Z" level=info msg="connecting to shim af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff" address="unix:///run/containerd/s/5adccc43415e65af493194268ea3c18184a6f53ebec82c0ff497cf0bcf361db6" protocol=ttrpc version=3 May 13 12:55:34.575333 systemd[1]: Started cri-containerd-af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff.scope - libcontainer container af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff. May 13 12:55:34.627038 containerd[1557]: time="2025-05-13T12:55:34.626992965Z" level=info msg="StartContainer for \"af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff\" returns successfully" May 13 12:55:34.682102 systemd[1]: cri-containerd-af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff.scope: Deactivated successfully. May 13 12:55:34.683230 containerd[1557]: time="2025-05-13T12:55:34.683174696Z" level=info msg="received exit event container_id:\"af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff\" id:\"af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff\" pid:3764 exit_status:1 exited_at:{seconds:1747140934 nanos:682893336}" May 13 12:55:34.683658 containerd[1557]: time="2025-05-13T12:55:34.683225180Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff\" id:\"af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff\" pid:3764 exit_status:1 exited_at:{seconds:1747140934 nanos:682893336}" May 13 12:55:34.703601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff-rootfs.mount: Deactivated successfully. 
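The recurring dns.go:153 warning is the kubelet trimming a pod's resolv.conf: resolvers traditionally honor at most three nameservers, so everything past the first three is dropped and the applied line keeps exactly 1.1.1.1, 1.0.0.1, and 8.8.8.8. A sketch of that truncation under the assumed limit of 3; the constant and function names are illustrative, not kubelet's.

    package main

    import (
        "fmt"
        "strings"
    )

    // maxNameservers is the classic resolver limit the warning enforces;
    // the name is ours, not kubelet's constant.
    const maxNameservers = 3

    // applyNameserverLimit keeps the first three servers and reports whether
    // any were omitted, mirroring "some nameservers have been omitted".
    func applyNameserverLimit(servers []string) (applied []string, omitted bool) {
        if len(servers) <= maxNameservers {
            return servers, false
        }
        return servers[:maxNameservers], true
    }

    func main() {
        // The applied line from the log plus a hypothetical fourth entry
        // that would have triggered the warning.
        servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
        applied, omitted := applyNameserverLimit(servers)
        fmt.Printf("omitted=%v applied nameserver line is: %s\n",
            omitted, strings.Join(applied, " "))
    }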
May 13 12:55:35.498473 kubelet[2682]: I0513 12:55:35.498442 2682 scope.go:117] "RemoveContainer" containerID="409f6e1784131ed4bdfbcb232fe7ac17369cb740f2a8032a67e204b065e77ee2" May 13 12:55:35.498960 kubelet[2682]: I0513 12:55:35.498808 2682 scope.go:117] "RemoveContainer" containerID="af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff" May 13 12:55:35.498960 kubelet[2682]: E0513 12:55:35.498868 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:35.498960 kubelet[2682]: E0513 12:55:35.498950 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-87fvd_calico-system(31f9c550-0d42-4e05-9662-72cf1b1971e6)\"" pod="calico-system/calico-node-87fvd" podUID="31f9c550-0d42-4e05-9662-72cf1b1971e6" May 13 12:55:35.501668 containerd[1557]: time="2025-05-13T12:55:35.501636547Z" level=info msg="RemoveContainer for \"409f6e1784131ed4bdfbcb232fe7ac17369cb740f2a8032a67e204b065e77ee2\"" May 13 12:55:35.643449 containerd[1557]: time="2025-05-13T12:55:35.643404086Z" level=info msg="RemoveContainer for \"409f6e1784131ed4bdfbcb232fe7ac17369cb740f2a8032a67e204b065e77ee2\" returns successfully" May 13 12:55:37.380252 kubelet[2682]: E0513 12:55:37.380216 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:37.380681 containerd[1557]: time="2025-05-13T12:55:37.380589790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xlq4k,Uid:15b8d047-8ef6-4678-b676-93259a433fcd,Namespace:kube-system,Attempt:0,}" May 13 12:55:37.616653 containerd[1557]: time="2025-05-13T12:55:37.616582290Z" level=error msg="Failed to destroy network for sandbox \"cfa52f49529d9df41709d00f56aa098ed4e443970d898c7aea6325e43d72e0b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:37.618612 containerd[1557]: time="2025-05-13T12:55:37.618572776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xlq4k,Uid:15b8d047-8ef6-4678-b676-93259a433fcd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfa52f49529d9df41709d00f56aa098ed4e443970d898c7aea6325e43d72e0b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:37.618893 kubelet[2682]: E0513 12:55:37.618853 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfa52f49529d9df41709d00f56aa098ed4e443970d898c7aea6325e43d72e0b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:37.618968 kubelet[2682]: E0513 12:55:37.618913 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfa52f49529d9df41709d00f56aa098ed4e443970d898c7aea6325e43d72e0b2\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xlq4k" May 13 12:55:37.618968 kubelet[2682]: E0513 12:55:37.618932 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfa52f49529d9df41709d00f56aa098ed4e443970d898c7aea6325e43d72e0b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xlq4k" May 13 12:55:37.619088 kubelet[2682]: E0513 12:55:37.618971 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xlq4k_kube-system(15b8d047-8ef6-4678-b676-93259a433fcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xlq4k_kube-system(15b8d047-8ef6-4678-b676-93259a433fcd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfa52f49529d9df41709d00f56aa098ed4e443970d898c7aea6325e43d72e0b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xlq4k" podUID="15b8d047-8ef6-4678-b676-93259a433fcd" May 13 12:55:37.619176 systemd[1]: run-netns-cni\x2d3ec57d75\x2d2066\x2d2142\x2d3ecd\x2d7a33b87b8109.mount: Deactivated successfully. May 13 12:55:38.870656 systemd[1]: Started sshd@9-10.0.0.90:22-10.0.0.1:37276.service - OpenSSH per-connection server daemon (10.0.0.1:37276). May 13 12:55:38.931567 sshd[3836]: Accepted publickey for core from 10.0.0.1 port 37276 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:55:38.933098 sshd-session[3836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:38.937378 systemd-logind[1539]: New session 10 of user core. May 13 12:55:38.951283 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 12:55:39.063348 sshd[3838]: Connection closed by 10.0.0.1 port 37276 May 13 12:55:39.063645 sshd-session[3836]: pam_unix(sshd:session): session closed for user core May 13 12:55:39.079720 systemd[1]: sshd@9-10.0.0.90:22-10.0.0.1:37276.service: Deactivated successfully. May 13 12:55:39.081561 systemd[1]: session-10.scope: Deactivated successfully. May 13 12:55:39.082444 systemd-logind[1539]: Session 10 logged out. Waiting for processes to exit. May 13 12:55:39.084906 systemd[1]: Started sshd@10-10.0.0.90:22-10.0.0.1:37286.service - OpenSSH per-connection server daemon (10.0.0.1:37286). May 13 12:55:39.085743 systemd-logind[1539]: Removed session 10. May 13 12:55:39.132186 sshd[3853]: Accepted publickey for core from 10.0.0.1 port 37286 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:55:39.133675 sshd-session[3853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:39.138002 systemd-logind[1539]: New session 11 of user core. May 13 12:55:39.147264 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 13 12:55:39.281832 sshd[3855]: Connection closed by 10.0.0.1 port 37286 May 13 12:55:39.282258 sshd-session[3853]: pam_unix(sshd:session): session closed for user core May 13 12:55:39.293626 systemd[1]: sshd@10-10.0.0.90:22-10.0.0.1:37286.service: Deactivated successfully. May 13 12:55:39.295956 systemd[1]: session-11.scope: Deactivated successfully. May 13 12:55:39.297809 systemd-logind[1539]: Session 11 logged out. Waiting for processes to exit. May 13 12:55:39.301957 systemd[1]: Started sshd@11-10.0.0.90:22-10.0.0.1:37300.service - OpenSSH per-connection server daemon (10.0.0.1:37300). May 13 12:55:39.303209 systemd-logind[1539]: Removed session 11. May 13 12:55:39.355182 sshd[3867]: Accepted publickey for core from 10.0.0.1 port 37300 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:55:39.356885 sshd-session[3867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:39.361558 systemd-logind[1539]: New session 12 of user core. May 13 12:55:39.377257 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 12:55:39.379534 kubelet[2682]: E0513 12:55:39.379506 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:39.379951 containerd[1557]: time="2025-05-13T12:55:39.379833598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fvsq7,Uid:4faa16ac-8041-4063-89da-2ef0847f8c7d,Namespace:kube-system,Attempt:0,}" May 13 12:55:39.380422 containerd[1557]: time="2025-05-13T12:55:39.380332096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct2sc,Uid:99af3312-c9d6-477a-83b3-e903dd409646,Namespace:calico-system,Attempt:0,}" May 13 12:55:39.446079 containerd[1557]: time="2025-05-13T12:55:39.445959326Z" level=error msg="Failed to destroy network for sandbox \"119cb8351c3b1304faca14deebbccb98ff8312071e6a76d4be3e6df6633fc220\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:39.449099 systemd[1]: run-netns-cni\x2dfbeb11ea\x2dce36\x2d4621\x2d1d86\x2d15308f8fca67.mount: Deactivated successfully. 
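The run-netns-cni\x2d... entries are systemd mount units named after /run/netns/cni-<uuid> paths: systemd turns path separators into '-' and escapes a literal '-' inside a component as \x2d (0x2d is ASCII '-'). A toy Go version of just that hyphen rule; the real systemd-escape algorithm handles slashes, dots, and other bytes as well.

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeHyphens shows only the fragment of systemd unit-name escaping
    // visible in this log: '-' inside a path component becomes `\x2d`.
    func escapeHyphens(component string) string {
        return strings.ReplaceAll(component, "-", `\x2d`)
    }

    func main() {
        netns := "cni-fbeb11ea-ce36-4621-1d86-15308f8fca67" // from the log above
        fmt.Printf("run-netns-%s.mount\n", escapeHyphens(netns))
        // run-netns-cni\x2dfbeb11ea\x2dce36\x2d4621\x2d1d86\x2d15308f8fca67.mount
    }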
May 13 12:55:39.450853 containerd[1557]: time="2025-05-13T12:55:39.450621597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct2sc,Uid:99af3312-c9d6-477a-83b3-e903dd409646,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"119cb8351c3b1304faca14deebbccb98ff8312071e6a76d4be3e6df6633fc220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:39.451398 kubelet[2682]: E0513 12:55:39.451331 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"119cb8351c3b1304faca14deebbccb98ff8312071e6a76d4be3e6df6633fc220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:39.451499 kubelet[2682]: E0513 12:55:39.451477 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"119cb8351c3b1304faca14deebbccb98ff8312071e6a76d4be3e6df6633fc220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct2sc" May 13 12:55:39.451793 kubelet[2682]: E0513 12:55:39.451546 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"119cb8351c3b1304faca14deebbccb98ff8312071e6a76d4be3e6df6633fc220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct2sc" May 13 12:55:39.451793 kubelet[2682]: E0513 12:55:39.451593 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ct2sc_calico-system(99af3312-c9d6-477a-83b3-e903dd409646)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ct2sc_calico-system(99af3312-c9d6-477a-83b3-e903dd409646)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"119cb8351c3b1304faca14deebbccb98ff8312071e6a76d4be3e6df6633fc220\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ct2sc" podUID="99af3312-c9d6-477a-83b3-e903dd409646" May 13 12:55:39.452656 containerd[1557]: time="2025-05-13T12:55:39.452628011Z" level=error msg="Failed to destroy network for sandbox \"ec4755714916bbd00c7f6a529495bdc4b9a2b19425a567740f908a43cbb67423\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:39.455026 systemd[1]: run-netns-cni\x2da39d7e2a\x2d7cb7\x2d1fc6\x2dba29\x2d289aeb7dbe26.mount: Deactivated successfully. 
May 13 12:55:39.455664 containerd[1557]: time="2025-05-13T12:55:39.455631622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fvsq7,Uid:4faa16ac-8041-4063-89da-2ef0847f8c7d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec4755714916bbd00c7f6a529495bdc4b9a2b19425a567740f908a43cbb67423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:39.456038 kubelet[2682]: E0513 12:55:39.455991 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec4755714916bbd00c7f6a529495bdc4b9a2b19425a567740f908a43cbb67423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:39.456204 kubelet[2682]: E0513 12:55:39.456188 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec4755714916bbd00c7f6a529495bdc4b9a2b19425a567740f908a43cbb67423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fvsq7" May 13 12:55:39.456265 kubelet[2682]: E0513 12:55:39.456252 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec4755714916bbd00c7f6a529495bdc4b9a2b19425a567740f908a43cbb67423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fvsq7" May 13 12:55:39.456358 kubelet[2682]: E0513 12:55:39.456338 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fvsq7_kube-system(4faa16ac-8041-4063-89da-2ef0847f8c7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fvsq7_kube-system(4faa16ac-8041-4063-89da-2ef0847f8c7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec4755714916bbd00c7f6a529495bdc4b9a2b19425a567740f908a43cbb67423\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fvsq7" podUID="4faa16ac-8041-4063-89da-2ef0847f8c7d" May 13 12:55:39.503307 sshd[3869]: Connection closed by 10.0.0.1 port 37300 May 13 12:55:39.503579 sshd-session[3867]: pam_unix(sshd:session): session closed for user core May 13 12:55:39.507755 systemd[1]: sshd@11-10.0.0.90:22-10.0.0.1:37300.service: Deactivated successfully. May 13 12:55:39.509682 systemd[1]: session-12.scope: Deactivated successfully. May 13 12:55:39.510457 systemd-logind[1539]: Session 12 logged out. Waiting for processes to exit. May 13 12:55:39.511689 systemd-logind[1539]: Removed session 12. 
May 13 12:55:41.380351 containerd[1557]: time="2025-05-13T12:55:41.380185890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-857fbf49df-bgllm,Uid:c197e0bf-0648-47d6-b266-361e6fefface,Namespace:calico-system,Attempt:0,}" May 13 12:55:41.380351 containerd[1557]: time="2025-05-13T12:55:41.380278073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-7jjmz,Uid:69edeedd-4476-4240-999e-ba555f61eb5e,Namespace:calico-apiserver,Attempt:0,}" May 13 12:55:41.380816 containerd[1557]: time="2025-05-13T12:55:41.380509869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-rjh79,Uid:1223f4b3-ae3d-43b8-824a-6a7efb5e24c8,Namespace:calico-apiserver,Attempt:0,}" May 13 12:55:41.685484 containerd[1557]: time="2025-05-13T12:55:41.685364321Z" level=error msg="Failed to destroy network for sandbox \"0ce78ba15e290ee2582fdb05cb2e46d23a1853fff093e6f11acf2517187dfeaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:41.687608 systemd[1]: run-netns-cni\x2da6a49a44\x2d6a29\x2d3bc3\x2d69b2\x2df53daf5974cb.mount: Deactivated successfully. May 13 12:55:41.748949 containerd[1557]: time="2025-05-13T12:55:41.748894344Z" level=error msg="Failed to destroy network for sandbox \"483113e99513dca209e4fa6f373565c519f84f7e8f8e60b26ea051ba386c09f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:41.751104 systemd[1]: run-netns-cni\x2dbc765b59\x2d3b8e\x2ddc2f\x2ddf57\x2d0713d0539933.mount: Deactivated successfully. 
May 13 12:55:41.801848 containerd[1557]: time="2025-05-13T12:55:41.801785650Z" level=error msg="Failed to destroy network for sandbox \"9fb992c7bd6de230f3b82cc64cfce20aa090bbf15a21b07f38f27867715bc510\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:41.817799 containerd[1557]: time="2025-05-13T12:55:41.817762486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-857fbf49df-bgllm,Uid:c197e0bf-0648-47d6-b266-361e6fefface,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce78ba15e290ee2582fdb05cb2e46d23a1853fff093e6f11acf2517187dfeaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:41.818122 kubelet[2682]: E0513 12:55:41.818065 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce78ba15e290ee2582fdb05cb2e46d23a1853fff093e6f11acf2517187dfeaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:41.818527 kubelet[2682]: E0513 12:55:41.818205 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce78ba15e290ee2582fdb05cb2e46d23a1853fff093e6f11acf2517187dfeaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-857fbf49df-bgllm" May 13 12:55:41.818527 kubelet[2682]: E0513 12:55:41.818236 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce78ba15e290ee2582fdb05cb2e46d23a1853fff093e6f11acf2517187dfeaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-857fbf49df-bgllm" May 13 12:55:41.818527 kubelet[2682]: E0513 12:55:41.818300 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-857fbf49df-bgllm_calico-system(c197e0bf-0648-47d6-b266-361e6fefface)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-857fbf49df-bgllm_calico-system(c197e0bf-0648-47d6-b266-361e6fefface)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ce78ba15e290ee2582fdb05cb2e46d23a1853fff093e6f11acf2517187dfeaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-857fbf49df-bgllm" podUID="c197e0bf-0648-47d6-b266-361e6fefface" May 13 12:55:41.932689 containerd[1557]: time="2025-05-13T12:55:41.932611668Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-7jjmz,Uid:69edeedd-4476-4240-999e-ba555f61eb5e,Namespace:calico-apiserver,Attempt:0,} 
failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"483113e99513dca209e4fa6f373565c519f84f7e8f8e60b26ea051ba386c09f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:41.932924 kubelet[2682]: E0513 12:55:41.932827 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"483113e99513dca209e4fa6f373565c519f84f7e8f8e60b26ea051ba386c09f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:41.932924 kubelet[2682]: E0513 12:55:41.932874 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"483113e99513dca209e4fa6f373565c519f84f7e8f8e60b26ea051ba386c09f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5559745f68-7jjmz" May 13 12:55:41.932924 kubelet[2682]: E0513 12:55:41.932902 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"483113e99513dca209e4fa6f373565c519f84f7e8f8e60b26ea051ba386c09f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5559745f68-7jjmz" May 13 12:55:41.933054 kubelet[2682]: E0513 12:55:41.932943 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5559745f68-7jjmz_calico-apiserver(69edeedd-4476-4240-999e-ba555f61eb5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5559745f68-7jjmz_calico-apiserver(69edeedd-4476-4240-999e-ba555f61eb5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"483113e99513dca209e4fa6f373565c519f84f7e8f8e60b26ea051ba386c09f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5559745f68-7jjmz" podUID="69edeedd-4476-4240-999e-ba555f61eb5e" May 13 12:55:41.933925 containerd[1557]: time="2025-05-13T12:55:41.933876456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-rjh79,Uid:1223f4b3-ae3d-43b8-824a-6a7efb5e24c8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fb992c7bd6de230f3b82cc64cfce20aa090bbf15a21b07f38f27867715bc510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:41.934063 kubelet[2682]: E0513 12:55:41.934039 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fb992c7bd6de230f3b82cc64cfce20aa090bbf15a21b07f38f27867715bc510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:41.934102 kubelet[2682]: E0513 12:55:41.934074 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fb992c7bd6de230f3b82cc64cfce20aa090bbf15a21b07f38f27867715bc510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5559745f68-rjh79" May 13 12:55:41.934102 kubelet[2682]: E0513 12:55:41.934089 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fb992c7bd6de230f3b82cc64cfce20aa090bbf15a21b07f38f27867715bc510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5559745f68-rjh79" May 13 12:55:41.934177 kubelet[2682]: E0513 12:55:41.934118 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5559745f68-rjh79_calico-apiserver(1223f4b3-ae3d-43b8-824a-6a7efb5e24c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5559745f68-rjh79_calico-apiserver(1223f4b3-ae3d-43b8-824a-6a7efb5e24c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fb992c7bd6de230f3b82cc64cfce20aa090bbf15a21b07f38f27867715bc510\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5559745f68-rjh79" podUID="1223f4b3-ae3d-43b8-824a-6a7efb5e24c8" May 13 12:55:42.439679 systemd[1]: run-netns-cni\x2dfb2ffd71\x2d6f62\x2d3618\x2dcaad\x2da1d601b2af8f.mount: Deactivated successfully. May 13 12:55:44.520601 systemd[1]: Started sshd@12-10.0.0.90:22-10.0.0.1:37302.service - OpenSSH per-connection server daemon (10.0.0.1:37302). May 13 12:55:44.570006 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 37302 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:55:44.571291 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:44.575343 systemd-logind[1539]: New session 13 of user core. May 13 12:55:44.592256 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 12:55:44.703026 sshd[4066]: Connection closed by 10.0.0.1 port 37302 May 13 12:55:44.703408 sshd-session[4064]: pam_unix(sshd:session): session closed for user core May 13 12:55:44.707471 systemd[1]: sshd@12-10.0.0.90:22-10.0.0.1:37302.service: Deactivated successfully. May 13 12:55:44.709260 systemd[1]: session-13.scope: Deactivated successfully. May 13 12:55:44.709927 systemd-logind[1539]: Session 13 logged out. Waiting for processes to exit. May 13 12:55:44.711007 systemd-logind[1539]: Removed session 13. 
May 13 12:55:47.379288 kubelet[2682]: I0513 12:55:47.379250 2682 scope.go:117] "RemoveContainer" containerID="af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff" May 13 12:55:47.379687 kubelet[2682]: E0513 12:55:47.379315 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:47.381390 containerd[1557]: time="2025-05-13T12:55:47.381343555Z" level=info msg="CreateContainer within sandbox \"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" May 13 12:55:47.396816 containerd[1557]: time="2025-05-13T12:55:47.396739194Z" level=info msg="Container 94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36: CDI devices from CRI Config.CDIDevices: []" May 13 12:55:47.407641 containerd[1557]: time="2025-05-13T12:55:47.407597587Z" level=info msg="CreateContainer within sandbox \"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36\"" May 13 12:55:47.408089 containerd[1557]: time="2025-05-13T12:55:47.408050848Z" level=info msg="StartContainer for \"94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36\"" May 13 12:55:47.409443 containerd[1557]: time="2025-05-13T12:55:47.409412848Z" level=info msg="connecting to shim 94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36" address="unix:///run/containerd/s/5adccc43415e65af493194268ea3c18184a6f53ebec82c0ff497cf0bcf361db6" protocol=ttrpc version=3 May 13 12:55:47.437436 systemd[1]: Started cri-containerd-94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36.scope - libcontainer container 94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36. May 13 12:55:47.483730 containerd[1557]: time="2025-05-13T12:55:47.483676630Z" level=info msg="StartContainer for \"94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36\" returns successfully" May 13 12:55:47.531499 kubelet[2682]: E0513 12:55:47.531085 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:47.543932 systemd[1]: cri-containerd-94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36.scope: Deactivated successfully. 
May 13 12:55:47.547773 kubelet[2682]: I0513 12:55:47.547699 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-87fvd" podStartSLOduration=14.731488389999999 podStartE2EDuration="38.54766752s" podCreationTimestamp="2025-05-13 12:55:09 +0000 UTC" firstStartedPulling="2025-05-13 12:55:09.871372469 +0000 UTC m=+11.588026206" lastFinishedPulling="2025-05-13 12:55:33.687551599 +0000 UTC m=+35.404205336" observedRunningTime="2025-05-13 12:55:47.54700223 +0000 UTC m=+49.263655967" watchObservedRunningTime="2025-05-13 12:55:47.54766752 +0000 UTC m=+49.264321257" May 13 12:55:47.553798 containerd[1557]: time="2025-05-13T12:55:47.553747755Z" level=info msg="received exit event container_id:\"94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36\" id:\"94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36\" pid:4090 exit_status:1 exited_at:{seconds:1747140947 nanos:553539443}" May 13 12:55:47.553941 containerd[1557]: time="2025-05-13T12:55:47.553907846Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36\" id:\"94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36\" pid:4090 exit_status:1 exited_at:{seconds:1747140947 nanos:553539443}" May 13 12:55:47.555090 containerd[1557]: time="2025-05-13T12:55:47.555061082Z" level=error msg="ExecSync for \"94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"02248d1f9d1433ccd0ce2f284e426adba76fb62a60f6477b1dd0a00c0c93258b\": OCI runtime exec failed: exec failed: cannot exec in a stopped container" May 13 12:55:47.555290 kubelet[2682]: E0513 12:55:47.555246 2682 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"02248d1f9d1433ccd0ce2f284e426adba76fb62a60f6477b1dd0a00c0c93258b\": OCI runtime exec failed: exec failed: cannot exec in a stopped container" containerID="94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 13 12:55:47.568745 containerd[1557]: time="2025-05-13T12:55:47.568693888Z" level=error msg="ExecSync for \"94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"2d8a92bcdf0b02b843b519d90e2fb7437bcec67a714b746d13f2dd6d1b953755\": cannot exec in a stopped state" May 13 12:55:47.568891 kubelet[2682]: E0513 12:55:47.568861 2682 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"2d8a92bcdf0b02b843b519d90e2fb7437bcec67a714b746d13f2dd6d1b953755\": cannot exec in a stopped state" containerID="94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 13 12:55:47.575814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36-rootfs.mount: Deactivated successfully. 
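The pod_startup_latency_tracker record carries its own arithmetic: podStartE2EDuration runs from podCreationTimestamp (12:55:09) to observedRunningTime (12:55:47.547), and podStartSLOduration appears to be that figure minus the image pull window, lastFinishedPulling minus firstStartedPulling. Recomputing from the logged monotonic m=+ offsets reproduces the reported values.

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (seconds since kubelet start) from the log line.
        const (
            firstStartedPulling = 11.588026206 // m=+11.588026206
            lastFinishedPulling = 35.404205336 // m=+35.404205336
            podStartE2EDuration = 38.54766752  // reported end-to-end startup
        )
        pullWindow := lastFinishedPulling - firstStartedPulling
        sloDuration := podStartE2EDuration - pullWindow
        fmt.Printf("image pull window:   %.9fs\n", pullWindow)  // 23.816179130
        fmt.Printf("podStartSLOduration: %.9fs\n", sloDuration) // 14.731488390
    }

The result matches the logged podStartSLOduration=14.731488389999999 up to floating-point noise, which is consistent with the SLO figure excluding time spent pulling the image.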
May 13 12:55:47.582534 containerd[1557]: time="2025-05-13T12:55:47.582484732Z" level=error msg="ExecSync for \"94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"fd4ec4b468405ee1e944bddb62198fcfb91a7c7622dc343de4ce8a31b37e3173\": task 94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36 not found" May 13 12:55:47.582778 kubelet[2682]: E0513 12:55:47.582723 2682 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"fd4ec4b468405ee1e944bddb62198fcfb91a7c7622dc343de4ce8a31b37e3173\": task 94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36 not found" containerID="94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 13 12:55:48.536426 kubelet[2682]: I0513 12:55:48.536380 2682 scope.go:117] "RemoveContainer" containerID="af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff" May 13 12:55:48.536862 kubelet[2682]: I0513 12:55:48.536759 2682 scope.go:117] "RemoveContainer" containerID="94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36" May 13 12:55:48.536862 kubelet[2682]: E0513 12:55:48.536831 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:48.536956 kubelet[2682]: E0513 12:55:48.536931 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-87fvd_calico-system(31f9c550-0d42-4e05-9662-72cf1b1971e6)\"" pod="calico-system/calico-node-87fvd" podUID="31f9c550-0d42-4e05-9662-72cf1b1971e6" May 13 12:55:48.538731 containerd[1557]: time="2025-05-13T12:55:48.538650871Z" level=info msg="RemoveContainer for \"af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff\"" May 13 12:55:48.547414 containerd[1557]: time="2025-05-13T12:55:48.547383367Z" level=info msg="RemoveContainer for \"af0626dd4587d2ecd3adae74ddd89c5e885fff060558ba80c4ca96fd90513fff\" returns successfully" May 13 12:55:49.541705 kubelet[2682]: I0513 12:55:49.541670 2682 scope.go:117] "RemoveContainer" containerID="94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36" May 13 12:55:49.542083 kubelet[2682]: E0513 12:55:49.541743 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:49.542083 kubelet[2682]: E0513 12:55:49.541845 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-87fvd_calico-system(31f9c550-0d42-4e05-9662-72cf1b1971e6)\"" pod="calico-system/calico-node-87fvd" podUID="31f9c550-0d42-4e05-9662-72cf1b1971e6" May 13 12:55:49.719959 systemd[1]: Started sshd@13-10.0.0.90:22-10.0.0.1:56788.service - OpenSSH per-connection server daemon (10.0.0.1:56788). 
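Across restarts the CrashLoopBackOff delay doubles: the second calico-node failure earned "back-off 10s" and the third "back-off 20s". A sketch of that doubling, assuming the kubelet's usual 10s initial delay and 5-minute cap; only the first two steps are actually attested by this log, so treat both constants as assumptions.

    package main

    import (
        "fmt"
        "time"
    )

    // nextBackoff doubles the previous delay up to a cap, the pattern the
    // "back-off 10s" -> "back-off 20s" messages trace out.
    func nextBackoff(prev, limit time.Duration) time.Duration {
        if next := prev * 2; next < limit {
            return next
        }
        return limit
    }

    func main() {
        delay := 10 * time.Second // assumed initial delay
        for i := 1; i <= 6; i++ {
            fmt.Printf("failure %d: back-off %s\n", i, delay)
            delay = nextBackoff(delay, 5*time.Minute) // assumed cap
        }
        // back-off 10s, 20s, 40s, 1m20s, 2m40s, 5m0s
    }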
May 13 12:55:49.774546 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 56788 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:55:49.775972 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:49.780025 systemd-logind[1539]: New session 14 of user core. May 13 12:55:49.788260 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 12:55:49.901810 sshd[4136]: Connection closed by 10.0.0.1 port 56788 May 13 12:55:49.902100 sshd-session[4134]: pam_unix(sshd:session): session closed for user core May 13 12:55:49.906790 systemd[1]: sshd@13-10.0.0.90:22-10.0.0.1:56788.service: Deactivated successfully. May 13 12:55:49.908544 systemd[1]: session-14.scope: Deactivated successfully. May 13 12:55:49.909304 systemd-logind[1539]: Session 14 logged out. Waiting for processes to exit. May 13 12:55:49.910630 systemd-logind[1539]: Removed session 14. May 13 12:55:50.379746 kubelet[2682]: E0513 12:55:50.379708 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:50.380128 containerd[1557]: time="2025-05-13T12:55:50.380068667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fvsq7,Uid:4faa16ac-8041-4063-89da-2ef0847f8c7d,Namespace:kube-system,Attempt:0,}" May 13 12:55:50.435372 containerd[1557]: time="2025-05-13T12:55:50.435298071Z" level=error msg="Failed to destroy network for sandbox \"6907a660cc8cb778f2bf7ff0f730d3f053aeed78f11ad8c0a78e08485689a22e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:50.436990 containerd[1557]: time="2025-05-13T12:55:50.436937991Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fvsq7,Uid:4faa16ac-8041-4063-89da-2ef0847f8c7d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6907a660cc8cb778f2bf7ff0f730d3f053aeed78f11ad8c0a78e08485689a22e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:50.437226 kubelet[2682]: E0513 12:55:50.437174 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6907a660cc8cb778f2bf7ff0f730d3f053aeed78f11ad8c0a78e08485689a22e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:50.437291 kubelet[2682]: E0513 12:55:50.437232 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6907a660cc8cb778f2bf7ff0f730d3f053aeed78f11ad8c0a78e08485689a22e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fvsq7" May 13 12:55:50.437291 kubelet[2682]: E0513 12:55:50.437252 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6907a660cc8cb778f2bf7ff0f730d3f053aeed78f11ad8c0a78e08485689a22e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fvsq7" May 13 12:55:50.437404 kubelet[2682]: E0513 12:55:50.437293 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fvsq7_kube-system(4faa16ac-8041-4063-89da-2ef0847f8c7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fvsq7_kube-system(4faa16ac-8041-4063-89da-2ef0847f8c7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6907a660cc8cb778f2bf7ff0f730d3f053aeed78f11ad8c0a78e08485689a22e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fvsq7" podUID="4faa16ac-8041-4063-89da-2ef0847f8c7d" May 13 12:55:50.438003 systemd[1]: run-netns-cni\x2d85d7159f\x2d361e\x2d9054\x2d933f\x2d351d9438255c.mount: Deactivated successfully. May 13 12:55:51.379907 kubelet[2682]: E0513 12:55:51.379856 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:55:51.380956 containerd[1557]: time="2025-05-13T12:55:51.380269692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xlq4k,Uid:15b8d047-8ef6-4678-b676-93259a433fcd,Namespace:kube-system,Attempt:0,}" May 13 12:55:51.430964 containerd[1557]: time="2025-05-13T12:55:51.430896182Z" level=error msg="Failed to destroy network for sandbox \"ddef72b7c33b25bbf876c5add4915702e8bfc5e8131e09054b4f739f6caebb33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:51.432333 containerd[1557]: time="2025-05-13T12:55:51.432295581Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xlq4k,Uid:15b8d047-8ef6-4678-b676-93259a433fcd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddef72b7c33b25bbf876c5add4915702e8bfc5e8131e09054b4f739f6caebb33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:51.432605 kubelet[2682]: E0513 12:55:51.432544 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddef72b7c33b25bbf876c5add4915702e8bfc5e8131e09054b4f739f6caebb33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:51.432721 kubelet[2682]: E0513 12:55:51.432620 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddef72b7c33b25bbf876c5add4915702e8bfc5e8131e09054b4f739f6caebb33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-xlq4k" May 13 12:55:51.432721 kubelet[2682]: E0513 12:55:51.432657 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddef72b7c33b25bbf876c5add4915702e8bfc5e8131e09054b4f739f6caebb33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xlq4k" May 13 12:55:51.432881 kubelet[2682]: E0513 12:55:51.432715 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xlq4k_kube-system(15b8d047-8ef6-4678-b676-93259a433fcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xlq4k_kube-system(15b8d047-8ef6-4678-b676-93259a433fcd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddef72b7c33b25bbf876c5add4915702e8bfc5e8131e09054b4f739f6caebb33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xlq4k" podUID="15b8d047-8ef6-4678-b676-93259a433fcd" May 13 12:55:51.433229 systemd[1]: run-netns-cni\x2d7e01ae73\x2d0986\x2de362\x2dae9f\x2d306f72563eaf.mount: Deactivated successfully. May 13 12:55:52.380491 containerd[1557]: time="2025-05-13T12:55:52.380399338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-857fbf49df-bgllm,Uid:c197e0bf-0648-47d6-b266-361e6fefface,Namespace:calico-system,Attempt:0,}" May 13 12:55:52.380775 containerd[1557]: time="2025-05-13T12:55:52.380720471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-7jjmz,Uid:69edeedd-4476-4240-999e-ba555f61eb5e,Namespace:calico-apiserver,Attempt:0,}" May 13 12:55:52.433208 containerd[1557]: time="2025-05-13T12:55:52.433120403Z" level=error msg="Failed to destroy network for sandbox \"5330e2c6932a89833e82fe3dbd4cb76759ab0929af2e547d6e4300cd4e4e976e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:52.436545 containerd[1557]: time="2025-05-13T12:55:52.436474402Z" level=error msg="Failed to destroy network for sandbox \"d0edfd5b967bfe18c87da5673460d2a402288f5750d3622222d7374b6ca0f27f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:52.436971 systemd[1]: run-netns-cni\x2de6f97fac\x2d9ee8\x2d5aa7\x2ddc49\x2d9f43fd9e2822.mount: Deactivated successfully. 
May 13 12:55:52.439200 containerd[1557]: time="2025-05-13T12:55:52.439149377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-857fbf49df-bgllm,Uid:c197e0bf-0648-47d6-b266-361e6fefface,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5330e2c6932a89833e82fe3dbd4cb76759ab0929af2e547d6e4300cd4e4e976e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:52.439624 kubelet[2682]: E0513 12:55:52.439554 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5330e2c6932a89833e82fe3dbd4cb76759ab0929af2e547d6e4300cd4e4e976e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:52.439919 kubelet[2682]: E0513 12:55:52.439635 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5330e2c6932a89833e82fe3dbd4cb76759ab0929af2e547d6e4300cd4e4e976e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-857fbf49df-bgllm" May 13 12:55:52.439919 kubelet[2682]: E0513 12:55:52.439660 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5330e2c6932a89833e82fe3dbd4cb76759ab0929af2e547d6e4300cd4e4e976e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-857fbf49df-bgllm" May 13 12:55:52.439919 kubelet[2682]: E0513 12:55:52.439712 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-857fbf49df-bgllm_calico-system(c197e0bf-0648-47d6-b266-361e6fefface)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-857fbf49df-bgllm_calico-system(c197e0bf-0648-47d6-b266-361e6fefface)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5330e2c6932a89833e82fe3dbd4cb76759ab0929af2e547d6e4300cd4e4e976e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-857fbf49df-bgllm" podUID="c197e0bf-0648-47d6-b266-361e6fefface" May 13 12:55:52.440299 containerd[1557]: time="2025-05-13T12:55:52.440264160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-7jjmz,Uid:69edeedd-4476-4240-999e-ba555f61eb5e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0edfd5b967bfe18c87da5673460d2a402288f5750d3622222d7374b6ca0f27f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:52.440423 kubelet[2682]: E0513 
12:55:52.440396 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0edfd5b967bfe18c87da5673460d2a402288f5750d3622222d7374b6ca0f27f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:52.440466 kubelet[2682]: E0513 12:55:52.440421 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0edfd5b967bfe18c87da5673460d2a402288f5750d3622222d7374b6ca0f27f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5559745f68-7jjmz" May 13 12:55:52.440466 kubelet[2682]: E0513 12:55:52.440437 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0edfd5b967bfe18c87da5673460d2a402288f5750d3622222d7374b6ca0f27f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5559745f68-7jjmz" May 13 12:55:52.440539 kubelet[2682]: E0513 12:55:52.440459 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5559745f68-7jjmz_calico-apiserver(69edeedd-4476-4240-999e-ba555f61eb5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5559745f68-7jjmz_calico-apiserver(69edeedd-4476-4240-999e-ba555f61eb5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0edfd5b967bfe18c87da5673460d2a402288f5750d3622222d7374b6ca0f27f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5559745f68-7jjmz" podUID="69edeedd-4476-4240-999e-ba555f61eb5e" May 13 12:55:52.440800 systemd[1]: run-netns-cni\x2d4d678274\x2dc408\x2ddf00\x2dcca2\x2df6b6d149a530.mount: Deactivated successfully. 
May 13 12:55:54.381794 containerd[1557]: time="2025-05-13T12:55:54.381745686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-rjh79,Uid:1223f4b3-ae3d-43b8-824a-6a7efb5e24c8,Namespace:calico-apiserver,Attempt:0,}" May 13 12:55:54.382188 containerd[1557]: time="2025-05-13T12:55:54.381955319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct2sc,Uid:99af3312-c9d6-477a-83b3-e903dd409646,Namespace:calico-system,Attempt:0,}" May 13 12:55:54.437387 containerd[1557]: time="2025-05-13T12:55:54.437329602Z" level=error msg="Failed to destroy network for sandbox \"a7f2afed3e9718f31640705eea180c88a2d63670661ae2132545d7782f379087\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:54.438352 containerd[1557]: time="2025-05-13T12:55:54.438324931Z" level=error msg="Failed to destroy network for sandbox \"cea0c9218ec19d36e9a83cfd0669f3b39c8405b4f99efd95753db7ab7a6aa77a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:54.439257 containerd[1557]: time="2025-05-13T12:55:54.439221996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-rjh79,Uid:1223f4b3-ae3d-43b8-824a-6a7efb5e24c8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7f2afed3e9718f31640705eea180c88a2d63670661ae2132545d7782f379087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:54.439582 kubelet[2682]: E0513 12:55:54.439538 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7f2afed3e9718f31640705eea180c88a2d63670661ae2132545d7782f379087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:54.439895 kubelet[2682]: E0513 12:55:54.439601 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7f2afed3e9718f31640705eea180c88a2d63670661ae2132545d7782f379087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5559745f68-rjh79" May 13 12:55:54.439895 kubelet[2682]: E0513 12:55:54.439630 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7f2afed3e9718f31640705eea180c88a2d63670661ae2132545d7782f379087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5559745f68-rjh79" May 13 12:55:54.439895 kubelet[2682]: E0513 12:55:54.439673 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5559745f68-rjh79_calico-apiserver(1223f4b3-ae3d-43b8-824a-6a7efb5e24c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5559745f68-rjh79_calico-apiserver(1223f4b3-ae3d-43b8-824a-6a7efb5e24c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7f2afed3e9718f31640705eea180c88a2d63670661ae2132545d7782f379087\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5559745f68-rjh79" podUID="1223f4b3-ae3d-43b8-824a-6a7efb5e24c8" May 13 12:55:54.440308 systemd[1]: run-netns-cni\x2d49517954\x2d1c76\x2d18b6\x2dfff0\x2d9042a6f2fd17.mount: Deactivated successfully. May 13 12:55:54.440857 systemd[1]: run-netns-cni\x2d22797ef2\x2d2c22\x2dca7c\x2d7496\x2d0fffc37ce672.mount: Deactivated successfully. May 13 12:55:54.441537 kubelet[2682]: E0513 12:55:54.441035 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cea0c9218ec19d36e9a83cfd0669f3b39c8405b4f99efd95753db7ab7a6aa77a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:54.441537 kubelet[2682]: E0513 12:55:54.441094 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cea0c9218ec19d36e9a83cfd0669f3b39c8405b4f99efd95753db7ab7a6aa77a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct2sc" May 13 12:55:54.441537 kubelet[2682]: E0513 12:55:54.441113 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cea0c9218ec19d36e9a83cfd0669f3b39c8405b4f99efd95753db7ab7a6aa77a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct2sc" May 13 12:55:54.441624 containerd[1557]: time="2025-05-13T12:55:54.440831628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct2sc,Uid:99af3312-c9d6-477a-83b3-e903dd409646,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cea0c9218ec19d36e9a83cfd0669f3b39c8405b4f99efd95753db7ab7a6aa77a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:55:54.441753 kubelet[2682]: E0513 12:55:54.441165 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ct2sc_calico-system(99af3312-c9d6-477a-83b3-e903dd409646)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ct2sc_calico-system(99af3312-c9d6-477a-83b3-e903dd409646)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cea0c9218ec19d36e9a83cfd0669f3b39c8405b4f99efd95753db7ab7a6aa77a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ct2sc" podUID="99af3312-c9d6-477a-83b3-e903dd409646" May 13 12:55:54.917676 systemd[1]: Started sshd@14-10.0.0.90:22-10.0.0.1:56796.service - OpenSSH per-connection server daemon (10.0.0.1:56796). May 13 12:55:54.965830 sshd[4380]: Accepted publickey for core from 10.0.0.1 port 56796 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:55:54.967243 sshd-session[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:54.971105 systemd-logind[1539]: New session 15 of user core. May 13 12:55:54.986255 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 12:55:55.091941 sshd[4382]: Connection closed by 10.0.0.1 port 56796 May 13 12:55:55.092353 sshd-session[4380]: pam_unix(sshd:session): session closed for user core May 13 12:55:55.096774 systemd[1]: sshd@14-10.0.0.90:22-10.0.0.1:56796.service: Deactivated successfully. May 13 12:55:55.098799 systemd[1]: session-15.scope: Deactivated successfully. May 13 12:55:55.099660 systemd-logind[1539]: Session 15 logged out. Waiting for processes to exit. May 13 12:55:55.100691 systemd-logind[1539]: Removed session 15. May 13 12:56:00.108086 systemd[1]: Started sshd@15-10.0.0.90:22-10.0.0.1:40400.service - OpenSSH per-connection server daemon (10.0.0.1:40400). May 13 12:56:00.150927 sshd[4398]: Accepted publickey for core from 10.0.0.1 port 40400 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:56:00.152327 sshd-session[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:56:00.156331 systemd-logind[1539]: New session 16 of user core. May 13 12:56:00.167285 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 12:56:00.270575 sshd[4400]: Connection closed by 10.0.0.1 port 40400 May 13 12:56:00.270845 sshd-session[4398]: pam_unix(sshd:session): session closed for user core May 13 12:56:00.274684 systemd[1]: sshd@15-10.0.0.90:22-10.0.0.1:40400.service: Deactivated successfully. May 13 12:56:00.276751 systemd[1]: session-16.scope: Deactivated successfully. May 13 12:56:00.277645 systemd-logind[1539]: Session 16 logged out. Waiting for processes to exit. May 13 12:56:00.278985 systemd-logind[1539]: Removed session 16. 
May 13 12:56:01.380006 kubelet[2682]: E0513 12:56:01.379948 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:56:01.380481 containerd[1557]: time="2025-05-13T12:56:01.380355390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fvsq7,Uid:4faa16ac-8041-4063-89da-2ef0847f8c7d,Namespace:kube-system,Attempt:0,}" May 13 12:56:01.451240 containerd[1557]: time="2025-05-13T12:56:01.451177116Z" level=error msg="Failed to destroy network for sandbox \"145c46c7c3114452afb56f2fbaf55a73c9d7e30e348fc6d071ca88ee511b4957\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:01.453346 containerd[1557]: time="2025-05-13T12:56:01.453292377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fvsq7,Uid:4faa16ac-8041-4063-89da-2ef0847f8c7d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"145c46c7c3114452afb56f2fbaf55a73c9d7e30e348fc6d071ca88ee511b4957\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:01.453654 kubelet[2682]: E0513 12:56:01.453602 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"145c46c7c3114452afb56f2fbaf55a73c9d7e30e348fc6d071ca88ee511b4957\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:01.453720 kubelet[2682]: E0513 12:56:01.453668 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"145c46c7c3114452afb56f2fbaf55a73c9d7e30e348fc6d071ca88ee511b4957\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fvsq7" May 13 12:56:01.453720 kubelet[2682]: E0513 12:56:01.453691 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"145c46c7c3114452afb56f2fbaf55a73c9d7e30e348fc6d071ca88ee511b4957\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fvsq7" May 13 12:56:01.453781 kubelet[2682]: E0513 12:56:01.453747 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fvsq7_kube-system(4faa16ac-8041-4063-89da-2ef0847f8c7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fvsq7_kube-system(4faa16ac-8041-4063-89da-2ef0847f8c7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"145c46c7c3114452afb56f2fbaf55a73c9d7e30e348fc6d071ca88ee511b4957\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fvsq7" podUID="4faa16ac-8041-4063-89da-2ef0847f8c7d" May 13 12:56:01.454194 systemd[1]: run-netns-cni\x2d3cfe3dbb\x2d8207\x2d2b64\x2d70fe\x2d4d4a1066e7ad.mount: Deactivated successfully. May 13 12:56:02.379457 kubelet[2682]: E0513 12:56:02.379424 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:56:02.379618 kubelet[2682]: I0513 12:56:02.379571 2682 scope.go:117] "RemoveContainer" containerID="94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36" May 13 12:56:02.379642 kubelet[2682]: E0513 12:56:02.379616 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:56:02.380120 kubelet[2682]: E0513 12:56:02.379681 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-87fvd_calico-system(31f9c550-0d42-4e05-9662-72cf1b1971e6)\"" pod="calico-system/calico-node-87fvd" podUID="31f9c550-0d42-4e05-9662-72cf1b1971e6" May 13 12:56:02.380479 containerd[1557]: time="2025-05-13T12:56:02.379884416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xlq4k,Uid:15b8d047-8ef6-4678-b676-93259a433fcd,Namespace:kube-system,Attempt:0,}" May 13 12:56:02.551049 containerd[1557]: time="2025-05-13T12:56:02.550990697Z" level=error msg="Failed to destroy network for sandbox \"61ee0e1269e71370eb48e397a1316110980701423ca125aba0b0f54c45822789\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:02.553466 systemd[1]: run-netns-cni\x2da8b62c22\x2d184b\x2dfacb\x2dac8f\x2d54a88bfeabbf.mount: Deactivated successfully. 
May 13 12:56:02.590387 containerd[1557]: time="2025-05-13T12:56:02.590305870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xlq4k,Uid:15b8d047-8ef6-4678-b676-93259a433fcd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61ee0e1269e71370eb48e397a1316110980701423ca125aba0b0f54c45822789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:02.590602 kubelet[2682]: E0513 12:56:02.590553 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61ee0e1269e71370eb48e397a1316110980701423ca125aba0b0f54c45822789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:02.590645 kubelet[2682]: E0513 12:56:02.590611 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61ee0e1269e71370eb48e397a1316110980701423ca125aba0b0f54c45822789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xlq4k" May 13 12:56:02.590645 kubelet[2682]: E0513 12:56:02.590637 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61ee0e1269e71370eb48e397a1316110980701423ca125aba0b0f54c45822789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xlq4k" May 13 12:56:02.590730 kubelet[2682]: E0513 12:56:02.590683 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xlq4k_kube-system(15b8d047-8ef6-4678-b676-93259a433fcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xlq4k_kube-system(15b8d047-8ef6-4678-b676-93259a433fcd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61ee0e1269e71370eb48e397a1316110980701423ca125aba0b0f54c45822789\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xlq4k" podUID="15b8d047-8ef6-4678-b676-93259a433fcd" May 13 12:56:03.380480 containerd[1557]: time="2025-05-13T12:56:03.380437582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-857fbf49df-bgllm,Uid:c197e0bf-0648-47d6-b266-361e6fefface,Namespace:calico-system,Attempt:0,}" May 13 12:56:03.441248 containerd[1557]: time="2025-05-13T12:56:03.441202732Z" level=error msg="Failed to destroy network for sandbox \"aeb20a88a1de8cfc9cfdffd7048a68aa066d161d504da14e31bded0425b55be7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:03.443162 containerd[1557]: time="2025-05-13T12:56:03.443112175Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-857fbf49df-bgllm,Uid:c197e0bf-0648-47d6-b266-361e6fefface,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeb20a88a1de8cfc9cfdffd7048a68aa066d161d504da14e31bded0425b55be7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:03.443403 kubelet[2682]: E0513 12:56:03.443360 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeb20a88a1de8cfc9cfdffd7048a68aa066d161d504da14e31bded0425b55be7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:03.443732 kubelet[2682]: E0513 12:56:03.443435 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeb20a88a1de8cfc9cfdffd7048a68aa066d161d504da14e31bded0425b55be7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-857fbf49df-bgllm" May 13 12:56:03.443732 kubelet[2682]: E0513 12:56:03.443456 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeb20a88a1de8cfc9cfdffd7048a68aa066d161d504da14e31bded0425b55be7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-857fbf49df-bgllm" May 13 12:56:03.443732 kubelet[2682]: E0513 12:56:03.443509 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-857fbf49df-bgllm_calico-system(c197e0bf-0648-47d6-b266-361e6fefface)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-857fbf49df-bgllm_calico-system(c197e0bf-0648-47d6-b266-361e6fefface)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aeb20a88a1de8cfc9cfdffd7048a68aa066d161d504da14e31bded0425b55be7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-857fbf49df-bgllm" podUID="c197e0bf-0648-47d6-b266-361e6fefface" May 13 12:56:03.443701 systemd[1]: run-netns-cni\x2d38e9c72c\x2d4ae9\x2d918e\x2d5251\x2dbba5831892ec.mount: Deactivated successfully. May 13 12:56:05.286414 systemd[1]: Started sshd@16-10.0.0.90:22-10.0.0.1:40410.service - OpenSSH per-connection server daemon (10.0.0.1:40410). May 13 12:56:05.348341 sshd[4526]: Accepted publickey for core from 10.0.0.1 port 40410 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:56:05.350430 sshd-session[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:56:05.355417 systemd-logind[1539]: New session 17 of user core. May 13 12:56:05.372318 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 13 12:56:05.485976 sshd[4528]: Connection closed by 10.0.0.1 port 40410 May 13 12:56:05.486332 sshd-session[4526]: pam_unix(sshd:session): session closed for user core May 13 12:56:05.489275 systemd[1]: sshd@16-10.0.0.90:22-10.0.0.1:40410.service: Deactivated successfully. May 13 12:56:05.491130 systemd[1]: session-17.scope: Deactivated successfully. May 13 12:56:05.493279 systemd-logind[1539]: Session 17 logged out. Waiting for processes to exit. May 13 12:56:05.494273 systemd-logind[1539]: Removed session 17. May 13 12:56:06.862884 kubelet[2682]: I0513 12:56:06.862837 2682 scope.go:117] "RemoveContainer" containerID="94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36" May 13 12:56:06.863452 kubelet[2682]: E0513 12:56:06.862911 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:56:06.863452 kubelet[2682]: E0513 12:56:06.862989 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-87fvd_calico-system(31f9c550-0d42-4e05-9662-72cf1b1971e6)\"" pod="calico-system/calico-node-87fvd" podUID="31f9c550-0d42-4e05-9662-72cf1b1971e6" May 13 12:56:07.380765 containerd[1557]: time="2025-05-13T12:56:07.380710856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-7jjmz,Uid:69edeedd-4476-4240-999e-ba555f61eb5e,Namespace:calico-apiserver,Attempt:0,}" May 13 12:56:07.437691 containerd[1557]: time="2025-05-13T12:56:07.437635980Z" level=error msg="Failed to destroy network for sandbox \"b2e8b907372c2751847dadf1540a95c33e7eb257f3a2192b7939579087251347\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:07.438955 containerd[1557]: time="2025-05-13T12:56:07.438918682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-7jjmz,Uid:69edeedd-4476-4240-999e-ba555f61eb5e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e8b907372c2751847dadf1540a95c33e7eb257f3a2192b7939579087251347\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:07.439228 kubelet[2682]: E0513 12:56:07.439176 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e8b907372c2751847dadf1540a95c33e7eb257f3a2192b7939579087251347\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:07.439284 kubelet[2682]: E0513 12:56:07.439258 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e8b907372c2751847dadf1540a95c33e7eb257f3a2192b7939579087251347\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5559745f68-7jjmz" May 13 12:56:07.439284 
kubelet[2682]: E0513 12:56:07.439279 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e8b907372c2751847dadf1540a95c33e7eb257f3a2192b7939579087251347\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5559745f68-7jjmz" May 13 12:56:07.439393 kubelet[2682]: E0513 12:56:07.439342 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5559745f68-7jjmz_calico-apiserver(69edeedd-4476-4240-999e-ba555f61eb5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5559745f68-7jjmz_calico-apiserver(69edeedd-4476-4240-999e-ba555f61eb5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2e8b907372c2751847dadf1540a95c33e7eb257f3a2192b7939579087251347\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5559745f68-7jjmz" podUID="69edeedd-4476-4240-999e-ba555f61eb5e" May 13 12:56:07.439876 systemd[1]: run-netns-cni\x2d9c069675\x2d790b\x2dfab0\x2db5a2\x2d549f1d04c12e.mount: Deactivated successfully. May 13 12:56:08.379886 containerd[1557]: time="2025-05-13T12:56:08.379836531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct2sc,Uid:99af3312-c9d6-477a-83b3-e903dd409646,Namespace:calico-system,Attempt:0,}" May 13 12:56:08.434670 containerd[1557]: time="2025-05-13T12:56:08.434620665Z" level=error msg="Failed to destroy network for sandbox \"8878e2df048ffefadc7a848e684b5f659de0bf2e678f6b75be60a42c15b82ff3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:08.436579 containerd[1557]: time="2025-05-13T12:56:08.436539238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct2sc,Uid:99af3312-c9d6-477a-83b3-e903dd409646,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8878e2df048ffefadc7a848e684b5f659de0bf2e678f6b75be60a42c15b82ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:08.436778 kubelet[2682]: E0513 12:56:08.436716 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8878e2df048ffefadc7a848e684b5f659de0bf2e678f6b75be60a42c15b82ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:08.437018 kubelet[2682]: E0513 12:56:08.436780 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8878e2df048ffefadc7a848e684b5f659de0bf2e678f6b75be60a42c15b82ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-ct2sc" May 13 12:56:08.437018 kubelet[2682]: E0513 12:56:08.436799 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8878e2df048ffefadc7a848e684b5f659de0bf2e678f6b75be60a42c15b82ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct2sc" May 13 12:56:08.437018 kubelet[2682]: E0513 12:56:08.436845 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ct2sc_calico-system(99af3312-c9d6-477a-83b3-e903dd409646)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ct2sc_calico-system(99af3312-c9d6-477a-83b3-e903dd409646)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8878e2df048ffefadc7a848e684b5f659de0bf2e678f6b75be60a42c15b82ff3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ct2sc" podUID="99af3312-c9d6-477a-83b3-e903dd409646" May 13 12:56:08.437416 systemd[1]: run-netns-cni\x2d0b5f2528\x2d7cb8\x2d3190\x2df650\x2d3b7e3adb1116.mount: Deactivated successfully. May 13 12:56:09.380586 containerd[1557]: time="2025-05-13T12:56:09.380541881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-rjh79,Uid:1223f4b3-ae3d-43b8-824a-6a7efb5e24c8,Namespace:calico-apiserver,Attempt:0,}" May 13 12:56:09.436107 containerd[1557]: time="2025-05-13T12:56:09.436057616Z" level=error msg="Failed to destroy network for sandbox \"7a956a6703db5025a771d86990eceb4f99c989c149d412b3a130aa1de9f9df99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:09.437916 containerd[1557]: time="2025-05-13T12:56:09.437852976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-rjh79,Uid:1223f4b3-ae3d-43b8-824a-6a7efb5e24c8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a956a6703db5025a771d86990eceb4f99c989c149d412b3a130aa1de9f9df99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:09.438361 kubelet[2682]: E0513 12:56:09.438315 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a956a6703db5025a771d86990eceb4f99c989c149d412b3a130aa1de9f9df99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:56:09.438616 kubelet[2682]: E0513 12:56:09.438383 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a956a6703db5025a771d86990eceb4f99c989c149d412b3a130aa1de9f9df99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5559745f68-rjh79" May 13 12:56:09.438616 kubelet[2682]: E0513 12:56:09.438409 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a956a6703db5025a771d86990eceb4f99c989c149d412b3a130aa1de9f9df99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5559745f68-rjh79" May 13 12:56:09.438616 kubelet[2682]: E0513 12:56:09.438454 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5559745f68-rjh79_calico-apiserver(1223f4b3-ae3d-43b8-824a-6a7efb5e24c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5559745f68-rjh79_calico-apiserver(1223f4b3-ae3d-43b8-824a-6a7efb5e24c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a956a6703db5025a771d86990eceb4f99c989c149d412b3a130aa1de9f9df99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5559745f68-rjh79" podUID="1223f4b3-ae3d-43b8-824a-6a7efb5e24c8" May 13 12:56:09.438412 systemd[1]: run-netns-cni\x2dad08b846\x2db947\x2df949\x2dcef1\x2d53c5e582b8c5.mount: Deactivated successfully. May 13 12:56:09.632061 containerd[1557]: time="2025-05-13T12:56:09.631530969Z" level=info msg="StopPodSandbox for \"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\"" May 13 12:56:09.637736 containerd[1557]: time="2025-05-13T12:56:09.637689751Z" level=info msg="Container to stop \"51d8de80c24a4165b9f870a7fc4435fc9a3cc1f01ae38aca45cb798cadfaab31\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:56:09.637836 containerd[1557]: time="2025-05-13T12:56:09.637744216Z" level=info msg="Container to stop \"db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:56:09.637836 containerd[1557]: time="2025-05-13T12:56:09.637754356Z" level=info msg="Container to stop \"94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:56:09.657420 containerd[1557]: time="2025-05-13T12:56:09.657375868Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" id:\"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" pid:3176 exit_status:137 exited_at:{seconds:1747140969 nanos:656882301}" May 13 12:56:09.657742 systemd[1]: cri-containerd-f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116.scope: Deactivated successfully. May 13 12:56:09.680719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116-rootfs.mount: Deactivated successfully. 
May 13 12:56:09.693971 containerd[1557]: time="2025-05-13T12:56:09.693926532Z" level=info msg="shim disconnected" id=f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116 namespace=k8s.io May 13 12:56:09.695006 containerd[1557]: time="2025-05-13T12:56:09.694418306Z" level=warning msg="cleaning up after shim disconnected" id=f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116 namespace=k8s.io May 13 12:56:09.710561 containerd[1557]: time="2025-05-13T12:56:09.694432974Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 12:56:09.710957 containerd[1557]: time="2025-05-13T12:56:09.694523690Z" level=error msg="Failed to handle event container_id:\"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" id:\"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" pid:3176 exit_status:137 exited_at:{seconds:1747140969 nanos:656882301} for f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116" error="failed to handle container TaskExit event: failed to stop sandbox: ttrpc: closed" May 13 12:56:09.750691 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116-shm.mount: Deactivated successfully. May 13 12:56:09.773154 containerd[1557]: time="2025-05-13T12:56:09.773098151Z" level=info msg="TearDown network for sandbox \"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" successfully" May 13 12:56:09.773154 containerd[1557]: time="2025-05-13T12:56:09.773155472Z" level=info msg="StopPodSandbox for \"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" returns successfully" May 13 12:56:09.774086 containerd[1557]: time="2025-05-13T12:56:09.773969240Z" level=info msg="received exit event sandbox_id:\"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" exit_status:137 exited_at:{seconds:1747140969 nanos:656882301}" May 13 12:56:09.805010 kubelet[2682]: I0513 12:56:09.804968 2682 memory_manager.go:355] "RemoveStaleState removing state" podUID="31f9c550-0d42-4e05-9662-72cf1b1971e6" containerName="calico-node" May 13 12:56:09.805010 kubelet[2682]: I0513 12:56:09.804998 2682 memory_manager.go:355] "RemoveStaleState removing state" podUID="31f9c550-0d42-4e05-9662-72cf1b1971e6" containerName="calico-node" May 13 12:56:09.805010 kubelet[2682]: I0513 12:56:09.805005 2682 memory_manager.go:355] "RemoveStaleState removing state" podUID="31f9c550-0d42-4e05-9662-72cf1b1971e6" containerName="calico-node" May 13 12:56:09.811162 kubelet[2682]: I0513 12:56:09.811101 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31f9c550-0d42-4e05-9662-72cf1b1971e6-tigera-ca-bundle\") pod \"31f9c550-0d42-4e05-9662-72cf1b1971e6\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " May 13 12:56:09.812319 kubelet[2682]: I0513 12:56:09.811312 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvn76\" (UniqueName: \"kubernetes.io/projected/31f9c550-0d42-4e05-9662-72cf1b1971e6-kube-api-access-xvn76\") pod \"31f9c550-0d42-4e05-9662-72cf1b1971e6\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " May 13 12:56:09.812473 kubelet[2682]: I0513 12:56:09.812337 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-cni-log-dir\") pod \"31f9c550-0d42-4e05-9662-72cf1b1971e6\" (UID: 
\"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " May 13 12:56:09.812473 kubelet[2682]: I0513 12:56:09.812369 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-lib-modules\") pod \"31f9c550-0d42-4e05-9662-72cf1b1971e6\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " May 13 12:56:09.812473 kubelet[2682]: I0513 12:56:09.812387 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-var-run-calico\") pod \"31f9c550-0d42-4e05-9662-72cf1b1971e6\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " May 13 12:56:09.812473 kubelet[2682]: I0513 12:56:09.812406 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-policysync\") pod \"31f9c550-0d42-4e05-9662-72cf1b1971e6\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " May 13 12:56:09.812473 kubelet[2682]: I0513 12:56:09.812427 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-xtables-lock\") pod \"31f9c550-0d42-4e05-9662-72cf1b1971e6\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " May 13 12:56:09.812473 kubelet[2682]: I0513 12:56:09.812449 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-var-lib-calico\") pod \"31f9c550-0d42-4e05-9662-72cf1b1971e6\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " May 13 12:56:09.812624 kubelet[2682]: I0513 12:56:09.812468 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-flexvol-driver-host\") pod \"31f9c550-0d42-4e05-9662-72cf1b1971e6\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " May 13 12:56:09.812624 kubelet[2682]: I0513 12:56:09.812488 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-cni-bin-dir\") pod \"31f9c550-0d42-4e05-9662-72cf1b1971e6\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " May 13 12:56:09.812624 kubelet[2682]: I0513 12:56:09.812513 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-cni-net-dir\") pod \"31f9c550-0d42-4e05-9662-72cf1b1971e6\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " May 13 12:56:09.812624 kubelet[2682]: I0513 12:56:09.812537 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/31f9c550-0d42-4e05-9662-72cf1b1971e6-node-certs\") pod \"31f9c550-0d42-4e05-9662-72cf1b1971e6\" (UID: \"31f9c550-0d42-4e05-9662-72cf1b1971e6\") " May 13 12:56:09.812970 kubelet[2682]: I0513 12:56:09.812707 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "31f9c550-0d42-4e05-9662-72cf1b1971e6" (UID: "31f9c550-0d42-4e05-9662-72cf1b1971e6"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:56:09.813030 kubelet[2682]: I0513 12:56:09.812921 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "31f9c550-0d42-4e05-9662-72cf1b1971e6" (UID: "31f9c550-0d42-4e05-9662-72cf1b1971e6"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:56:09.813030 kubelet[2682]: I0513 12:56:09.812951 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "31f9c550-0d42-4e05-9662-72cf1b1971e6" (UID: "31f9c550-0d42-4e05-9662-72cf1b1971e6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:56:09.813084 kubelet[2682]: I0513 12:56:09.813042 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "31f9c550-0d42-4e05-9662-72cf1b1971e6" (UID: "31f9c550-0d42-4e05-9662-72cf1b1971e6"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:56:09.813084 kubelet[2682]: I0513 12:56:09.813074 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "31f9c550-0d42-4e05-9662-72cf1b1971e6" (UID: "31f9c550-0d42-4e05-9662-72cf1b1971e6"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:56:09.813084 kubelet[2682]: I0513 12:56:09.813092 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "31f9c550-0d42-4e05-9662-72cf1b1971e6" (UID: "31f9c550-0d42-4e05-9662-72cf1b1971e6"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:56:09.813243 kubelet[2682]: I0513 12:56:09.813110 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "31f9c550-0d42-4e05-9662-72cf1b1971e6" (UID: "31f9c550-0d42-4e05-9662-72cf1b1971e6"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:56:09.813243 kubelet[2682]: I0513 12:56:09.813130 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-policysync" (OuterVolumeSpecName: "policysync") pod "31f9c550-0d42-4e05-9662-72cf1b1971e6" (UID: "31f9c550-0d42-4e05-9662-72cf1b1971e6"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:56:09.813243 kubelet[2682]: I0513 12:56:09.813164 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "31f9c550-0d42-4e05-9662-72cf1b1971e6" (UID: "31f9c550-0d42-4e05-9662-72cf1b1971e6"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:56:09.816878 systemd[1]: Created slice kubepods-besteffort-podc9ef06b7_86c0_4559_a499_7c67baa98761.slice - libcontainer container kubepods-besteffort-podc9ef06b7_86c0_4559_a499_7c67baa98761.slice. May 13 12:56:09.818885 kubelet[2682]: I0513 12:56:09.818726 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f9c550-0d42-4e05-9662-72cf1b1971e6-node-certs" (OuterVolumeSpecName: "node-certs") pod "31f9c550-0d42-4e05-9662-72cf1b1971e6" (UID: "31f9c550-0d42-4e05-9662-72cf1b1971e6"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 12:56:09.819878 kubelet[2682]: I0513 12:56:09.819849 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f9c550-0d42-4e05-9662-72cf1b1971e6-kube-api-access-xvn76" (OuterVolumeSpecName: "kube-api-access-xvn76") pod "31f9c550-0d42-4e05-9662-72cf1b1971e6" (UID: "31f9c550-0d42-4e05-9662-72cf1b1971e6"). InnerVolumeSpecName "kube-api-access-xvn76". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 12:56:09.820021 systemd[1]: var-lib-kubelet-pods-31f9c550\x2d0d42\x2d4e05\x2d9662\x2d72cf1b1971e6-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. May 13 12:56:09.820653 kubelet[2682]: I0513 12:56:09.820213 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31f9c550-0d42-4e05-9662-72cf1b1971e6-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "31f9c550-0d42-4e05-9662-72cf1b1971e6" (UID: "31f9c550-0d42-4e05-9662-72cf1b1971e6"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 12:56:09.820307 systemd[1]: var-lib-kubelet-pods-31f9c550\x2d0d42\x2d4e05\x2d9662\x2d72cf1b1971e6-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. May 13 12:56:09.820391 systemd[1]: var-lib-kubelet-pods-31f9c550\x2d0d42\x2d4e05\x2d9662\x2d72cf1b1971e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxvn76.mount: Deactivated successfully. 
May 13 12:56:09.914366 kubelet[2682]: I0513 12:56:09.913491 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c9ef06b7-86c0-4559-a499-7c67baa98761-flexvol-driver-host\") pod \"calico-node-b9lf8\" (UID: \"c9ef06b7-86c0-4559-a499-7c67baa98761\") " pod="calico-system/calico-node-b9lf8"
May 13 12:56:09.914366 kubelet[2682]: I0513 12:56:09.913531 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mszq5\" (UniqueName: \"kubernetes.io/projected/c9ef06b7-86c0-4559-a499-7c67baa98761-kube-api-access-mszq5\") pod \"calico-node-b9lf8\" (UID: \"c9ef06b7-86c0-4559-a499-7c67baa98761\") " pod="calico-system/calico-node-b9lf8"
May 13 12:56:09.914366 kubelet[2682]: I0513 12:56:09.913548 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c9ef06b7-86c0-4559-a499-7c67baa98761-cni-bin-dir\") pod \"calico-node-b9lf8\" (UID: \"c9ef06b7-86c0-4559-a499-7c67baa98761\") " pod="calico-system/calico-node-b9lf8"
May 13 12:56:09.914366 kubelet[2682]: I0513 12:56:09.913565 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9ef06b7-86c0-4559-a499-7c67baa98761-xtables-lock\") pod \"calico-node-b9lf8\" (UID: \"c9ef06b7-86c0-4559-a499-7c67baa98761\") " pod="calico-system/calico-node-b9lf8"
May 13 12:56:09.914366 kubelet[2682]: I0513 12:56:09.913585 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c9ef06b7-86c0-4559-a499-7c67baa98761-cni-log-dir\") pod \"calico-node-b9lf8\" (UID: \"c9ef06b7-86c0-4559-a499-7c67baa98761\") " pod="calico-system/calico-node-b9lf8"
May 13 12:56:09.914570 kubelet[2682]: I0513 12:56:09.913600 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c9ef06b7-86c0-4559-a499-7c67baa98761-policysync\") pod \"calico-node-b9lf8\" (UID: \"c9ef06b7-86c0-4559-a499-7c67baa98761\") " pod="calico-system/calico-node-b9lf8"
May 13 12:56:09.914570 kubelet[2682]: I0513 12:56:09.913616 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c9ef06b7-86c0-4559-a499-7c67baa98761-node-certs\") pod \"calico-node-b9lf8\" (UID: \"c9ef06b7-86c0-4559-a499-7c67baa98761\") " pod="calico-system/calico-node-b9lf8"
May 13 12:56:09.914570 kubelet[2682]: I0513 12:56:09.913937 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c9ef06b7-86c0-4559-a499-7c67baa98761-cni-net-dir\") pod \"calico-node-b9lf8\" (UID: \"c9ef06b7-86c0-4559-a499-7c67baa98761\") " pod="calico-system/calico-node-b9lf8"
May 13 12:56:09.914570 kubelet[2682]: I0513 12:56:09.914286 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c9ef06b7-86c0-4559-a499-7c67baa98761-var-run-calico\") pod \"calico-node-b9lf8\" (UID: \"c9ef06b7-86c0-4559-a499-7c67baa98761\") " pod="calico-system/calico-node-b9lf8"
May 13 12:56:09.914570 kubelet[2682]: I0513 12:56:09.914390 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9ef06b7-86c0-4559-a499-7c67baa98761-tigera-ca-bundle\") pod \"calico-node-b9lf8\" (UID: \"c9ef06b7-86c0-4559-a499-7c67baa98761\") " pod="calico-system/calico-node-b9lf8"
May 13 12:56:09.914690 kubelet[2682]: I0513 12:56:09.914446 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9ef06b7-86c0-4559-a499-7c67baa98761-lib-modules\") pod \"calico-node-b9lf8\" (UID: \"c9ef06b7-86c0-4559-a499-7c67baa98761\") " pod="calico-system/calico-node-b9lf8"
May 13 12:56:09.914690 kubelet[2682]: I0513 12:56:09.914464 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c9ef06b7-86c0-4559-a499-7c67baa98761-var-lib-calico\") pod \"calico-node-b9lf8\" (UID: \"c9ef06b7-86c0-4559-a499-7c67baa98761\") " pod="calico-system/calico-node-b9lf8"
May 13 12:56:09.914690 kubelet[2682]: I0513 12:56:09.914510 2682 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31f9c550-0d42-4e05-9662-72cf1b1971e6-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
May 13 12:56:09.914690 kubelet[2682]: I0513 12:56:09.914523 2682 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-lib-modules\") on node \"localhost\" DevicePath \"\""
May 13 12:56:09.914690 kubelet[2682]: I0513 12:56:09.914545 2682 reconciler_common.go:299] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-var-run-calico\") on node \"localhost\" DevicePath \"\""
May 13 12:56:09.914690 kubelet[2682]: I0513 12:56:09.914567 2682 reconciler_common.go:299] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-policysync\") on node \"localhost\" DevicePath \"\""
May 13 12:56:09.914690 kubelet[2682]: I0513 12:56:09.914576 2682 reconciler_common.go:299] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-cni-bin-dir\") on node \"localhost\" DevicePath \"\""
May 13 12:56:09.914844 kubelet[2682]: I0513 12:56:09.914585 2682 reconciler_common.go:299] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-cni-net-dir\") on node \"localhost\" DevicePath \"\""
May 13 12:56:09.914844 kubelet[2682]: I0513 12:56:09.914593 2682 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xvn76\" (UniqueName: \"kubernetes.io/projected/31f9c550-0d42-4e05-9662-72cf1b1971e6-kube-api-access-xvn76\") on node \"localhost\" DevicePath \"\""
May 13 12:56:09.914844 kubelet[2682]: I0513 12:56:09.914604 2682 reconciler_common.go:299] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-cni-log-dir\") on node \"localhost\" DevicePath \"\""
May 13 12:56:09.914844 kubelet[2682]: I0513 12:56:09.914612 2682 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 13 12:56:09.914844 kubelet[2682]: I0513 12:56:09.914625 2682 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-var-lib-calico\") on node \"localhost\" DevicePath \"\""
May 13 12:56:09.914844 kubelet[2682]: I0513 12:56:09.914731 2682 reconciler_common.go:299] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/31f9c550-0d42-4e05-9662-72cf1b1971e6-flexvol-driver-host\") on node \"localhost\" DevicePath \"\""
May 13 12:56:09.914844 kubelet[2682]: I0513 12:56:09.914741 2682 reconciler_common.go:299] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/31f9c550-0d42-4e05-9662-72cf1b1971e6-node-certs\") on node \"localhost\" DevicePath \"\""
May 13 12:56:10.127588 kubelet[2682]: E0513 12:56:10.127545 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:56:10.128219 containerd[1557]: time="2025-05-13T12:56:10.128174714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b9lf8,Uid:c9ef06b7-86c0-4559-a499-7c67baa98761,Namespace:calico-system,Attempt:0,}"
May 13 12:56:10.168104 containerd[1557]: time="2025-05-13T12:56:10.167991943Z" level=info msg="connecting to shim a8b9687c915482ab1192a020f5a047aeb226adec6c8a601a8d451174b8db6188" address="unix:///run/containerd/s/4c652f0dca784daf1cfc1ff3741c6e136f84b1eae1824de206d2181ce7519240" namespace=k8s.io protocol=ttrpc version=3
May 13 12:56:10.192271 systemd[1]: Started cri-containerd-a8b9687c915482ab1192a020f5a047aeb226adec6c8a601a8d451174b8db6188.scope - libcontainer container a8b9687c915482ab1192a020f5a047aeb226adec6c8a601a8d451174b8db6188.scope.
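[Editor's note] The recurring dns.go:153 "Nameserver limits exceeded" errors above come from kubelet enforcing the classic glibc resolver limit of three nameserver entries when it builds a pod's resolv.conf; any servers past the first three are dropped, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A hedged Go sketch of that truncation (illustrative; not kubelet's actual code, and the four-server resolv.conf below is hypothetical):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the glibc MAXNS limit of 3 that kubelet applies;
// extra nameservers are omitted with a warning like the one in the log.
const maxNameservers = 3

func applyNameserverLimit(resolvConf string) (applied, omitted []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(applied) < maxNameservers {
				applied = append(applied, fields[1])
			} else {
				omitted = append(omitted, fields[1])
			}
		}
	}
	return applied, omitted
}

func main() {
	// Hypothetical host resolv.conf with one server more than the limit.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	applied, omitted := applyNameserverLimit(conf)
	fmt.Printf("the applied nameserver line is: %s (omitted: %s)\n",
		strings.Join(applied, " "), strings.Join(omitted, " "))
}
```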
May 13 12:56:10.218117 containerd[1557]: time="2025-05-13T12:56:10.218072121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b9lf8,Uid:c9ef06b7-86c0-4559-a499-7c67baa98761,Namespace:calico-system,Attempt:0,} returns sandbox id \"a8b9687c915482ab1192a020f5a047aeb226adec6c8a601a8d451174b8db6188\""
May 13 12:56:10.221776 kubelet[2682]: E0513 12:56:10.221752 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:56:10.223733 containerd[1557]: time="2025-05-13T12:56:10.223708148Z" level=info msg="CreateContainer within sandbox \"a8b9687c915482ab1192a020f5a047aeb226adec6c8a601a8d451174b8db6188\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 13 12:56:10.250608 containerd[1557]: time="2025-05-13T12:56:10.250567507Z" level=info msg="Container 8cce704d8d981dac6c56c159b6c0ed2940d33c48e1881a614e2a08888db89a57: CDI devices from CRI Config.CDIDevices: []"
May 13 12:56:10.260120 containerd[1557]: time="2025-05-13T12:56:10.260051588Z" level=info msg="CreateContainer within sandbox \"a8b9687c915482ab1192a020f5a047aeb226adec6c8a601a8d451174b8db6188\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8cce704d8d981dac6c56c159b6c0ed2940d33c48e1881a614e2a08888db89a57\""
May 13 12:56:10.260543 containerd[1557]: time="2025-05-13T12:56:10.260494326Z" level=info msg="StartContainer for \"8cce704d8d981dac6c56c159b6c0ed2940d33c48e1881a614e2a08888db89a57\""
May 13 12:56:10.261909 containerd[1557]: time="2025-05-13T12:56:10.261880251Z" level=info msg="connecting to shim 8cce704d8d981dac6c56c159b6c0ed2940d33c48e1881a614e2a08888db89a57" address="unix:///run/containerd/s/4c652f0dca784daf1cfc1ff3741c6e136f84b1eae1824de206d2181ce7519240" protocol=ttrpc version=3
May 13 12:56:10.290383 systemd[1]: Started cri-containerd-8cce704d8d981dac6c56c159b6c0ed2940d33c48e1881a614e2a08888db89a57.scope - libcontainer container 8cce704d8d981dac6c56c159b6c0ed2940d33c48e1881a614e2a08888db89a57.scope.
May 13 12:56:10.331045 containerd[1557]: time="2025-05-13T12:56:10.330996683Z" level=info msg="StartContainer for \"8cce704d8d981dac6c56c159b6c0ed2940d33c48e1881a614e2a08888db89a57\" returns successfully"
May 13 12:56:10.348212 systemd[1]: cri-containerd-8cce704d8d981dac6c56c159b6c0ed2940d33c48e1881a614e2a08888db89a57.scope: Deactivated successfully.
May 13 12:56:10.348538 systemd[1]: cri-containerd-8cce704d8d981dac6c56c159b6c0ed2940d33c48e1881a614e2a08888db89a57.scope: Consumed 40ms CPU time, 15.9M memory peak, 7.8M read from disk, 6.3M written to disk.
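[Editor's note] The CreateContainer / StartContainer / exit sequence above is driven by the kubelet over CRI, but the same create-start-wait lifecycle can be exercised directly against containerd's public Go client. A minimal sketch under stated assumptions (a reachable containerd socket at the default path and a busybox image, neither taken from the log):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd the kubelet uses; CRI pods live in the
	// k8s.io namespace, as the "namespace=k8s.io" shim log line shows.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image), oci.WithProcessArgs("true")),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask spawns the shim, Start runs the process, and Wait delivers the
	// exit event, analogous to the "received exit event" records above.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	status := <-exitCh
	code, exitedAt, _ := status.Result()
	fmt.Printf("exit_status:%d exited_at:%s\n", code, exitedAt)
}
```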
May 13 12:56:10.350248 containerd[1557]: time="2025-05-13T12:56:10.350013901Z" level=info msg="received exit event container_id:\"8cce704d8d981dac6c56c159b6c0ed2940d33c48e1881a614e2a08888db89a57\" id:\"8cce704d8d981dac6c56c159b6c0ed2940d33c48e1881a614e2a08888db89a57\" pid:4750 exited_at:{seconds:1747140970 nanos:349720903}"
May 13 12:56:10.350248 containerd[1557]: time="2025-05-13T12:56:10.350107132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8cce704d8d981dac6c56c159b6c0ed2940d33c48e1881a614e2a08888db89a57\" id:\"8cce704d8d981dac6c56c159b6c0ed2940d33c48e1881a614e2a08888db89a57\" pid:4750 exited_at:{seconds:1747140970 nanos:349720903}"
May 13 12:56:10.380463 kubelet[2682]: E0513 12:56:10.380420 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:56:10.393746 systemd[1]: Removed slice kubepods-besteffort-pod31f9c550_0d42_4e05_9662_72cf1b1971e6.slice - libcontainer container kubepods-besteffort-pod31f9c550_0d42_4e05_9662_72cf1b1971e6.slice.
May 13 12:56:10.394048 systemd[1]: kubepods-besteffort-pod31f9c550_0d42_4e05_9662_72cf1b1971e6.slice: Consumed 797ms CPU time, 161.2M memory peak, 16K read from disk, 160.4M written to disk.
May 13 12:56:10.501334 systemd[1]: Started sshd@17-10.0.0.90:22-10.0.0.1:38748.service - OpenSSH per-connection server daemon (10.0.0.1:38748).
May 13 12:56:10.556657 sshd[4785]: Accepted publickey for core from 10.0.0.1 port 38748 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 12:56:10.558103 sshd-session[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:56:10.562370 systemd-logind[1539]: New session 18 of user core.
May 13 12:56:10.573269 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 12:56:10.576133 kubelet[2682]: E0513 12:56:10.576104 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:56:10.578408 containerd[1557]: time="2025-05-13T12:56:10.578321658Z" level=info msg="CreateContainer within sandbox \"a8b9687c915482ab1192a020f5a047aeb226adec6c8a601a8d451174b8db6188\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 13 12:56:10.582920 kubelet[2682]: I0513 12:56:10.582851 2682 scope.go:117] "RemoveContainer" containerID="94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36"
May 13 12:56:10.585426 containerd[1557]: time="2025-05-13T12:56:10.585312410Z" level=info msg="RemoveContainer for \"94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36\""
May 13 12:56:10.591915 containerd[1557]: time="2025-05-13T12:56:10.591807050Z" level=info msg="Container dbdc68a07baa9f6591c8db47a33a4afafdd718146bd124819fc8f2fe96377e4f: CDI devices from CRI Config.CDIDevices: []"
May 13 12:56:10.592649 containerd[1557]: time="2025-05-13T12:56:10.592545281Z" level=info msg="RemoveContainer for \"94dc1d7962871d763bb10cd6a7375648a1058a56b538eac444e9903512ec7c36\" returns successfully"
May 13 12:56:10.592734 kubelet[2682]: I0513 12:56:10.592701 2682 scope.go:117] "RemoveContainer" containerID="db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae"
May 13 12:56:10.598664 containerd[1557]: time="2025-05-13T12:56:10.598537588Z" level=info msg="RemoveContainer for \"db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae\""
May 13 12:56:10.607514 containerd[1557]: time="2025-05-13T12:56:10.607468227Z" level=info msg="CreateContainer within sandbox \"a8b9687c915482ab1192a020f5a047aeb226adec6c8a601a8d451174b8db6188\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dbdc68a07baa9f6591c8db47a33a4afafdd718146bd124819fc8f2fe96377e4f\""
May 13 12:56:10.608005 containerd[1557]: time="2025-05-13T12:56:10.607966623Z" level=info msg="StartContainer for \"dbdc68a07baa9f6591c8db47a33a4afafdd718146bd124819fc8f2fe96377e4f\""
May 13 12:56:10.615962 containerd[1557]: time="2025-05-13T12:56:10.615927664Z" level=info msg="connecting to shim dbdc68a07baa9f6591c8db47a33a4afafdd718146bd124819fc8f2fe96377e4f" address="unix:///run/containerd/s/4c652f0dca784daf1cfc1ff3741c6e136f84b1eae1824de206d2181ce7519240" protocol=ttrpc version=3
May 13 12:56:10.622178 containerd[1557]: time="2025-05-13T12:56:10.621342392Z" level=info msg="RemoveContainer for \"db4703075c3f028f1d61d7750695a3d8c0fa29299e7b3ba4266285dbfea185ae\" returns successfully"
May 13 12:56:10.622236 kubelet[2682]: I0513 12:56:10.621531 2682 scope.go:117] "RemoveContainer" containerID="51d8de80c24a4165b9f870a7fc4435fc9a3cc1f01ae38aca45cb798cadfaab31"
May 13 12:56:10.626802 containerd[1557]: time="2025-05-13T12:56:10.626750668Z" level=info msg="RemoveContainer for \"51d8de80c24a4165b9f870a7fc4435fc9a3cc1f01ae38aca45cb798cadfaab31\""
May 13 12:56:10.632092 containerd[1557]: time="2025-05-13T12:56:10.632056897Z" level=info msg="RemoveContainer for \"51d8de80c24a4165b9f870a7fc4435fc9a3cc1f01ae38aca45cb798cadfaab31\" returns successfully"
May 13 12:56:10.641301 systemd[1]: Started cri-containerd-dbdc68a07baa9f6591c8db47a33a4afafdd718146bd124819fc8f2fe96377e4f.scope - libcontainer container dbdc68a07baa9f6591c8db47a33a4afafdd718146bd124819fc8f2fe96377e4f.scope.
May 13 12:56:10.690488 containerd[1557]: time="2025-05-13T12:56:10.690449738Z" level=info msg="StartContainer for \"dbdc68a07baa9f6591c8db47a33a4afafdd718146bd124819fc8f2fe96377e4f\" returns successfully"
May 13 12:56:10.700782 sshd[4792]: Connection closed by 10.0.0.1 port 38748
May 13 12:56:10.701157 sshd-session[4785]: pam_unix(sshd:session): session closed for user core
May 13 12:56:10.704761 systemd[1]: sshd@17-10.0.0.90:22-10.0.0.1:38748.service: Deactivated successfully.
May 13 12:56:10.707039 systemd[1]: session-18.scope: Deactivated successfully.
May 13 12:56:10.708895 systemd-logind[1539]: Session 18 logged out. Waiting for processes to exit.
May 13 12:56:10.710250 systemd-logind[1539]: Removed session 18.
May 13 12:56:10.790182 containerd[1557]: time="2025-05-13T12:56:10.789907694Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" id:\"f9e9421590f91265ff047eb5254f928ab8416f087b50265cbf3b6abfdfe30116\" pid:3176 exit_status:137 exited_at:{seconds:1747140969 nanos:656882301}"
May 13 12:56:11.046862 systemd[1]: cri-containerd-dbdc68a07baa9f6591c8db47a33a4afafdd718146bd124819fc8f2fe96377e4f.scope: Deactivated successfully.
May 13 12:56:11.047254 systemd[1]: cri-containerd-dbdc68a07baa9f6591c8db47a33a4afafdd718146bd124819fc8f2fe96377e4f.scope: Consumed 639ms CPU time, 113.3M memory peak, 100.8M read from disk.
May 13 12:56:11.047808 containerd[1557]: time="2025-05-13T12:56:11.047723051Z" level=info msg="received exit event container_id:\"dbdc68a07baa9f6591c8db47a33a4afafdd718146bd124819fc8f2fe96377e4f\" id:\"dbdc68a07baa9f6591c8db47a33a4afafdd718146bd124819fc8f2fe96377e4f\" pid:4815 exited_at:{seconds:1747140971 nanos:47481163}"
May 13 12:56:11.048421 containerd[1557]: time="2025-05-13T12:56:11.048190646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbdc68a07baa9f6591c8db47a33a4afafdd718146bd124819fc8f2fe96377e4f\" id:\"dbdc68a07baa9f6591c8db47a33a4afafdd718146bd124819fc8f2fe96377e4f\" pid:4815 exited_at:{seconds:1747140971 nanos:47481163}"
May 13 12:56:11.070806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbdc68a07baa9f6591c8db47a33a4afafdd718146bd124819fc8f2fe96377e4f-rootfs.mount: Deactivated successfully.
May 13 12:56:11.587171 kubelet[2682]: E0513 12:56:11.587085 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:56:11.601059 containerd[1557]: time="2025-05-13T12:56:11.600306170Z" level=info msg="CreateContainer within sandbox \"a8b9687c915482ab1192a020f5a047aeb226adec6c8a601a8d451174b8db6188\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
May 13 12:56:11.611973 containerd[1557]: time="2025-05-13T12:56:11.611926599Z" level=info msg="Container f1e9f5bdcab78643f7b4c4b99dfc386d283dab99aea673955aa8b97f68c61944: CDI devices from CRI Config.CDIDevices: []"
May 13 12:56:11.622268 containerd[1557]: time="2025-05-13T12:56:11.622227796Z" level=info msg="CreateContainer within sandbox \"a8b9687c915482ab1192a020f5a047aeb226adec6c8a601a8d451174b8db6188\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f1e9f5bdcab78643f7b4c4b99dfc386d283dab99aea673955aa8b97f68c61944\""
May 13 12:56:11.622691 containerd[1557]: time="2025-05-13T12:56:11.622672616Z" level=info msg="StartContainer for \"f1e9f5bdcab78643f7b4c4b99dfc386d283dab99aea673955aa8b97f68c61944\""
May 13 12:56:11.624022 containerd[1557]: time="2025-05-13T12:56:11.623995588Z" level=info msg="connecting to shim f1e9f5bdcab78643f7b4c4b99dfc386d283dab99aea673955aa8b97f68c61944" address="unix:///run/containerd/s/4c652f0dca784daf1cfc1ff3741c6e136f84b1eae1824de206d2181ce7519240" protocol=ttrpc version=3
May 13 12:56:11.651493 systemd[1]: Started cri-containerd-f1e9f5bdcab78643f7b4c4b99dfc386d283dab99aea673955aa8b97f68c61944.scope - libcontainer container f1e9f5bdcab78643f7b4c4b99dfc386d283dab99aea673955aa8b97f68c61944.scope.
May 13 12:56:11.719071 containerd[1557]: time="2025-05-13T12:56:11.719013814Z" level=info msg="StartContainer for \"f1e9f5bdcab78643f7b4c4b99dfc386d283dab99aea673955aa8b97f68c61944\" returns successfully"
May 13 12:56:12.382655 kubelet[2682]: I0513 12:56:12.382603 2682 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31f9c550-0d42-4e05-9662-72cf1b1971e6" path="/var/lib/kubelet/pods/31f9c550-0d42-4e05-9662-72cf1b1971e6/volumes"
May 13 12:56:12.594901 kubelet[2682]: E0513 12:56:12.594795 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:56:12.662973 containerd[1557]: time="2025-05-13T12:56:12.662860666Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1e9f5bdcab78643f7b4c4b99dfc386d283dab99aea673955aa8b97f68c61944\" id:\"40108803f5c5d2fef1e6b5adf3758e6b3728cad4bd3df0a2287b6897308856f1\" pid:4916 exit_status:1 exited_at:{seconds:1747140972 nanos:662568231}"
May 13 12:56:13.376886 systemd-networkd[1488]: vxlan.calico: Link UP
May 13 12:56:13.376895 systemd-networkd[1488]: vxlan.calico: Gained carrier
May 13 12:56:13.380188 kubelet[2682]: E0513 12:56:13.380159 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:56:13.380878 containerd[1557]: time="2025-05-13T12:56:13.380602296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xlq4k,Uid:15b8d047-8ef6-4678-b676-93259a433fcd,Namespace:kube-system,Attempt:0,}"
May 13 12:56:13.500175 systemd-networkd[1488]: cali84e46be32e1: Link UP
May 13 12:56:13.500639 systemd-networkd[1488]: cali84e46be32e1: Gained carrier
May 13 12:56:13.512962 kubelet[2682]: I0513 12:56:13.512905 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-b9lf8" podStartSLOduration=4.512881452 podStartE2EDuration="4.512881452s" podCreationTimestamp="2025-05-13 12:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:56:12.760302343 +0000 UTC m=+74.476956080" watchObservedRunningTime="2025-05-13 12:56:13.512881452 +0000 UTC m=+75.229535189"
May 13 12:56:13.516451 containerd[1557]: 2025-05-13 12:56:13.434 [INFO][5092] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--xlq4k-eth0 coredns-668d6bf9bc- kube-system 15b8d047-8ef6-4678-b676-93259a433fcd 709 0 2025-05-13 12:55:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-xlq4k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali84e46be32e1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" Namespace="kube-system" Pod="coredns-668d6bf9bc-xlq4k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xlq4k-"
May 13 12:56:13.516451 containerd[1557]: 2025-05-13 12:56:13.434 [INFO][5092] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" Namespace="kube-system" Pod="coredns-668d6bf9bc-xlq4k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xlq4k-eth0"
May 13 12:56:13.516451 containerd[1557]: 2025-05-13 12:56:13.464 [INFO][5115] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" HandleID="k8s-pod-network.662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" Workload="localhost-k8s-coredns--668d6bf9bc--xlq4k-eth0"
May 13 12:56:13.516646 containerd[1557]: 2025-05-13 12:56:13.471 [INFO][5115] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" HandleID="k8s-pod-network.662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" Workload="localhost-k8s-coredns--668d6bf9bc--xlq4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000362270), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-xlq4k", "timestamp":"2025-05-13 12:56:13.464343138 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 13 12:56:13.516646 containerd[1557]: 2025-05-13 12:56:13.471 [INFO][5115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 13 12:56:13.516646 containerd[1557]: 2025-05-13 12:56:13.471 [INFO][5115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 12:56:13.516646 containerd[1557]: 2025-05-13 12:56:13.471 [INFO][5115] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 13 12:56:13.516646 containerd[1557]: 2025-05-13 12:56:13.473 [INFO][5115] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" host="localhost"
May 13 12:56:13.516646 containerd[1557]: 2025-05-13 12:56:13.476 [INFO][5115] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 13 12:56:13.516646 containerd[1557]: 2025-05-13 12:56:13.479 [INFO][5115] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 13 12:56:13.516646 containerd[1557]: 2025-05-13 12:56:13.481 [INFO][5115] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 13 12:56:13.516646 containerd[1557]: 2025-05-13 12:56:13.482 [INFO][5115] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 13 12:56:13.516646 containerd[1557]: 2025-05-13 12:56:13.482 [INFO][5115] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" host="localhost"
May 13 12:56:13.516886 containerd[1557]: 2025-05-13 12:56:13.484 [INFO][5115] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f
May 13 12:56:13.516886 containerd[1557]: 2025-05-13 12:56:13.487 [INFO][5115] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" host="localhost"
May 13 12:56:13.516886 containerd[1557]: 2025-05-13 12:56:13.492 [INFO][5115] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" host="localhost"
May 13 12:56:13.516886 containerd[1557]: 2025-05-13 12:56:13.492 [INFO][5115] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" host="localhost"
May 13 12:56:13.516886 containerd[1557]: 2025-05-13 12:56:13.492 [INFO][5115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
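[Editor's note] The IPAM transaction above claims 192.168.88.129/26 out of the block 192.168.88.128/26 that Calico has pinned (an "affinity") to this host. A /26 leaves 6 host bits, i.e. 64 addresses (.128 through .191), handed out in order. A small stdlib Go sketch of that block arithmetic (the CIDR is copied from the log; the iteration is the editor's illustration):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The host-affine block Calico loaded above.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// 32 - 26 = 6 host bits, so the block holds 64 addresses.
	n := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses\n", block, n)

	// Walk the first few candidates; .129, .130 and .131 are exactly the
	// pod IPs assigned to coredns-668d6bf9bc-xlq4k, coredns-668d6bf9bc-fvsq7
	// and calico-kube-controllers later in this log.
	addr := block.Addr()
	for i := 0; i < 4; i++ {
		fmt.Printf("candidate %d: %s (in block: %v)\n", i, addr, block.Contains(addr))
		addr = addr.Next()
	}
}
```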
May 13 12:56:13.516886 containerd[1557]: 2025-05-13 12:56:13.492 [INFO][5115] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" HandleID="k8s-pod-network.662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" Workload="localhost-k8s-coredns--668d6bf9bc--xlq4k-eth0"
May 13 12:56:13.517005 containerd[1557]: 2025-05-13 12:56:13.497 [INFO][5092] cni-plugin/k8s.go 386: Populated endpoint ContainerID="662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" Namespace="kube-system" Pod="coredns-668d6bf9bc-xlq4k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xlq4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xlq4k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"15b8d047-8ef6-4678-b676-93259a433fcd", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 55, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-xlq4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84e46be32e1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 12:56:13.517067 containerd[1557]: 2025-05-13 12:56:13.497 [INFO][5092] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" Namespace="kube-system" Pod="coredns-668d6bf9bc-xlq4k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xlq4k-eth0"
May 13 12:56:13.517067 containerd[1557]: 2025-05-13 12:56:13.497 [INFO][5092] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84e46be32e1 ContainerID="662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" Namespace="kube-system" Pod="coredns-668d6bf9bc-xlq4k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xlq4k-eth0"
May 13 12:56:13.517067 containerd[1557]: 2025-05-13 12:56:13.501 [INFO][5092] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" Namespace="kube-system" Pod="coredns-668d6bf9bc-xlq4k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xlq4k-eth0"
May 13 12:56:13.517165 containerd[1557]: 2025-05-13 12:56:13.501 [INFO][5092] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" Namespace="kube-system" Pod="coredns-668d6bf9bc-xlq4k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xlq4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xlq4k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"15b8d047-8ef6-4678-b676-93259a433fcd", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 55, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f", Pod:"coredns-668d6bf9bc-xlq4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84e46be32e1", MAC:"fe:0d:75:28:ed:a9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 12:56:13.517165 containerd[1557]: 2025-05-13 12:56:13.513 [INFO][5092] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" Namespace="kube-system" Pod="coredns-668d6bf9bc-xlq4k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xlq4k-eth0"
May 13 12:56:13.596069 kubelet[2682]: E0513 12:56:13.596036 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:56:13.624771 containerd[1557]: time="2025-05-13T12:56:13.624711837Z" level=info msg="connecting to shim 662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f" address="unix:///run/containerd/s/8f3833a7fb59a964abd279e57a10848de581c3d6d1a2013c1a98d4328743405c" namespace=k8s.io protocol=ttrpc version=3
May 13 12:56:13.665467 systemd[1]: Started cri-containerd-662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f.scope - libcontainer container 662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f.scope.
May 13 12:56:13.683244 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 12:56:13.688664 containerd[1557]: time="2025-05-13T12:56:13.688591423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1e9f5bdcab78643f7b4c4b99dfc386d283dab99aea673955aa8b97f68c61944\" id:\"f1c482964ffb1857e8d43d49b7ff8ad75325fe05205f57a2f1c757f11a5c4ddc\" pid:5146 exit_status:1 exited_at:{seconds:1747140973 nanos:688105854}"
May 13 12:56:13.763379 containerd[1557]: time="2025-05-13T12:56:13.763329770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xlq4k,Uid:15b8d047-8ef6-4678-b676-93259a433fcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f\""
May 13 12:56:13.764126 kubelet[2682]: E0513 12:56:13.764093 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:56:13.765781 containerd[1557]: time="2025-05-13T12:56:13.765754384Z" level=info msg="CreateContainer within sandbox \"662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 12:56:13.830457 containerd[1557]: time="2025-05-13T12:56:13.830392576Z" level=info msg="Container 99469464e287368d1b4a0aca25a39f4b5df80692095bec2118c3346f9e15fe17: CDI devices from CRI Config.CDIDevices: []"
May 13 12:56:13.830661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1104645644.mount: Deactivated successfully.
May 13 12:56:13.837209 containerd[1557]: time="2025-05-13T12:56:13.837166765Z" level=info msg="CreateContainer within sandbox \"662217e5bbadb6a95a8c18649d006e52d865c2a40fe20a5ceb2b1b2952ef4e2f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"99469464e287368d1b4a0aca25a39f4b5df80692095bec2118c3346f9e15fe17\""
May 13 12:56:13.837657 containerd[1557]: time="2025-05-13T12:56:13.837629038Z" level=info msg="StartContainer for \"99469464e287368d1b4a0aca25a39f4b5df80692095bec2118c3346f9e15fe17\""
May 13 12:56:13.838369 containerd[1557]: time="2025-05-13T12:56:13.838347185Z" level=info msg="connecting to shim 99469464e287368d1b4a0aca25a39f4b5df80692095bec2118c3346f9e15fe17" address="unix:///run/containerd/s/8f3833a7fb59a964abd279e57a10848de581c3d6d1a2013c1a98d4328743405c" protocol=ttrpc version=3
May 13 12:56:13.865405 systemd[1]: Started cri-containerd-99469464e287368d1b4a0aca25a39f4b5df80692095bec2118c3346f9e15fe17.scope - libcontainer container 99469464e287368d1b4a0aca25a39f4b5df80692095bec2118c3346f9e15fe17.scope.
May 13 12:56:13.903112 containerd[1557]: time="2025-05-13T12:56:13.903071654Z" level=info msg="StartContainer for \"99469464e287368d1b4a0aca25a39f4b5df80692095bec2118c3346f9e15fe17\" returns successfully"
May 13 12:56:14.382582 kubelet[2682]: E0513 12:56:14.382549 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:56:14.398675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2065653928.mount: Deactivated successfully.
May 13 12:56:14.600939 kubelet[2682]: E0513 12:56:14.600890 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:56:14.615868 kubelet[2682]: I0513 12:56:14.615727 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xlq4k" podStartSLOduration=71.615706961 podStartE2EDuration="1m11.615706961s" podCreationTimestamp="2025-05-13 12:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:56:14.614450656 +0000 UTC m=+76.331104413" watchObservedRunningTime="2025-05-13 12:56:14.615706961 +0000 UTC m=+76.332360698"
May 13 12:56:14.724334 systemd-networkd[1488]: cali84e46be32e1: Gained IPv6LL
May 13 12:56:15.364314 systemd-networkd[1488]: vxlan.calico: Gained IPv6LL
May 13 12:56:15.602642 kubelet[2682]: E0513 12:56:15.602613 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:56:15.718522 systemd[1]: Started sshd@18-10.0.0.90:22-10.0.0.1:38752.service - OpenSSH per-connection server daemon (10.0.0.1:38752).
May 13 12:56:15.786369 sshd[5279]: Accepted publickey for core from 10.0.0.1 port 38752 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 12:56:15.788276 sshd-session[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:56:15.792697 systemd-logind[1539]: New session 19 of user core.
May 13 12:56:15.802256 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 12:56:15.919825 sshd[5281]: Connection closed by 10.0.0.1 port 38752
May 13 12:56:15.920171 sshd-session[5279]: pam_unix(sshd:session): session closed for user core
May 13 12:56:15.923185 systemd[1]: sshd@18-10.0.0.90:22-10.0.0.1:38752.service: Deactivated successfully.
May 13 12:56:15.925320 systemd[1]: session-19.scope: Deactivated successfully.
May 13 12:56:15.926796 systemd-logind[1539]: Session 19 logged out. Waiting for processes to exit.
May 13 12:56:15.928022 systemd-logind[1539]: Removed session 19.
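[Editor's note] With coredns-668d6bf9bc-xlq4k running on 192.168.88.129 (serving dns on 53/UDP-TCP and metrics on 9153 per its workload endpoint), the pod can be queried directly, bypassing the node's limited resolv.conf. A hedged Go sketch pinning a net.Resolver to that pod IP (the IP and port come from the log; the query name is a hypothetical in-cluster record):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// Send queries straight to the coredns pod at the IP Calico assigned,
	// ignoring /etc/resolv.conf entirely.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "192.168.88.129:53")
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	// Hypothetical in-cluster name; any record coredns serves would do.
	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local.")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(addrs)
}
```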
May 13 12:56:16.379969 kubelet[2682]: E0513 12:56:16.379935 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:56:16.380734 containerd[1557]: time="2025-05-13T12:56:16.380380479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fvsq7,Uid:4faa16ac-8041-4063-89da-2ef0847f8c7d,Namespace:kube-system,Attempt:0,}"
May 13 12:56:16.511979 systemd-networkd[1488]: cali1028224dadc: Link UP
May 13 12:56:16.512871 systemd-networkd[1488]: cali1028224dadc: Gained carrier
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.422 [INFO][5294] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--fvsq7-eth0 coredns-668d6bf9bc- kube-system 4faa16ac-8041-4063-89da-2ef0847f8c7d 708 0 2025-05-13 12:55:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-fvsq7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1028224dadc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" Namespace="kube-system" Pod="coredns-668d6bf9bc-fvsq7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fvsq7-"
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.422 [INFO][5294] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" Namespace="kube-system" Pod="coredns-668d6bf9bc-fvsq7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fvsq7-eth0"
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.449 [INFO][5309] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" HandleID="k8s-pod-network.b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" Workload="localhost-k8s-coredns--668d6bf9bc--fvsq7-eth0"
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.456 [INFO][5309] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" HandleID="k8s-pod-network.b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" Workload="localhost-k8s-coredns--668d6bf9bc--fvsq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019d6a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-fvsq7", "timestamp":"2025-05-13 12:56:16.449651013 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.456 [INFO][5309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.456 [INFO][5309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.456 [INFO][5309] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.458 [INFO][5309] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" host="localhost"
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.461 [INFO][5309] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.465 [INFO][5309] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.466 [INFO][5309] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.468 [INFO][5309] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.468 [INFO][5309] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" host="localhost"
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.469 [INFO][5309] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.473 [INFO][5309] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" host="localhost"
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.487 [INFO][5309] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" host="localhost"
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.487 [INFO][5309] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" host="localhost"
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.487 [INFO][5309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 13 12:56:16.528072 containerd[1557]: 2025-05-13 12:56:16.487 [INFO][5309] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" HandleID="k8s-pod-network.b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" Workload="localhost-k8s-coredns--668d6bf9bc--fvsq7-eth0"
May 13 12:56:16.528836 containerd[1557]: 2025-05-13 12:56:16.498 [INFO][5294] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" Namespace="kube-system" Pod="coredns-668d6bf9bc-fvsq7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fvsq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--fvsq7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4faa16ac-8041-4063-89da-2ef0847f8c7d", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 55, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-fvsq7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1028224dadc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 12:56:16.528836 containerd[1557]: 2025-05-13 12:56:16.498 [INFO][5294] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" Namespace="kube-system" Pod="coredns-668d6bf9bc-fvsq7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fvsq7-eth0"
May 13 12:56:16.528836 containerd[1557]: 2025-05-13 12:56:16.498 [INFO][5294] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1028224dadc ContainerID="b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" Namespace="kube-system" Pod="coredns-668d6bf9bc-fvsq7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fvsq7-eth0"
May 13 12:56:16.528836 containerd[1557]: 2025-05-13 12:56:16.513 [INFO][5294] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" Namespace="kube-system" Pod="coredns-668d6bf9bc-fvsq7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fvsq7-eth0"
May 13 12:56:16.528836 containerd[1557]: 2025-05-13 12:56:16.513 [INFO][5294] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" Namespace="kube-system" Pod="coredns-668d6bf9bc-fvsq7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fvsq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--fvsq7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4faa16ac-8041-4063-89da-2ef0847f8c7d", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 55, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b", Pod:"coredns-668d6bf9bc-fvsq7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1028224dadc", MAC:"46:11:e0:ab:3c:5b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 12:56:16.528836 containerd[1557]: 2025-05-13 12:56:16.523 [INFO][5294] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" Namespace="kube-system" Pod="coredns-668d6bf9bc-fvsq7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fvsq7-eth0"
May 13 12:56:16.559234 containerd[1557]: time="2025-05-13T12:56:16.559113966Z" level=info msg="connecting to shim b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b" address="unix:///run/containerd/s/dbad368dc67fc4339cddbbf700193416c751188d0c4b663af8a5a54bea5aebad" namespace=k8s.io protocol=ttrpc version=3
May 13 12:56:16.590398 systemd[1]: Started cri-containerd-b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b.scope - libcontainer container b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b.scope.
May 13 12:56:16.604108 kubelet[2682]: E0513 12:56:16.604066 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:56:16.604641 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 12:56:16.634384 containerd[1557]: time="2025-05-13T12:56:16.633230421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fvsq7,Uid:4faa16ac-8041-4063-89da-2ef0847f8c7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b\""
May 13 12:56:16.634539 kubelet[2682]: E0513 12:56:16.634082 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:56:16.641731 containerd[1557]: time="2025-05-13T12:56:16.641683236Z" level=info msg="CreateContainer within sandbox \"b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 12:56:16.652295 containerd[1557]: time="2025-05-13T12:56:16.652265655Z" level=info msg="Container c09789cd8258d8655b97bc8e3f09e293dd530737112cf387d0325300e4e0e5b3: CDI devices from CRI Config.CDIDevices: []"
May 13 12:56:16.658420 containerd[1557]: time="2025-05-13T12:56:16.658388669Z" level=info msg="CreateContainer within sandbox \"b46ef9efecc075252a69ed1032a2b48fab91ab71a62f729973f5e8524e5e903b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c09789cd8258d8655b97bc8e3f09e293dd530737112cf387d0325300e4e0e5b3\""
May 13 12:56:16.658791 containerd[1557]: time="2025-05-13T12:56:16.658763942Z" level=info msg="StartContainer for \"c09789cd8258d8655b97bc8e3f09e293dd530737112cf387d0325300e4e0e5b3\""
May 13 12:56:16.659599 containerd[1557]: time="2025-05-13T12:56:16.659576178Z" level=info msg="connecting to shim c09789cd8258d8655b97bc8e3f09e293dd530737112cf387d0325300e4e0e5b3" address="unix:///run/containerd/s/dbad368dc67fc4339cddbbf700193416c751188d0c4b663af8a5a54bea5aebad" protocol=ttrpc version=3
May 13 12:56:16.683307 systemd[1]: Started cri-containerd-c09789cd8258d8655b97bc8e3f09e293dd530737112cf387d0325300e4e0e5b3.scope - libcontainer container c09789cd8258d8655b97bc8e3f09e293dd530737112cf387d0325300e4e0e5b3.scope.
May 13 12:56:16.712639 containerd[1557]: time="2025-05-13T12:56:16.712604401Z" level=info msg="StartContainer for \"c09789cd8258d8655b97bc8e3f09e293dd530737112cf387d0325300e4e0e5b3\" returns successfully"
May 13 12:56:17.379718 containerd[1557]: time="2025-05-13T12:56:17.379667920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-857fbf49df-bgllm,Uid:c197e0bf-0648-47d6-b266-361e6fefface,Namespace:calico-system,Attempt:0,}"
May 13 12:56:17.482368 systemd-networkd[1488]: cali48066e34173: Link UP
May 13 12:56:17.483182 systemd-networkd[1488]: cali48066e34173: Gained carrier
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.413 [INFO][5411] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--857fbf49df--bgllm-eth0 calico-kube-controllers-857fbf49df- calico-system c197e0bf-0648-47d6-b266-361e6fefface 707 0 2025-05-13 12:55:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:857fbf49df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-857fbf49df-bgllm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali48066e34173 [] []}} ContainerID="4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" Namespace="calico-system" Pod="calico-kube-controllers-857fbf49df-bgllm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--857fbf49df--bgllm-"
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.413 [INFO][5411] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" Namespace="calico-system" Pod="calico-kube-controllers-857fbf49df-bgllm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--857fbf49df--bgllm-eth0"
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.443 [INFO][5426] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" HandleID="k8s-pod-network.4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" Workload="localhost-k8s-calico--kube--controllers--857fbf49df--bgllm-eth0"
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.451 [INFO][5426] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" HandleID="k8s-pod-network.4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" Workload="localhost-k8s-calico--kube--controllers--857fbf49df--bgllm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030bd80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-857fbf49df-bgllm", "timestamp":"2025-05-13 12:56:17.442984117 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.451 [INFO][5426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.451 [INFO][5426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.451 [INFO][5426] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.453 [INFO][5426] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" host="localhost"
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.456 [INFO][5426] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.460 [INFO][5426] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.462 [INFO][5426] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.464 [INFO][5426] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.464 [INFO][5426] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" host="localhost"
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.465 [INFO][5426] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.470 [INFO][5426] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" host="localhost"
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.475 [INFO][5426] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" host="localhost"
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.475 [INFO][5426] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" host="localhost"
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.475 [INFO][5426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 13 12:56:17.493543 containerd[1557]: 2025-05-13 12:56:17.475 [INFO][5426] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" HandleID="k8s-pod-network.4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" Workload="localhost-k8s-calico--kube--controllers--857fbf49df--bgllm-eth0" May 13 12:56:17.494515 containerd[1557]: 2025-05-13 12:56:17.480 [INFO][5411] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" Namespace="calico-system" Pod="calico-kube-controllers-857fbf49df-bgllm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--857fbf49df--bgllm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--857fbf49df--bgllm-eth0", GenerateName:"calico-kube-controllers-857fbf49df-", Namespace:"calico-system", SelfLink:"", UID:"c197e0bf-0648-47d6-b266-361e6fefface", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 55, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"857fbf49df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-857fbf49df-bgllm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali48066e34173", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:56:17.494515 containerd[1557]: 2025-05-13 12:56:17.480 [INFO][5411] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" Namespace="calico-system" Pod="calico-kube-controllers-857fbf49df-bgllm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--857fbf49df--bgllm-eth0" May 13 12:56:17.494515 containerd[1557]: 2025-05-13 12:56:17.480 [INFO][5411] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48066e34173 ContainerID="4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" Namespace="calico-system" Pod="calico-kube-controllers-857fbf49df-bgllm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--857fbf49df--bgllm-eth0" May 13 12:56:17.494515 containerd[1557]: 2025-05-13 12:56:17.482 [INFO][5411] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" Namespace="calico-system" Pod="calico-kube-controllers-857fbf49df-bgllm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--857fbf49df--bgllm-eth0" May 13 12:56:17.494515 containerd[1557]: 2025-05-13 12:56:17.483 [INFO][5411] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" Namespace="calico-system" Pod="calico-kube-controllers-857fbf49df-bgllm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--857fbf49df--bgllm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--857fbf49df--bgllm-eth0", GenerateName:"calico-kube-controllers-857fbf49df-", Namespace:"calico-system", SelfLink:"", UID:"c197e0bf-0648-47d6-b266-361e6fefface", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 55, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"857fbf49df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37", Pod:"calico-kube-controllers-857fbf49df-bgllm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali48066e34173", MAC:"4e:a0:19:f8:0c:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:56:17.494515 containerd[1557]: 2025-05-13 12:56:17.490 [INFO][5411] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" Namespace="calico-system" Pod="calico-kube-controllers-857fbf49df-bgllm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--857fbf49df--bgllm-eth0" May 13 12:56:17.521093 containerd[1557]: time="2025-05-13T12:56:17.521052612Z" level=info msg="connecting to shim 4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37" address="unix:///run/containerd/s/680e14b46c8cdf3742f3c2480ed758d66cb30499941bead6e0ea4f4d2b744ae9" namespace=k8s.io protocol=ttrpc version=3 May 13 12:56:17.554313 systemd[1]: Started cri-containerd-4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37.scope - libcontainer container 4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37. 
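The [5426] ipam lines above trace Calico's block-affinity allocation for the calico-kube-controllers pod: take the host-wide IPAM lock, confirm that the 192.168.88.128/26 block is affine to this host, load the block, and claim the next free address in it (192.168.88.131). The following is a minimal Go sketch of that pattern; the types and names are illustrative stand-ins, not Calico's actual API.

package main

import (
	"fmt"
	"net"
)

// ipamBlock models one /26 allocation block with a host affinity, mirroring
// what ipam.go loads after "Trying affinity for 192.168.88.128/26".
type ipamBlock struct {
	cidr      *net.IPNet
	affinity  string          // host this block is affine to
	allocated map[string]bool // addresses already claimed
}

// nextIP returns the address one past ip (a fresh slice, not a mutation).
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

// autoAssign mirrors the logged flow: confirm the affinity, then walk the
// block for the first unclaimed address.
func (b *ipamBlock) autoAssign(host string) (net.IP, error) {
	if b.affinity != host {
		return nil, fmt.Errorf("block %s is not affine to host %q", b.cidr, host)
	}
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = nextIP(ip) {
		if !b.allocated[ip.String()] {
			b.allocated[ip.String()] = true
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s is exhausted", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &ipamBlock{cidr: cidr, affinity: "localhost", allocated: map[string]bool{
		"192.168.88.128": true, // .128-.130 were handed out earlier in this journal
		"192.168.88.129": true,
		"192.168.88.130": true,
	}}
	ip, err := b.autoAssign("localhost")
	fmt.Println(ip, err) // prints 192.168.88.131 <nil>, matching the log above
}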
May 13 12:56:17.566410 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:56:17.606975 kubelet[2682]: E0513 12:56:17.606942 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:56:17.630436 containerd[1557]: time="2025-05-13T12:56:17.630323593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-857fbf49df-bgllm,Uid:c197e0bf-0648-47d6-b266-361e6fefface,Namespace:calico-system,Attempt:0,} returns sandbox id \"4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37\"" May 13 12:56:17.634406 containerd[1557]: time="2025-05-13T12:56:17.634381048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 12:56:17.890867 kubelet[2682]: I0513 12:56:17.890481 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fvsq7" podStartSLOduration=74.890463023 podStartE2EDuration="1m14.890463023s" podCreationTimestamp="2025-05-13 12:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:56:17.702829021 +0000 UTC m=+79.419482758" watchObservedRunningTime="2025-05-13 12:56:17.890463023 +0000 UTC m=+79.607116760" May 13 12:56:17.988360 systemd-networkd[1488]: cali1028224dadc: Gained IPv6LL May 13 12:56:18.380401 kubelet[2682]: E0513 12:56:18.380249 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:56:18.380401 kubelet[2682]: E0513 12:56:18.380379 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:56:18.380894 containerd[1557]: time="2025-05-13T12:56:18.380820004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-7jjmz,Uid:69edeedd-4476-4240-999e-ba555f61eb5e,Namespace:calico-apiserver,Attempt:0,}" May 13 12:56:18.481436 systemd-networkd[1488]: cali2e0747770c5: Link UP May 13 12:56:18.482041 systemd-networkd[1488]: cali2e0747770c5: Gained carrier May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.418 [INFO][5498] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5559745f68--7jjmz-eth0 calico-apiserver-5559745f68- calico-apiserver 69edeedd-4476-4240-999e-ba555f61eb5e 702 0 2025-05-13 12:55:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5559745f68 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5559745f68-7jjmz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2e0747770c5 [] []}} ContainerID="28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" Namespace="calico-apiserver" Pod="calico-apiserver-5559745f68-7jjmz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5559745f68--7jjmz-" May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.418 [INFO][5498] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" Namespace="calico-apiserver" Pod="calico-apiserver-5559745f68-7jjmz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5559745f68--7jjmz-eth0" May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.447 [INFO][5516] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" HandleID="k8s-pod-network.28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" Workload="localhost-k8s-calico--apiserver--5559745f68--7jjmz-eth0" May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.454 [INFO][5516] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" HandleID="k8s-pod-network.28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" Workload="localhost-k8s-calico--apiserver--5559745f68--7jjmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027fdc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5559745f68-7jjmz", "timestamp":"2025-05-13 12:56:18.447066236 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.454 [INFO][5516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.454 [INFO][5516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.454 [INFO][5516] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.456 [INFO][5516] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" host="localhost" May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.460 [INFO][5516] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.463 [INFO][5516] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.465 [INFO][5516] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.466 [INFO][5516] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.466 [INFO][5516] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" host="localhost" May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.468 [INFO][5516] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847 May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.472 [INFO][5516] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" host="localhost" May 13 12:56:18.494007 containerd[1557]: 2025-05-13 
12:56:18.476 [INFO][5516] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" host="localhost" May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.477 [INFO][5516] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" host="localhost" May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.477 [INFO][5516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 12:56:18.494007 containerd[1557]: 2025-05-13 12:56:18.477 [INFO][5516] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" HandleID="k8s-pod-network.28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" Workload="localhost-k8s-calico--apiserver--5559745f68--7jjmz-eth0" May 13 12:56:18.496788 containerd[1557]: 2025-05-13 12:56:18.479 [INFO][5498] cni-plugin/k8s.go 386: Populated endpoint ContainerID="28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" Namespace="calico-apiserver" Pod="calico-apiserver-5559745f68-7jjmz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5559745f68--7jjmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5559745f68--7jjmz-eth0", GenerateName:"calico-apiserver-5559745f68-", Namespace:"calico-apiserver", SelfLink:"", UID:"69edeedd-4476-4240-999e-ba555f61eb5e", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 55, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5559745f68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5559745f68-7jjmz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e0747770c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:56:18.496788 containerd[1557]: 2025-05-13 12:56:18.479 [INFO][5498] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" Namespace="calico-apiserver" Pod="calico-apiserver-5559745f68-7jjmz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5559745f68--7jjmz-eth0" May 13 12:56:18.496788 containerd[1557]: 2025-05-13 12:56:18.479 [INFO][5498] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e0747770c5 ContainerID="28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" Namespace="calico-apiserver" Pod="calico-apiserver-5559745f68-7jjmz" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5559745f68--7jjmz-eth0" May 13 12:56:18.496788 containerd[1557]: 2025-05-13 12:56:18.482 [INFO][5498] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" Namespace="calico-apiserver" Pod="calico-apiserver-5559745f68-7jjmz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5559745f68--7jjmz-eth0" May 13 12:56:18.496788 containerd[1557]: 2025-05-13 12:56:18.482 [INFO][5498] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" Namespace="calico-apiserver" Pod="calico-apiserver-5559745f68-7jjmz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5559745f68--7jjmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5559745f68--7jjmz-eth0", GenerateName:"calico-apiserver-5559745f68-", Namespace:"calico-apiserver", SelfLink:"", UID:"69edeedd-4476-4240-999e-ba555f61eb5e", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 55, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5559745f68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847", Pod:"calico-apiserver-5559745f68-7jjmz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e0747770c5", MAC:"d6:fe:5a:26:cd:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:56:18.496788 containerd[1557]: 2025-05-13 12:56:18.490 [INFO][5498] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" Namespace="calico-apiserver" Pod="calico-apiserver-5559745f68-7jjmz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5559745f68--7jjmz-eth0" May 13 12:56:18.520401 containerd[1557]: time="2025-05-13T12:56:18.520353231Z" level=info msg="connecting to shim 28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847" address="unix:///run/containerd/s/be5b0a87ed158ebb47c30e57ad905dc869241b89015919f2b7033533ca9b4c0f" namespace=k8s.io protocol=ttrpc version=3 May 13 12:56:18.549262 systemd[1]: Started cri-containerd-28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847.scope - libcontainer container 28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847. 
May 13 12:56:18.562313 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:56:18.595854 containerd[1557]: time="2025-05-13T12:56:18.595801807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-7jjmz,Uid:69edeedd-4476-4240-999e-ba555f61eb5e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847\"" May 13 12:56:18.610615 kubelet[2682]: E0513 12:56:18.610555 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:56:19.460451 systemd-networkd[1488]: cali48066e34173: Gained IPv6LL May 13 12:56:19.611919 kubelet[2682]: E0513 12:56:19.611876 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:56:20.228335 systemd-networkd[1488]: cali2e0747770c5: Gained IPv6LL May 13 12:56:20.380387 containerd[1557]: time="2025-05-13T12:56:20.380342720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct2sc,Uid:99af3312-c9d6-477a-83b3-e903dd409646,Namespace:calico-system,Attempt:0,}" May 13 12:56:20.380909 containerd[1557]: time="2025-05-13T12:56:20.380471828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-rjh79,Uid:1223f4b3-ae3d-43b8-824a-6a7efb5e24c8,Namespace:calico-apiserver,Attempt:0,}" May 13 12:56:20.792787 systemd-networkd[1488]: cali0f9046696f0: Link UP May 13 12:56:20.795065 systemd-networkd[1488]: cali0f9046696f0: Gained carrier May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.701 [INFO][5598] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ct2sc-eth0 csi-node-driver- calico-system 99af3312-c9d6-477a-83b3-e903dd409646 592 0 2025-05-13 12:55:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ct2sc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0f9046696f0 [] []}} ContainerID="2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" Namespace="calico-system" Pod="csi-node-driver-ct2sc" WorkloadEndpoint="localhost-k8s-csi--node--driver--ct2sc-" May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.701 [INFO][5598] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" Namespace="calico-system" Pod="csi-node-driver-ct2sc" WorkloadEndpoint="localhost-k8s-csi--node--driver--ct2sc-eth0" May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.743 [INFO][5634] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" HandleID="k8s-pod-network.2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" Workload="localhost-k8s-csi--node--driver--ct2sc-eth0" May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.755 [INFO][5634] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" HandleID="k8s-pod-network.2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" Workload="localhost-k8s-csi--node--driver--ct2sc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000316650), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ct2sc", "timestamp":"2025-05-13 12:56:20.743542674 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.755 [INFO][5634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.755 [INFO][5634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.755 [INFO][5634] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.757 [INFO][5634] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" host="localhost" May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.762 [INFO][5634] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.767 [INFO][5634] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.768 [INFO][5634] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.771 [INFO][5634] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.771 [INFO][5634] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" host="localhost" May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.773 [INFO][5634] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.777 [INFO][5634] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" host="localhost" May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.783 [INFO][5634] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" host="localhost" May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.783 [INFO][5634] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" host="localhost" May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.783 [INFO][5634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 12:56:20.811270 containerd[1557]: 2025-05-13 12:56:20.783 [INFO][5634] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" HandleID="k8s-pod-network.2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" Workload="localhost-k8s-csi--node--driver--ct2sc-eth0" May 13 12:56:20.811806 containerd[1557]: 2025-05-13 12:56:20.787 [INFO][5598] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" Namespace="calico-system" Pod="csi-node-driver-ct2sc" WorkloadEndpoint="localhost-k8s-csi--node--driver--ct2sc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ct2sc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99af3312-c9d6-477a-83b3-e903dd409646", ResourceVersion:"592", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 55, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ct2sc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f9046696f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:56:20.811806 containerd[1557]: 2025-05-13 12:56:20.788 [INFO][5598] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" Namespace="calico-system" Pod="csi-node-driver-ct2sc" WorkloadEndpoint="localhost-k8s-csi--node--driver--ct2sc-eth0" May 13 12:56:20.811806 containerd[1557]: 2025-05-13 12:56:20.788 [INFO][5598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f9046696f0 ContainerID="2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" Namespace="calico-system" Pod="csi-node-driver-ct2sc" WorkloadEndpoint="localhost-k8s-csi--node--driver--ct2sc-eth0" May 13 12:56:20.811806 containerd[1557]: 2025-05-13 12:56:20.797 [INFO][5598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" Namespace="calico-system" Pod="csi-node-driver-ct2sc" WorkloadEndpoint="localhost-k8s-csi--node--driver--ct2sc-eth0" May 13 12:56:20.811806 containerd[1557]: 2025-05-13 12:56:20.797 [INFO][5598] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" Namespace="calico-system" Pod="csi-node-driver-ct2sc" WorkloadEndpoint="localhost-k8s-csi--node--driver--ct2sc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ct2sc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99af3312-c9d6-477a-83b3-e903dd409646", ResourceVersion:"592", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 55, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd", Pod:"csi-node-driver-ct2sc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f9046696f0", MAC:"8a:61:cc:82:19:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:56:20.811806 containerd[1557]: 2025-05-13 12:56:20.808 [INFO][5598] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" Namespace="calico-system" Pod="csi-node-driver-ct2sc" WorkloadEndpoint="localhost-k8s-csi--node--driver--ct2sc-eth0" May 13 12:56:20.836751 containerd[1557]: time="2025-05-13T12:56:20.836340578Z" level=info msg="connecting to shim 2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd" address="unix:///run/containerd/s/0b122f325f1e43b95c2fe56d4e399d697776e97a08a99ffbe886b885cc9b3a4a" namespace=k8s.io protocol=ttrpc version=3 May 13 12:56:20.869406 systemd[1]: Started cri-containerd-2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd.scope - libcontainer container 2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd. 
May 13 12:56:20.886029 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:56:20.898100 systemd-networkd[1488]: cali4d4be89244f: Link UP May 13 12:56:20.898299 systemd-networkd[1488]: cali4d4be89244f: Gained carrier May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.701 [INFO][5604] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5559745f68--rjh79-eth0 calico-apiserver-5559745f68- calico-apiserver 1223f4b3-ae3d-43b8-824a-6a7efb5e24c8 710 0 2025-05-13 12:55:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5559745f68 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5559745f68-rjh79 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4d4be89244f [] []}} ContainerID="af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" Namespace="calico-apiserver" Pod="calico-apiserver-5559745f68-rjh79" WorkloadEndpoint="localhost-k8s-calico--apiserver--5559745f68--rjh79-" May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.701 [INFO][5604] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" Namespace="calico-apiserver" Pod="calico-apiserver-5559745f68-rjh79" WorkloadEndpoint="localhost-k8s-calico--apiserver--5559745f68--rjh79-eth0" May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.746 [INFO][5632] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" HandleID="k8s-pod-network.af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" Workload="localhost-k8s-calico--apiserver--5559745f68--rjh79-eth0" May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.755 [INFO][5632] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" HandleID="k8s-pod-network.af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" Workload="localhost-k8s-calico--apiserver--5559745f68--rjh79-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e1340), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5559745f68-rjh79", "timestamp":"2025-05-13 12:56:20.746283422 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.755 [INFO][5632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.783 [INFO][5632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.783 [INFO][5632] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.859 [INFO][5632] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" host="localhost" May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.867 [INFO][5632] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.872 [INFO][5632] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.874 [INFO][5632] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.877 [INFO][5632] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.877 [INFO][5632] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" host="localhost" May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.878 [INFO][5632] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.882 [INFO][5632] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" host="localhost" May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.888 [INFO][5632] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" host="localhost" May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.888 [INFO][5632] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" host="localhost" May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.888 [INFO][5632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 12:56:20.914596 containerd[1557]: 2025-05-13 12:56:20.888 [INFO][5632] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" HandleID="k8s-pod-network.af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" Workload="localhost-k8s-calico--apiserver--5559745f68--rjh79-eth0" May 13 12:56:20.915354 containerd[1557]: 2025-05-13 12:56:20.894 [INFO][5604] cni-plugin/k8s.go 386: Populated endpoint ContainerID="af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" Namespace="calico-apiserver" Pod="calico-apiserver-5559745f68-rjh79" WorkloadEndpoint="localhost-k8s-calico--apiserver--5559745f68--rjh79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5559745f68--rjh79-eth0", GenerateName:"calico-apiserver-5559745f68-", Namespace:"calico-apiserver", SelfLink:"", UID:"1223f4b3-ae3d-43b8-824a-6a7efb5e24c8", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 55, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5559745f68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5559745f68-rjh79", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d4be89244f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:56:20.915354 containerd[1557]: 2025-05-13 12:56:20.894 [INFO][5604] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" Namespace="calico-apiserver" Pod="calico-apiserver-5559745f68-rjh79" WorkloadEndpoint="localhost-k8s-calico--apiserver--5559745f68--rjh79-eth0" May 13 12:56:20.915354 containerd[1557]: 2025-05-13 12:56:20.895 [INFO][5604] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d4be89244f ContainerID="af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" Namespace="calico-apiserver" Pod="calico-apiserver-5559745f68-rjh79" WorkloadEndpoint="localhost-k8s-calico--apiserver--5559745f68--rjh79-eth0" May 13 12:56:20.915354 containerd[1557]: 2025-05-13 12:56:20.898 [INFO][5604] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" Namespace="calico-apiserver" Pod="calico-apiserver-5559745f68-rjh79" WorkloadEndpoint="localhost-k8s-calico--apiserver--5559745f68--rjh79-eth0" May 13 12:56:20.915354 containerd[1557]: 2025-05-13 12:56:20.898 [INFO][5604] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" Namespace="calico-apiserver" Pod="calico-apiserver-5559745f68-rjh79" WorkloadEndpoint="localhost-k8s-calico--apiserver--5559745f68--rjh79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5559745f68--rjh79-eth0", GenerateName:"calico-apiserver-5559745f68-", Namespace:"calico-apiserver", SelfLink:"", UID:"1223f4b3-ae3d-43b8-824a-6a7efb5e24c8", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 55, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5559745f68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d", Pod:"calico-apiserver-5559745f68-rjh79", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d4be89244f", MAC:"82:01:e9:41:2c:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:56:20.915354 containerd[1557]: 2025-05-13 12:56:20.909 [INFO][5604] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" Namespace="calico-apiserver" Pod="calico-apiserver-5559745f68-rjh79" WorkloadEndpoint="localhost-k8s-calico--apiserver--5559745f68--rjh79-eth0" May 13 12:56:20.918155 containerd[1557]: time="2025-05-13T12:56:20.917788122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct2sc,Uid:99af3312-c9d6-477a-83b3-e903dd409646,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd\"" May 13 12:56:20.936273 systemd[1]: Started sshd@19-10.0.0.90:22-10.0.0.1:60830.service - OpenSSH per-connection server daemon (10.0.0.1:60830). May 13 12:56:20.948847 containerd[1557]: time="2025-05-13T12:56:20.948797584Z" level=info msg="connecting to shim af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d" address="unix:///run/containerd/s/831faf2238b4cdbea3f7e56d43e12737a885dae7622aade956de5f8a0102f8f1" namespace=k8s.io protocol=ttrpc version=3 May 13 12:56:20.976269 systemd[1]: Started cri-containerd-af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d.scope - libcontainer container af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d. 
May 13 12:56:20.990977 sshd[5721]: Accepted publickey for core from 10.0.0.1 port 60830 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:56:20.991905 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:56:20.993758 sshd-session[5721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:56:20.999101 systemd-logind[1539]: New session 20 of user core. May 13 12:56:21.006287 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 12:56:21.030162 containerd[1557]: time="2025-05-13T12:56:21.029625323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5559745f68-rjh79,Uid:1223f4b3-ae3d-43b8-824a-6a7efb5e24c8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d\"" May 13 12:56:21.148879 sshd[5761]: Connection closed by 10.0.0.1 port 60830 May 13 12:56:21.150309 sshd-session[5721]: pam_unix(sshd:session): session closed for user core May 13 12:56:21.155203 systemd-logind[1539]: Session 20 logged out. Waiting for processes to exit. May 13 12:56:21.155548 systemd[1]: sshd@19-10.0.0.90:22-10.0.0.1:60830.service: Deactivated successfully. May 13 12:56:21.158362 systemd[1]: session-20.scope: Deactivated successfully. May 13 12:56:21.160296 systemd-logind[1539]: Removed session 20. May 13 12:56:21.196385 containerd[1557]: time="2025-05-13T12:56:21.196340050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:21.197149 containerd[1557]: time="2025-05-13T12:56:21.197109258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 13 12:56:21.198272 containerd[1557]: time="2025-05-13T12:56:21.198226082Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:21.199905 containerd[1557]: time="2025-05-13T12:56:21.199870482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:21.200400 containerd[1557]: time="2025-05-13T12:56:21.200371695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 3.565960499s" May 13 12:56:21.200442 containerd[1557]: time="2025-05-13T12:56:21.200401142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 13 12:56:21.201189 containerd[1557]: time="2025-05-13T12:56:21.201162865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 12:56:21.207617 containerd[1557]: time="2025-05-13T12:56:21.207580253Z" level=info msg="CreateContainer within sandbox \"4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 12:56:21.213521 containerd[1557]: time="2025-05-13T12:56:21.213489296Z" level=info msg="Container 299329edc0efbbbde92a8cdb752659a7312945d3c8d5b27ff6e365a097f38974: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:21.221051 containerd[1557]: time="2025-05-13T12:56:21.221020054Z" level=info msg="CreateContainer within sandbox \"4369b44fa35855b8004d8f7ea316cf9de3697c810d68e179a692a8c65157bb37\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"299329edc0efbbbde92a8cdb752659a7312945d3c8d5b27ff6e365a097f38974\"" May 13 12:56:21.221509 containerd[1557]: time="2025-05-13T12:56:21.221470279Z" level=info msg="StartContainer for \"299329edc0efbbbde92a8cdb752659a7312945d3c8d5b27ff6e365a097f38974\"" May 13 12:56:21.222517 containerd[1557]: time="2025-05-13T12:56:21.222489246Z" level=info msg="connecting to shim 299329edc0efbbbde92a8cdb752659a7312945d3c8d5b27ff6e365a097f38974" address="unix:///run/containerd/s/680e14b46c8cdf3742f3c2480ed758d66cb30499941bead6e0ea4f4d2b744ae9" protocol=ttrpc version=3 May 13 12:56:21.249286 systemd[1]: Started cri-containerd-299329edc0efbbbde92a8cdb752659a7312945d3c8d5b27ff6e365a097f38974.scope - libcontainer container 299329edc0efbbbde92a8cdb752659a7312945d3c8d5b27ff6e365a097f38974. May 13 12:56:21.305046 containerd[1557]: time="2025-05-13T12:56:21.304994052Z" level=info msg="StartContainer for \"299329edc0efbbbde92a8cdb752659a7312945d3c8d5b27ff6e365a097f38974\" returns successfully" May 13 12:56:21.659296 containerd[1557]: time="2025-05-13T12:56:21.659245238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"299329edc0efbbbde92a8cdb752659a7312945d3c8d5b27ff6e365a097f38974\" id:\"b68250e8fffda18e437ee2e1f2d95d12445cf390bc41ed62dcc59aebb612aff3\" pid:5832 exited_at:{seconds:1747140981 nanos:658909172}" May 13 12:56:21.669995 kubelet[2682]: I0513 12:56:21.669568 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-857fbf49df-bgllm" podStartSLOduration=69.100323499 podStartE2EDuration="1m12.66954723s" podCreationTimestamp="2025-05-13 12:55:09 +0000 UTC" firstStartedPulling="2025-05-13 12:56:17.631833521 +0000 UTC m=+79.348487258" lastFinishedPulling="2025-05-13 12:56:21.201057252 +0000 UTC m=+82.917710989" observedRunningTime="2025-05-13 12:56:21.62835902 +0000 UTC m=+83.345012757" watchObservedRunningTime="2025-05-13 12:56:21.66954723 +0000 UTC m=+83.386200957" May 13 12:56:22.276299 systemd-networkd[1488]: cali4d4be89244f: Gained IPv6LL May 13 12:56:22.660358 systemd-networkd[1488]: cali0f9046696f0: Gained IPv6LL May 13 12:56:25.851197 containerd[1557]: time="2025-05-13T12:56:25.851120056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:25.851808 containerd[1557]: time="2025-05-13T12:56:25.851784660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 13 12:56:25.852931 containerd[1557]: time="2025-05-13T12:56:25.852894778Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:25.856686 containerd[1557]: time="2025-05-13T12:56:25.856632281Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id 
\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 4.655440191s" May 13 12:56:25.856686 containerd[1557]: time="2025-05-13T12:56:25.856665835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 12:56:25.857201 containerd[1557]: time="2025-05-13T12:56:25.857175392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:25.857987 containerd[1557]: time="2025-05-13T12:56:25.857850556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 12:56:25.858487 containerd[1557]: time="2025-05-13T12:56:25.858453743Z" level=info msg="CreateContainer within sandbox \"28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 12:56:25.867259 containerd[1557]: time="2025-05-13T12:56:25.867211348Z" level=info msg="Container 548b7e489f1ada5767fc216d58ba51b7538da3401199151020be6660c9562e2e: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:25.872961 containerd[1557]: time="2025-05-13T12:56:25.872925009Z" level=info msg="CreateContainer within sandbox \"28482a96e1c8c820172a3702fa2841b7c574d33bf5518ac98686ea495d292847\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"548b7e489f1ada5767fc216d58ba51b7538da3401199151020be6660c9562e2e\"" May 13 12:56:25.873567 containerd[1557]: time="2025-05-13T12:56:25.873543203Z" level=info msg="StartContainer for \"548b7e489f1ada5767fc216d58ba51b7538da3401199151020be6660c9562e2e\"" May 13 12:56:25.874702 containerd[1557]: time="2025-05-13T12:56:25.874667818Z" level=info msg="connecting to shim 548b7e489f1ada5767fc216d58ba51b7538da3401199151020be6660c9562e2e" address="unix:///run/containerd/s/be5b0a87ed158ebb47c30e57ad905dc869241b89015919f2b7033533ca9b4c0f" protocol=ttrpc version=3 May 13 12:56:25.912292 systemd[1]: Started cri-containerd-548b7e489f1ada5767fc216d58ba51b7538da3401199151020be6660c9562e2e.scope - libcontainer container 548b7e489f1ada5767fc216d58ba51b7538da3401199151020be6660c9562e2e. May 13 12:56:26.029022 containerd[1557]: time="2025-05-13T12:56:26.028969039Z" level=info msg="StartContainer for \"548b7e489f1ada5767fc216d58ba51b7538da3401199151020be6660c9562e2e\" returns successfully" May 13 12:56:26.161436 systemd[1]: Started sshd@20-10.0.0.90:22-10.0.0.1:60840.service - OpenSSH per-connection server daemon (10.0.0.1:60840). May 13 12:56:26.218819 sshd[5892]: Accepted publickey for core from 10.0.0.1 port 60840 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:56:26.220357 sshd-session[5892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:56:26.224784 systemd-logind[1539]: New session 21 of user core. May 13 12:56:26.232260 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 13 12:56:26.359538 sshd[5894]: Connection closed by 10.0.0.1 port 60840 May 13 12:56:26.359840 sshd-session[5892]: pam_unix(sshd:session): session closed for user core May 13 12:56:26.374953 systemd[1]: sshd@20-10.0.0.90:22-10.0.0.1:60840.service: Deactivated successfully. May 13 12:56:26.377131 systemd[1]: session-21.scope: Deactivated successfully. May 13 12:56:26.377897 systemd-logind[1539]: Session 21 logged out. Waiting for processes to exit. May 13 12:56:26.380775 systemd[1]: Started sshd@21-10.0.0.90:22-10.0.0.1:60852.service - OpenSSH per-connection server daemon (10.0.0.1:60852). May 13 12:56:26.381628 systemd-logind[1539]: Removed session 21. May 13 12:56:26.614158 sshd[5908]: Accepted publickey for core from 10.0.0.1 port 60852 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:56:26.615911 sshd-session[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:56:26.620436 systemd-logind[1539]: New session 22 of user core. May 13 12:56:26.631303 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 12:56:26.840008 kubelet[2682]: I0513 12:56:26.839943 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5559745f68-7jjmz" podStartSLOduration=70.579621539 podStartE2EDuration="1m17.839927687s" podCreationTimestamp="2025-05-13 12:55:09 +0000 UTC" firstStartedPulling="2025-05-13 12:56:18.597065701 +0000 UTC m=+80.313719438" lastFinishedPulling="2025-05-13 12:56:25.857371858 +0000 UTC m=+87.574025586" observedRunningTime="2025-05-13 12:56:26.839817908 +0000 UTC m=+88.556471645" watchObservedRunningTime="2025-05-13 12:56:26.839927687 +0000 UTC m=+88.556581415" May 13 12:56:27.148851 sshd[5910]: Connection closed by 10.0.0.1 port 60852 May 13 12:56:27.149962 sshd-session[5908]: pam_unix(sshd:session): session closed for user core May 13 12:56:27.159387 systemd[1]: sshd@21-10.0.0.90:22-10.0.0.1:60852.service: Deactivated successfully. May 13 12:56:27.162087 systemd[1]: session-22.scope: Deactivated successfully. May 13 12:56:27.163119 systemd-logind[1539]: Session 22 logged out. Waiting for processes to exit. May 13 12:56:27.166736 systemd[1]: Started sshd@22-10.0.0.90:22-10.0.0.1:60854.service - OpenSSH per-connection server daemon (10.0.0.1:60854). May 13 12:56:27.167477 systemd-logind[1539]: Removed session 22. May 13 12:56:27.221795 sshd[5924]: Accepted publickey for core from 10.0.0.1 port 60854 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:56:27.223772 sshd-session[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:56:27.229760 systemd-logind[1539]: New session 23 of user core. May 13 12:56:27.237321 systemd[1]: Started session-23.scope - Session 23 of User core. May 13 12:56:28.884904 sshd[5928]: Connection closed by 10.0.0.1 port 60854 May 13 12:56:28.885235 sshd-session[5924]: pam_unix(sshd:session): session closed for user core May 13 12:56:28.894897 systemd[1]: sshd@22-10.0.0.90:22-10.0.0.1:60854.service: Deactivated successfully. May 13 12:56:28.897523 systemd[1]: session-23.scope: Deactivated successfully. May 13 12:56:28.898345 systemd-logind[1539]: Session 23 logged out. Waiting for processes to exit. May 13 12:56:28.902174 systemd[1]: Started sshd@23-10.0.0.90:22-10.0.0.1:45308.service - OpenSSH per-connection server daemon (10.0.0.1:45308). May 13 12:56:28.903809 systemd-logind[1539]: Removed session 23. 
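The pod_startup_latency_tracker line above for calico-apiserver-5559745f68-7jjmz decomposes cleanly: podStartSLOduration is the end-to-end duration minus the image-pull window. Using the monotonic m=+ offsets in the line (m=+80.313719438 to m=+87.574025586 for the pull, against the 1m17.839927687s end-to-end figure) reproduces 70.579621539s exactly; the wall-clock arithmetic below lands within a few nanoseconds of it.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kubelet log line above.
	created, _ := time.Parse(time.RFC3339Nano, "2025-05-13T12:55:09Z")
	firstPull, _ := time.Parse(time.RFC3339Nano, "2025-05-13T12:56:18.597065701Z")
	lastPull, _ := time.Parse(time.RFC3339Nano, "2025-05-13T12:56:25.857371858Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-05-13T12:56:26.839927687Z")

	e2e := running.Sub(created)          // 1m17.839927687s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ≈ 1m10.57962153s ≈ podStartSLOduration
	fmt.Println(e2e, slo)
}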
May 13 12:56:29.094089 sshd[5948]: Accepted publickey for core from 10.0.0.1 port 45308 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 12:56:29.096818 sshd-session[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:56:29.108314 systemd-logind[1539]: New session 24 of user core.
May 13 12:56:29.111330 systemd[1]: Started session-24.scope - Session 24 of User core.
May 13 12:56:29.236732 containerd[1557]: time="2025-05-13T12:56:29.236606946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:56:29.238312 containerd[1557]: time="2025-05-13T12:56:29.238266421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898"
May 13 12:56:29.240704 containerd[1557]: time="2025-05-13T12:56:29.240674927Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:56:29.243255 containerd[1557]: time="2025-05-13T12:56:29.243189076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:56:29.244681 containerd[1557]: time="2025-05-13T12:56:29.244643728Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 3.386763233s"
May 13 12:56:29.244681 containerd[1557]: time="2025-05-13T12:56:29.244680328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\""
May 13 12:56:29.247747 containerd[1557]: time="2025-05-13T12:56:29.247692358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 13 12:56:29.247851 containerd[1557]: time="2025-05-13T12:56:29.247754478Z" level=info msg="CreateContainer within sandbox \"2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 13 12:56:29.272903 containerd[1557]: time="2025-05-13T12:56:29.272003528Z" level=info msg="Container 24d838f455490235012c0b394be679b04438aaa8224b2fdab89b0dbd219d04e3: CDI devices from CRI Config.CDIDevices: []"
May 13 12:56:29.305677 containerd[1557]: time="2025-05-13T12:56:29.305604304Z" level=info msg="CreateContainer within sandbox \"2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"24d838f455490235012c0b394be679b04438aaa8224b2fdab89b0dbd219d04e3\""
May 13 12:56:29.311031 containerd[1557]: time="2025-05-13T12:56:29.310999262Z" level=info msg="StartContainer for \"24d838f455490235012c0b394be679b04438aaa8224b2fdab89b0dbd219d04e3\""
May 13 12:56:29.326261 containerd[1557]: time="2025-05-13T12:56:29.326196308Z" level=info msg="connecting to shim 24d838f455490235012c0b394be679b04438aaa8224b2fdab89b0dbd219d04e3" address="unix:///run/containerd/s/0b122f325f1e43b95c2fe56d4e399d697776e97a08a99ffbe886b885cc9b3a4a" protocol=ttrpc version=3
May 13 12:56:29.352554 systemd[1]: Started cri-containerd-24d838f455490235012c0b394be679b04438aaa8224b2fdab89b0dbd219d04e3.scope - libcontainer container 24d838f455490235012c0b394be679b04438aaa8224b2fdab89b0dbd219d04e3.
May 13 12:56:29.393794 sshd[5955]: Connection closed by 10.0.0.1 port 45308
May 13 12:56:29.395351 sshd-session[5948]: pam_unix(sshd:session): session closed for user core
May 13 12:56:29.406404 systemd[1]: sshd@23-10.0.0.90:22-10.0.0.1:45308.service: Deactivated successfully.
May 13 12:56:29.408947 containerd[1557]: time="2025-05-13T12:56:29.408894440Z" level=info msg="StartContainer for \"24d838f455490235012c0b394be679b04438aaa8224b2fdab89b0dbd219d04e3\" returns successfully"
May 13 12:56:29.409215 systemd[1]: session-24.scope: Deactivated successfully.
May 13 12:56:29.411399 systemd-logind[1539]: Session 24 logged out. Waiting for processes to exit.
May 13 12:56:29.414788 systemd[1]: Started sshd@24-10.0.0.90:22-10.0.0.1:45312.service - OpenSSH per-connection server daemon (10.0.0.1:45312).
May 13 12:56:29.415896 systemd-logind[1539]: Removed session 24.
May 13 12:56:29.476029 sshd[5997]: Accepted publickey for core from 10.0.0.1 port 45312 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 12:56:29.478065 sshd-session[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:56:29.483166 systemd-logind[1539]: New session 25 of user core.
May 13 12:56:29.493364 systemd[1]: Started session-25.scope - Session 25 of User core.
May 13 12:56:29.612194 sshd[5999]: Connection closed by 10.0.0.1 port 45312
May 13 12:56:29.612579 sshd-session[5997]: pam_unix(sshd:session): session closed for user core
May 13 12:56:29.617636 systemd[1]: sshd@24-10.0.0.90:22-10.0.0.1:45312.service: Deactivated successfully.
May 13 12:56:29.620328 systemd[1]: session-25.scope: Deactivated successfully.
May 13 12:56:29.621230 systemd-logind[1539]: Session 25 logged out. Waiting for processes to exit.
May 13 12:56:29.623086 systemd-logind[1539]: Removed session 25.
May 13 12:56:29.681630 containerd[1557]: time="2025-05-13T12:56:29.681574788Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:56:29.682530 containerd[1557]: time="2025-05-13T12:56:29.682495899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77"
May 13 12:56:29.684108 containerd[1557]: time="2025-05-13T12:56:29.684066894Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 436.280866ms"
May 13 12:56:29.684108 containerd[1557]: time="2025-05-13T12:56:29.684105448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\""
May 13 12:56:29.685080 containerd[1557]: time="2025-05-13T12:56:29.685038783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 13 12:56:29.686170 containerd[1557]: time="2025-05-13T12:56:29.686129850Z" level=info msg="CreateContainer within sandbox \"af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 13 12:56:29.695263 containerd[1557]: time="2025-05-13T12:56:29.695217041Z" level=info msg="Container da4d479a1987742ff1302148de8b8dbdd16dae520471414e0f9a115afd9bb529: CDI devices from CRI Config.CDIDevices: []"
May 13 12:56:29.704430 containerd[1557]: time="2025-05-13T12:56:29.704381780Z" level=info msg="CreateContainer within sandbox \"af6cabdfbfbff01566b147f8cd5ee297370d85e099c06f52437e05b0fa1c801d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"da4d479a1987742ff1302148de8b8dbdd16dae520471414e0f9a115afd9bb529\""
May 13 12:56:29.705107 containerd[1557]: time="2025-05-13T12:56:29.704875433Z" level=info msg="StartContainer for \"da4d479a1987742ff1302148de8b8dbdd16dae520471414e0f9a115afd9bb529\""
May 13 12:56:29.706225 containerd[1557]: time="2025-05-13T12:56:29.706203113Z" level=info msg="connecting to shim da4d479a1987742ff1302148de8b8dbdd16dae520471414e0f9a115afd9bb529" address="unix:///run/containerd/s/831faf2238b4cdbea3f7e56d43e12737a885dae7622aade956de5f8a0102f8f1" protocol=ttrpc version=3
May 13 12:56:29.737283 systemd[1]: Started cri-containerd-da4d479a1987742ff1302148de8b8dbdd16dae520471414e0f9a115afd9bb529.scope - libcontainer container da4d479a1987742ff1302148de8b8dbdd16dae520471414e0f9a115afd9bb529.
May 13 12:56:29.788077 containerd[1557]: time="2025-05-13T12:56:29.787971256Z" level=info msg="StartContainer for \"da4d479a1987742ff1302148de8b8dbdd16dae520471414e0f9a115afd9bb529\" returns successfully"
May 13 12:56:30.733150 kubelet[2682]: I0513 12:56:30.732852 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5559745f68-rjh79" podStartSLOduration=73.079302203 podStartE2EDuration="1m21.732837065s" podCreationTimestamp="2025-05-13 12:55:09 +0000 UTC" firstStartedPulling="2025-05-13 12:56:21.031367119 +0000 UTC m=+82.748020856" lastFinishedPulling="2025-05-13 12:56:29.684901981 +0000 UTC m=+91.401555718" observedRunningTime="2025-05-13 12:56:30.732518857 +0000 UTC m=+92.449172584" watchObservedRunningTime="2025-05-13 12:56:30.732837065 +0000 UTC m=+92.449490802"
May 13 12:56:31.499608 containerd[1557]: time="2025-05-13T12:56:31.499555938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:56:31.500400 containerd[1557]: time="2025-05-13T12:56:31.500338123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773"
May 13 12:56:31.501826 containerd[1557]: time="2025-05-13T12:56:31.501720866Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:56:31.504131 containerd[1557]: time="2025-05-13T12:56:31.504088179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:56:31.504864 containerd[1557]: time="2025-05-13T12:56:31.504819146Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.819743281s"
May 13 12:56:31.504864 containerd[1557]: time="2025-05-13T12:56:31.504858130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\""
May 13 12:56:31.529919 containerd[1557]: time="2025-05-13T12:56:31.529852349Z" level=info msg="CreateContainer within sandbox \"2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 13 12:56:31.538528 containerd[1557]: time="2025-05-13T12:56:31.538484905Z" level=info msg="Container 51fc20a83d3221f8b522f3c8459fe3dd8b6551b416f8066973e41730e42e19a5: CDI devices from CRI Config.CDIDevices: []"
May 13 12:56:31.548756 containerd[1557]: time="2025-05-13T12:56:31.548706687Z" level=info msg="CreateContainer within sandbox \"2d06620ed690627757582e2e0c509bbc0e739bfc4d02a70482a5f062fc448afd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"51fc20a83d3221f8b522f3c8459fe3dd8b6551b416f8066973e41730e42e19a5\""
May 13 12:56:31.549404 containerd[1557]: time="2025-05-13T12:56:31.549343183Z" level=info msg="StartContainer for \"51fc20a83d3221f8b522f3c8459fe3dd8b6551b416f8066973e41730e42e19a5\""
May 13 12:56:31.551568 containerd[1557]: time="2025-05-13T12:56:31.551530041Z" level=info msg="connecting to shim 51fc20a83d3221f8b522f3c8459fe3dd8b6551b416f8066973e41730e42e19a5" address="unix:///run/containerd/s/0b122f325f1e43b95c2fe56d4e399d697776e97a08a99ffbe886b885cc9b3a4a" protocol=ttrpc version=3
May 13 12:56:31.575381 systemd[1]: Started cri-containerd-51fc20a83d3221f8b522f3c8459fe3dd8b6551b416f8066973e41730e42e19a5.scope - libcontainer container 51fc20a83d3221f8b522f3c8459fe3dd8b6551b416f8066973e41730e42e19a5.
May 13 12:56:31.708813 containerd[1557]: time="2025-05-13T12:56:31.708408299Z" level=info msg="StartContainer for \"51fc20a83d3221f8b522f3c8459fe3dd8b6551b416f8066973e41730e42e19a5\" returns successfully"
May 13 12:56:32.467619 kubelet[2682]: I0513 12:56:32.467575 2682 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 13 12:56:32.467619 kubelet[2682]: I0513 12:56:32.467607 2682 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 13 12:56:32.793379 kubelet[2682]: I0513 12:56:32.792729 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ct2sc" podStartSLOduration=73.195608937 podStartE2EDuration="1m23.792711774s" podCreationTimestamp="2025-05-13 12:55:09 +0000 UTC" firstStartedPulling="2025-05-13 12:56:20.919648869 +0000 UTC m=+82.636302606" lastFinishedPulling="2025-05-13 12:56:31.516751706 +0000 UTC m=+93.233405443" observedRunningTime="2025-05-13 12:56:32.792502083 +0000 UTC m=+94.509155820" watchObservedRunningTime="2025-05-13 12:56:32.792711774 +0000 UTC m=+94.509365511"
May 13 12:56:34.630045 systemd[1]: Started sshd@25-10.0.0.90:22-10.0.0.1:45324.service - OpenSSH per-connection server daemon (10.0.0.1:45324).
May 13 12:56:34.699425 sshd[6098]: Accepted publickey for core from 10.0.0.1 port 45324 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 12:56:34.701190 sshd-session[6098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:56:34.705564 systemd-logind[1539]: New session 26 of user core.
May 13 12:56:34.719277 systemd[1]: Started session-26.scope - Session 26 of User core.
May 13 12:56:34.843613 sshd[6100]: Connection closed by 10.0.0.1 port 45324
May 13 12:56:34.843955 sshd-session[6098]: pam_unix(sshd:session): session closed for user core
May 13 12:56:34.848671 systemd[1]: sshd@25-10.0.0.90:22-10.0.0.1:45324.service: Deactivated successfully.
May 13 12:56:34.851105 systemd[1]: session-26.scope: Deactivated successfully.
May 13 12:56:34.852023 systemd-logind[1539]: Session 26 logged out. Waiting for processes to exit.
May 13 12:56:34.853421 systemd-logind[1539]: Removed session 26.
May 13 12:56:39.860517 systemd[1]: Started sshd@26-10.0.0.90:22-10.0.0.1:41930.service - OpenSSH per-connection server daemon (10.0.0.1:41930).
May 13 12:56:39.921848 sshd[6116]: Accepted publickey for core from 10.0.0.1 port 41930 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 12:56:39.923785 sshd-session[6116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:56:39.928727 systemd-logind[1539]: New session 27 of user core.
May 13 12:56:39.940291 systemd[1]: Started session-27.scope - Session 27 of User core.
May 13 12:56:40.072889 sshd[6118]: Connection closed by 10.0.0.1 port 41930
May 13 12:56:40.073801 sshd-session[6116]: pam_unix(sshd:session): session closed for user core
May 13 12:56:40.079443 systemd[1]: sshd@26-10.0.0.90:22-10.0.0.1:41930.service: Deactivated successfully.
May 13 12:56:40.081511 systemd[1]: session-27.scope: Deactivated successfully.
May 13 12:56:40.082667 systemd-logind[1539]: Session 27 logged out. Waiting for processes to exit.
May 13 12:56:40.084902 systemd-logind[1539]: Removed session 27.
May 13 12:56:43.667895 containerd[1557]: time="2025-05-13T12:56:43.667855210Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1e9f5bdcab78643f7b4c4b99dfc386d283dab99aea673955aa8b97f68c61944\" id:\"bce3b0153256854fca4bc88bc1300dceaa5e25f046314f48bd4e2685c8ec2aa8\" pid:6145 exited_at:{seconds:1747141003 nanos:667463255}"
May 13 12:56:43.669595 kubelet[2682]: E0513 12:56:43.669562 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:56:45.090563 systemd[1]: Started sshd@27-10.0.0.90:22-10.0.0.1:41940.service - OpenSSH per-connection server daemon (10.0.0.1:41940).
May 13 12:56:45.165219 sshd[6158]: Accepted publickey for core from 10.0.0.1 port 41940 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 12:56:45.166957 sshd-session[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:56:45.171684 systemd-logind[1539]: New session 28 of user core.
May 13 12:56:45.177328 systemd[1]: Started session-28.scope - Session 28 of User core.
May 13 12:56:45.300497 sshd[6160]: Connection closed by 10.0.0.1 port 41940
May 13 12:56:45.300878 sshd-session[6158]: pam_unix(sshd:session): session closed for user core
May 13 12:56:45.305501 systemd[1]: sshd@27-10.0.0.90:22-10.0.0.1:41940.service: Deactivated successfully.
May 13 12:56:45.307345 systemd[1]: session-28.scope: Deactivated successfully.
May 13 12:56:45.308169 systemd-logind[1539]: Session 28 logged out. Waiting for processes to exit.
May 13 12:56:45.309524 systemd-logind[1539]: Removed session 28.
May 13 12:56:45.380316 kubelet[2682]: E0513 12:56:45.380227 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:56:50.317270 systemd[1]: Started sshd@28-10.0.0.90:22-10.0.0.1:33198.service - OpenSSH per-connection server daemon (10.0.0.1:33198).
May 13 12:56:50.362955 sshd[6174]: Accepted publickey for core from 10.0.0.1 port 33198 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 12:56:50.364369 sshd-session[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:56:50.369026 systemd-logind[1539]: New session 29 of user core.
May 13 12:56:50.374392 systemd[1]: Started session-29.scope - Session 29 of User core.
May 13 12:56:50.523534 sshd[6176]: Connection closed by 10.0.0.1 port 33198
May 13 12:56:50.523909 sshd-session[6174]: pam_unix(sshd:session): session closed for user core
May 13 12:56:50.529540 systemd[1]: sshd@28-10.0.0.90:22-10.0.0.1:33198.service: Deactivated successfully.
May 13 12:56:50.531670 systemd[1]: session-29.scope: Deactivated successfully.
May 13 12:56:50.532699 systemd-logind[1539]: Session 29 logged out. Waiting for processes to exit.
May 13 12:56:50.533904 systemd-logind[1539]: Removed session 29.
May 13 12:56:51.667449 containerd[1557]: time="2025-05-13T12:56:51.667406241Z" level=info msg="TaskExit event in podsandbox handler container_id:\"299329edc0efbbbde92a8cdb752659a7312945d3c8d5b27ff6e365a097f38974\" id:\"38ff375fa40858195536d56ef22ba3ec39ad19afbc3fd7e785901d14a26f1db9\" pid:6202 exited_at:{seconds:1747141011 nanos:666993538}"