Sep 9 21:55:52.798480 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 19:55:16 -00 2025
Sep 9 21:55:52.798518 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f0ebd120fc09fb344715b1492c3f1d02e1457be2c9792ea5ffb3fe4b15efa812
Sep 9 21:55:52.798535 kernel: BIOS-provided physical RAM map:
Sep 9 21:55:52.798545 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 9 21:55:52.798554 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 9 21:55:52.798563 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 9 21:55:52.798602 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 9 21:55:52.798613 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 9 21:55:52.798628 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 9 21:55:52.798643 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 9 21:55:52.798653 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 9 21:55:52.798688 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 9 21:55:52.798697 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 9 21:55:52.798707 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 9 21:55:52.798720 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 9 21:55:52.798735 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 9 21:55:52.798774 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 9 21:55:52.798786 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 9 21:55:52.798796 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 9 21:55:52.798807 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 9 21:55:52.798817 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 9 21:55:52.798827 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 9 21:55:52.798838 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 21:55:52.798848 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 21:55:52.798858 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 9 21:55:52.798873 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 21:55:52.798883 kernel: NX (Execute Disable) protection: active
Sep 9 21:55:52.798893 kernel: APIC: Static calls initialized
Sep 9 21:55:52.798902 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 9 21:55:52.798913 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 9 21:55:52.798922 kernel: extended physical RAM map:
Sep 9 21:55:52.798932 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 9 21:55:52.798942 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 9 21:55:52.798952 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 9 21:55:52.798961 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 9 21:55:52.798972 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 9 21:55:52.798986 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 9 21:55:52.798996 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 9 21:55:52.799006 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 9 21:55:52.799017 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 9 21:55:52.799032 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 9 21:55:52.799042 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 9 21:55:52.799056 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 9 21:55:52.799067 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 9 21:55:52.799078 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 9 21:55:52.799088 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 9 21:55:52.799126 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 9 21:55:52.799137 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 9 21:55:52.799148 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 9 21:55:52.799159 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 9 21:55:52.799170 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 9 21:55:52.799181 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 9 21:55:52.799197 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 9 21:55:52.799208 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 9 21:55:52.799219 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 21:55:52.799230 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 21:55:52.799240 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 9 21:55:52.799251 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 21:55:52.799293 kernel: efi: EFI v2.7 by EDK II
Sep 9 21:55:52.799304 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 9 21:55:52.799315 kernel: random: crng init done
Sep 9 21:55:52.799329 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 9 21:55:52.799340 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 9 21:55:52.799376 kernel: secureboot: Secure boot disabled
Sep 9 21:55:52.799387 kernel: SMBIOS 2.8 present.
Sep 9 21:55:52.799397 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 9 21:55:52.799408 kernel: DMI: Memory slots populated: 1/1
Sep 9 21:55:52.799417 kernel: Hypervisor detected: KVM
Sep 9 21:55:52.799428 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 21:55:52.799438 kernel: kvm-clock: using sched offset of 11798041078 cycles
Sep 9 21:55:52.799450 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 21:55:52.799460 kernel: tsc: Detected 2794.748 MHz processor
Sep 9 21:55:52.799471 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 21:55:52.799482 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 21:55:52.799497 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 9 21:55:52.799508 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 9 21:55:52.799519 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 21:55:52.799530 kernel: Using GB pages for direct mapping
Sep 9 21:55:52.799541 kernel: ACPI: Early table checksum verification disabled
Sep 9 21:55:52.799552 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 9 21:55:52.799563 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 21:55:52.799575 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:55:52.799586 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:55:52.799806 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 9 21:55:52.799821 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:55:52.799832 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:55:52.799843 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:55:52.799854 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:55:52.799866 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 9 21:55:52.799878 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 9 21:55:52.799889 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 9 21:55:52.799906 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 9 21:55:52.799917 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 9 21:55:52.799927 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 9 21:55:52.799938 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 9 21:55:52.799948 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 9 21:55:52.799959 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 9 21:55:52.799969 kernel: No NUMA configuration found
Sep 9 21:55:52.799980 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 9 21:55:52.799990 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 9 21:55:52.800001 kernel: Zone ranges:
Sep 9 21:55:52.800016 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 21:55:52.800027 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 9 21:55:52.800038 kernel: Normal empty
Sep 9 21:55:52.800048 kernel: Device empty
Sep 9 21:55:52.800059 kernel: Movable zone start for each node
Sep 9 21:55:52.800069 kernel: Early memory node ranges
Sep 9 21:55:52.800079 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 9 21:55:52.800090 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 9 21:55:52.800106 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 9 21:55:52.800121 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 9 21:55:52.800132 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 9 21:55:52.800874 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 9 21:55:52.800889 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 9 21:55:52.800901 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 9 21:55:52.800912 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 9 21:55:52.800924 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 21:55:52.800940 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 9 21:55:52.800967 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 9 21:55:52.800978 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 21:55:52.800988 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 9 21:55:52.801000 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 9 21:55:52.801014 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 9 21:55:52.801025 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 9 21:55:52.801037 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 9 21:55:52.801048 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 21:55:52.801060 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 21:55:52.801075 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 21:55:52.801087 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 21:55:52.801098 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 21:55:52.801110 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 21:55:52.801121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 21:55:52.801133 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 21:55:52.801145 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 21:55:52.801156 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 9 21:55:52.801384 kernel: TSC deadline timer available
Sep 9 21:55:52.801406 kernel: CPU topo: Max. logical packages: 1
Sep 9 21:55:52.801418 kernel: CPU topo: Max. logical dies: 1
Sep 9 21:55:52.801431 kernel: CPU topo: Max. dies per package: 1
Sep 9 21:55:52.801442 kernel: CPU topo: Max. threads per core: 1
Sep 9 21:55:52.801454 kernel: CPU topo: Num. cores per package: 4
Sep 9 21:55:52.801466 kernel: CPU topo: Num. threads per package: 4
Sep 9 21:55:52.801477 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 9 21:55:52.801489 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 21:55:52.801501 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 9 21:55:52.801517 kernel: kvm-guest: setup PV sched yield
Sep 9 21:55:52.801529 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 9 21:55:52.801541 kernel: Booting paravirtualized kernel on KVM
Sep 9 21:55:52.801553 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 21:55:52.801566 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 9 21:55:52.801578 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 9 21:55:52.801590 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 9 21:55:52.801602 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 9 21:55:52.801614 kernel: kvm-guest: PV spinlocks enabled
Sep 9 21:55:52.801630 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 21:55:52.801643 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f0ebd120fc09fb344715b1492c3f1d02e1457be2c9792ea5ffb3fe4b15efa812
Sep 9 21:55:52.801672 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 21:55:52.801684 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 21:55:52.801696 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 21:55:52.801707 kernel: Fallback order for Node 0: 0
Sep 9 21:55:52.801719 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 9 21:55:52.801731 kernel: Policy zone: DMA32
Sep 9 21:55:52.801748 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 21:55:52.801761 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 21:55:52.801772 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 9 21:55:52.801783 kernel: ftrace: allocated 157 pages with 5 groups
Sep 9 21:55:52.801794 kernel: Dynamic Preempt: voluntary
Sep 9 21:55:52.801806 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 21:55:52.801819 kernel: rcu: RCU event tracing is enabled.
Sep 9 21:55:52.801831 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 21:55:52.801842 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 21:55:52.801857 kernel: Rude variant of Tasks RCU enabled.
Sep 9 21:55:52.801868 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 21:55:52.801880 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 21:55:52.801896 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 21:55:52.801907 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 21:55:52.801920 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 21:55:52.801931 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 21:55:52.801943 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 9 21:55:52.802991 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 21:55:52.803015 kernel: Console: colour dummy device 80x25
Sep 9 21:55:52.803027 kernel: printk: legacy console [ttyS0] enabled
Sep 9 21:55:52.803040 kernel: ACPI: Core revision 20240827
Sep 9 21:55:52.803052 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 9 21:55:52.803063 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 21:55:52.803075 kernel: x2apic enabled
Sep 9 21:55:52.803086 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 21:55:52.803129 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 9 21:55:52.803141 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 9 21:55:52.803158 kernel: kvm-guest: setup PV IPIs
Sep 9 21:55:52.803170 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 9 21:55:52.803182 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 21:55:52.803193 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 9 21:55:52.803205 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 9 21:55:52.803217 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 9 21:55:52.803228 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 9 21:55:52.803239 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 21:55:52.803251 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 21:55:52.803266 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 21:55:52.803277 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 9 21:55:52.803289 kernel: active return thunk: retbleed_return_thunk
Sep 9 21:55:52.803300 kernel: RETBleed: Mitigation: untrained return thunk
Sep 9 21:55:52.803315 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 21:55:52.803328 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 21:55:52.803339 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 9 21:55:52.803367 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 9 21:55:52.803383 kernel: active return thunk: srso_return_thunk
Sep 9 21:55:52.803395 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 9 21:55:52.803407 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 21:55:52.803419 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 21:55:52.803430 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 21:55:52.803441 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 21:55:52.804000 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 9 21:55:52.804012 kernel: Freeing SMP alternatives memory: 32K
Sep 9 21:55:52.804024 kernel: pid_max: default: 32768 minimum: 301
Sep 9 21:55:52.804042 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 21:55:52.804054 kernel: landlock: Up and running.
Sep 9 21:55:52.804067 kernel: SELinux: Initializing.
Sep 9 21:55:52.804079 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 21:55:52.804091 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 21:55:52.804102 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 9 21:55:52.804508 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 9 21:55:52.804528 kernel: ... version: 0
Sep 9 21:55:52.804540 kernel: ... bit width: 48
Sep 9 21:55:52.804558 kernel: ... generic registers: 6
Sep 9 21:55:52.804570 kernel: ... value mask: 0000ffffffffffff
Sep 9 21:55:52.804582 kernel: ... max period: 00007fffffffffff
Sep 9 21:55:52.804594 kernel: ... fixed-purpose events: 0
Sep 9 21:55:52.804606 kernel: ... event mask: 000000000000003f
Sep 9 21:55:52.804617 kernel: signal: max sigframe size: 1776
Sep 9 21:55:52.804664 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 21:55:52.804678 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 21:55:52.804695 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 21:55:52.804713 kernel: smp: Bringing up secondary CPUs ...
Sep 9 21:55:52.804742 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 21:55:52.804754 kernel: .... node #0, CPUs: #1 #2 #3
Sep 9 21:55:52.804765 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 21:55:52.804777 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 9 21:55:52.804789 kernel: Memory: 2422672K/2565800K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54092K init, 2876K bss, 137200K reserved, 0K cma-reserved)
Sep 9 21:55:52.804800 kernel: devtmpfs: initialized
Sep 9 21:55:52.804811 kernel: x86/mm: Memory block size: 128MB
Sep 9 21:55:52.804823 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 9 21:55:52.804839 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 9 21:55:52.804850 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 9 21:55:52.804862 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 9 21:55:52.804874 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 9 21:55:52.804886 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 9 21:55:52.804897 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 21:55:52.804907 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 21:55:52.804919 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 21:55:52.804930 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 21:55:52.804945 kernel: audit: initializing netlink subsys (disabled)
Sep 9 21:55:52.804957 kernel: audit: type=2000 audit(1757454941.930:1): state=initialized audit_enabled=0 res=1
Sep 9 21:55:52.804968 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 21:55:52.804979 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 21:55:52.804989 kernel: cpuidle: using governor menu
Sep 9 21:55:52.805000 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 21:55:52.805010 kernel: dca service started, version 1.12.1
Sep 9 21:55:52.805022 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 9 21:55:52.805033 kernel: PCI: Using configuration type 1 for base access
Sep 9 21:55:52.805047 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 21:55:52.805058 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 21:55:52.805069 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 21:55:52.805080 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 21:55:52.805091 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 21:55:52.805103 kernel: ACPI: Added _OSI(Module Device)
Sep 9 21:55:52.805114 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 21:55:52.805125 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 21:55:52.805136 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 21:55:52.805150 kernel: ACPI: Interpreter enabled
Sep 9 21:55:52.805161 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 9 21:55:52.805171 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 21:55:52.805182 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 21:55:52.805194 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 21:55:52.805205 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 9 21:55:52.805216 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 21:55:52.805623 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 21:55:52.806909 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 9 21:55:52.807092 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 9 21:55:52.807110 kernel: PCI host bridge to bus 0000:00
Sep 9 21:55:52.807303 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 21:55:52.807484 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 21:55:52.807643 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 21:55:52.810692 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 9 21:55:52.810872 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 9 21:55:52.811407 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 9 21:55:52.811534 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 21:55:52.811772 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 9 21:55:52.811931 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 9 21:55:52.812063 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 9 21:55:52.812197 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 9 21:55:52.812326 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 9 21:55:52.812476 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 21:55:52.812638 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 21:55:52.814888 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 9 21:55:52.815075 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 9 21:55:52.815246 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 9 21:55:52.815495 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 9 21:55:52.815679 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 9 21:55:52.815826 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 9 21:55:52.815957 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 9 21:55:52.816109 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 21:55:52.816241 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 9 21:55:52.816387 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 9 21:55:52.816525 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 9 21:55:52.818718 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 9 21:55:52.818919 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 9 21:55:52.819057 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 9 21:55:52.819211 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 9 21:55:52.819343 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 9 21:55:52.819491 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 9 21:55:52.819649 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 9 21:55:52.819791 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 9 21:55:52.819804 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 21:55:52.819813 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 21:55:52.819823 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 21:55:52.819832 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 21:55:52.819842 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 9 21:55:52.819855 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 9 21:55:52.819864 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 9 21:55:52.819874 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 9 21:55:52.819883 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 9 21:55:52.819893 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 9 21:55:52.819902 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 9 21:55:52.819912 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 9 21:55:52.819922 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 9 21:55:52.819932 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 9 21:55:52.819944 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 9 21:55:52.819953 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 9 21:55:52.819962 kernel: iommu: Default domain type: Translated
Sep 9 21:55:52.819972 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 21:55:52.819981 kernel: efivars: Registered efivars operations
Sep 9 21:55:52.819991 kernel: PCI: Using ACPI for IRQ routing
Sep 9 21:55:52.820000 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 21:55:52.820010 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 9 21:55:52.820020 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 9 21:55:52.820029 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 9 21:55:52.820040 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 9 21:55:52.820050 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 9 21:55:52.820059 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 9 21:55:52.820068 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 9 21:55:52.820078 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 9 21:55:52.820210 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 9 21:55:52.820340 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 9 21:55:52.820504 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 21:55:52.820519 kernel: vgaarb: loaded
Sep 9 21:55:52.820528 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 9 21:55:52.820538 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 9 21:55:52.820547 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 21:55:52.820557 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 21:55:52.820567 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 21:55:52.820577 kernel: pnp: PnP ACPI init
Sep 9 21:55:52.822866 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 9 21:55:52.822898 kernel: pnp: PnP ACPI: found 6 devices
Sep 9 21:55:52.822910 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 21:55:52.822920 kernel: NET: Registered PF_INET protocol family
Sep 9 21:55:52.822930 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 21:55:52.822940 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 21:55:52.822950 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 21:55:52.822961 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 21:55:52.822971 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 21:55:52.822984 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 21:55:52.822995 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 21:55:52.823005 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 21:55:52.823015 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 21:55:52.823026 kernel: NET: Registered PF_XDP protocol family
Sep 9 21:55:52.823181 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 9 21:55:52.823316 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 9 21:55:52.823475 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 21:55:52.823624 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 21:55:52.823872 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 21:55:52.823991 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 9 21:55:52.824423 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 9 21:55:52.824546 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 9 21:55:52.824559 kernel: PCI: CLS 0 bytes, default 64
Sep 9 21:55:52.824569 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 21:55:52.824579 kernel: Initialise system trusted keyrings
Sep 9 21:55:52.824594 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 21:55:52.824604 kernel: Key type asymmetric registered
Sep 9 21:55:52.824613 kernel: Asymmetric key parser 'x509' registered
Sep 9 21:55:52.824623 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 21:55:52.824633 kernel: io scheduler mq-deadline registered
Sep 9 21:55:52.824643 kernel: io scheduler kyber registered
Sep 9 21:55:52.824653 kernel: io scheduler bfq registered
Sep 9 21:55:52.826722 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 21:55:52.826734 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 9 21:55:52.826745 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 9 21:55:52.826755 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 9 21:55:52.826764 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 21:55:52.826775 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 21:55:52.826785 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 9 21:55:52.826795 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 21:55:52.826805 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 21:55:52.826990 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 9 21:55:52.827117 kernel: rtc_cmos 00:04: registered as rtc0
Sep 9 21:55:52.827240 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T21:55:51 UTC (1757454951)
Sep 9 21:55:52.827253 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 9 21:55:52.827389 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 9 21:55:52.827402 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 9 21:55:52.827413 kernel: efifb: probing for efifb
Sep 9 21:55:52.827426 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 9 21:55:52.827436 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 9 21:55:52.827446 kernel: efifb: scrolling: redraw
Sep 9 21:55:52.827456 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 9 21:55:52.827465 kernel: Console: switching to colour frame buffer device 160x50
Sep 9 21:55:52.827475 kernel: fb0: EFI VGA frame buffer device
Sep 9 21:55:52.827485 kernel: pstore: Using crash dump compression: deflate
Sep 9 21:55:52.827495 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 9 21:55:52.827505 kernel: NET: Registered PF_INET6 protocol family
Sep 9 21:55:52.827515 kernel: Segment Routing with IPv6
Sep 9 21:55:52.827527 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 21:55:52.827536 kernel: NET: Registered PF_PACKET protocol family
Sep 9 21:55:52.827546 kernel: Key type dns_resolver registered
Sep 9 21:55:52.827556 kernel: IPI shorthand broadcast: enabled
Sep 9 21:55:52.827566 kernel: sched_clock: Marking stable (10588008635, 298769336)->(11093761362, -206983391)
Sep 9 21:55:52.827576 kernel: registered taskstats version 1
Sep 9 21:55:52.827586 kernel: Loading compiled-in X.509 certificates
Sep 9 21:55:52.827596 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 003b39862f2a560eb5545d7d88a07fc5bdfce075'
Sep 9 21:55:52.827606 kernel: Demotion targets for Node 0: null
Sep 9 21:55:52.827618 kernel: Key type .fscrypt registered
Sep 9 21:55:52.827627 kernel: Key type fscrypt-provisioning registered
Sep 9 21:55:52.827637 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 21:55:52.827647 kernel: ima: Allocated hash algorithm: sha1 Sep 9 21:55:52.827665 kernel: ima: No architecture policies found Sep 9 21:55:52.827675 kernel: clk: Disabling unused clocks Sep 9 21:55:52.827684 kernel: Warning: unable to open an initial console. Sep 9 21:55:52.827695 kernel: Freeing unused kernel image (initmem) memory: 54092K Sep 9 21:55:52.827707 kernel: Write protecting the kernel read-only data: 24576k Sep 9 21:55:52.827717 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K Sep 9 21:55:52.827727 kernel: Run /init as init process Sep 9 21:55:52.827736 kernel: with arguments: Sep 9 21:55:52.827747 kernel: /init Sep 9 21:55:52.827756 kernel: with environment: Sep 9 21:55:52.827766 kernel: HOME=/ Sep 9 21:55:52.827776 kernel: TERM=linux Sep 9 21:55:52.827785 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 21:55:52.827796 systemd[1]: Successfully made /usr/ read-only. Sep 9 21:55:52.827812 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 21:55:52.827824 systemd[1]: Detected virtualization kvm. Sep 9 21:55:52.827834 systemd[1]: Detected architecture x86-64. Sep 9 21:55:52.827844 systemd[1]: Running in initrd. Sep 9 21:55:52.827855 systemd[1]: No hostname configured, using default hostname. Sep 9 21:55:52.827865 systemd[1]: Hostname set to . Sep 9 21:55:52.827878 systemd[1]: Initializing machine ID from VM UUID. Sep 9 21:55:52.827888 systemd[1]: Queued start job for default target initrd.target. Sep 9 21:55:52.827899 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 9 21:55:52.827910 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 21:55:52.827921 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 21:55:52.827931 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 21:55:52.827941 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 21:55:52.827953 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 21:55:52.827967 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 21:55:52.827978 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 21:55:52.827989 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 21:55:52.828002 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 21:55:52.828013 systemd[1]: Reached target paths.target - Path Units. Sep 9 21:55:52.828023 systemd[1]: Reached target slices.target - Slice Units. Sep 9 21:55:52.828033 systemd[1]: Reached target swap.target - Swaps. Sep 9 21:55:52.828044 systemd[1]: Reached target timers.target - Timer Units. Sep 9 21:55:52.828057 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 21:55:52.828067 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 21:55:52.828080 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 21:55:52.828090 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 21:55:52.828103 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 21:55:52.828113 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 9 21:55:52.828124 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 21:55:52.828134 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 21:55:52.828147 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 21:55:52.828157 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 21:55:52.828167 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 21:55:52.828178 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 21:55:52.828189 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 21:55:52.828199 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 21:55:52.828210 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 21:55:52.828220 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:55:52.828243 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 21:55:52.828258 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 21:55:52.828269 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 21:55:52.828279 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 21:55:52.828332 systemd-journald[222]: Collecting audit messages is disabled. Sep 9 21:55:52.828373 systemd-journald[222]: Journal started Sep 9 21:55:52.828396 systemd-journald[222]: Runtime Journal (/run/log/journal/ee854084c086480d8ac9c916ebe6f475) is 6M, max 48.4M, 42.4M free. Sep 9 21:55:52.866754 systemd-modules-load[223]: Inserted module 'overlay' Sep 9 21:55:52.892419 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:55:52.892463 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 9 21:55:52.912932 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 21:55:52.934044 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 21:55:52.949592 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 21:55:52.972399 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 21:55:52.997150 systemd-tmpfiles[235]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 21:55:53.009130 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 21:55:53.044365 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 21:55:53.046110 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 21:55:53.057596 kernel: Bridge firewalling registered Sep 9 21:55:53.058607 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 21:55:53.061744 systemd-modules-load[223]: Inserted module 'br_netfilter' Sep 9 21:55:53.066898 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 21:55:53.086931 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 21:55:53.095519 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 21:55:53.194244 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 21:55:53.226613 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 9 21:55:53.246519 dracut-cmdline[256]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f0ebd120fc09fb344715b1492c3f1d02e1457be2c9792ea5ffb3fe4b15efa812 Sep 9 21:55:53.437925 systemd-resolved[273]: Positive Trust Anchors: Sep 9 21:55:53.445213 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 21:55:53.446723 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 21:55:53.450612 systemd-resolved[273]: Defaulting to hostname 'linux'. Sep 9 21:55:53.457927 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 21:55:53.469119 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 21:55:53.558431 kernel: SCSI subsystem initialized Sep 9 21:55:53.580412 kernel: Loading iSCSI transport class v2.0-870. Sep 9 21:55:53.610155 kernel: iscsi: registered transport (tcp) Sep 9 21:55:53.665695 kernel: iscsi: registered transport (qla4xxx) Sep 9 21:55:53.665792 kernel: QLogic iSCSI HBA Driver Sep 9 21:55:53.722439 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 9 21:55:53.772711 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 21:55:53.777960 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 21:55:53.979300 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 21:55:53.985516 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 21:55:54.117420 kernel: raid6: avx2x4 gen() 13435 MB/s Sep 9 21:55:54.134418 kernel: raid6: avx2x2 gen() 19933 MB/s Sep 9 21:55:54.154781 kernel: raid6: avx2x1 gen() 12870 MB/s Sep 9 21:55:54.154872 kernel: raid6: using algorithm avx2x2 gen() 19933 MB/s Sep 9 21:55:54.177227 kernel: raid6: .... xor() 8958 MB/s, rmw enabled Sep 9 21:55:54.177320 kernel: raid6: using avx2x2 recovery algorithm Sep 9 21:55:54.236426 kernel: xor: automatically using best checksumming function avx Sep 9 21:55:54.764887 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 21:55:54.794957 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 21:55:54.807191 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 21:55:54.891857 systemd-udevd[474]: Using default interface naming scheme 'v255'. Sep 9 21:55:54.907667 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 21:55:54.912485 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 21:55:54.972217 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation Sep 9 21:55:55.061311 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 21:55:55.068853 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 21:55:55.217067 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 21:55:55.224133 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 9 21:55:55.288390 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 21:55:55.301403 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 9 21:55:55.307418 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 21:55:55.315734 kernel: AES CTR mode by8 optimization enabled Sep 9 21:55:55.315811 kernel: libata version 3.00 loaded. Sep 9 21:55:55.316219 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 21:55:55.317833 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:55:55.321108 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:55:55.333400 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 9 21:55:55.346815 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:55:55.465301 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 21:55:55.465339 kernel: GPT:9289727 != 19775487 Sep 9 21:55:55.465378 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 21:55:55.465405 kernel: GPT:9289727 != 19775487 Sep 9 21:55:55.465420 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 21:55:55.465435 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 21:55:55.365469 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 21:55:55.471312 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 21:55:55.471509 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:55:55.487817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 9 21:55:55.537723 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 21:55:55.547919 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 21:55:55.555093 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 9 21:55:55.555723 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 9 21:55:55.556155 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 21:55:55.563429 kernel: scsi host0: ahci Sep 9 21:55:55.565482 kernel: scsi host1: ahci Sep 9 21:55:55.568428 kernel: scsi host2: ahci Sep 9 21:55:55.571373 kernel: scsi host3: ahci Sep 9 21:55:55.573625 kernel: scsi host4: ahci Sep 9 21:55:55.573864 kernel: scsi host5: ahci Sep 9 21:55:55.577730 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 9 21:55:55.577807 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 9 21:55:55.577823 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 9 21:55:55.577836 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 9 21:55:55.577850 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 9 21:55:55.577864 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 9 21:55:55.591584 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:55:55.624058 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 21:55:55.640000 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 21:55:55.657484 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 21:55:55.660324 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Sep 9 21:55:55.687271 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 21:55:55.708904 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 21:55:55.754381 disk-uuid[637]: Primary Header is updated. Sep 9 21:55:55.754381 disk-uuid[637]: Secondary Entries is updated. Sep 9 21:55:55.754381 disk-uuid[637]: Secondary Header is updated. Sep 9 21:55:55.767894 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 21:55:55.886415 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 21:55:55.886486 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 9 21:55:55.901335 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 21:55:55.901434 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 21:55:55.901471 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 9 21:55:55.901487 kernel: ata3.00: applying bridge limits Sep 9 21:55:55.901502 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 21:55:55.901517 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 21:55:55.905629 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 21:55:55.905701 kernel: ata3.00: configured for UDMA/100 Sep 9 21:55:55.908708 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 21:55:55.914421 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 9 21:55:56.051848 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 9 21:55:56.055499 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 21:55:56.078162 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 9 21:55:56.644696 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 21:55:56.653224 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 21:55:56.654938 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 9 21:55:56.660877 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 21:55:56.667531 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 21:55:56.737098 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 21:55:56.852446 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 21:55:56.857700 disk-uuid[638]: The operation has completed successfully. Sep 9 21:55:56.985695 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 21:55:56.985865 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 21:55:57.053863 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 21:55:57.109428 sh[668]: Success Sep 9 21:55:57.159111 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 21:55:57.159214 kernel: device-mapper: uevent: version 1.0.3 Sep 9 21:55:57.159234 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 21:55:57.224619 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 9 21:55:57.362061 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 21:55:57.375721 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 21:55:57.380124 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 9 21:55:57.425329 kernel: BTRFS: device fsid f72d0a81-8b28-47a3-b3ab-bf6ecd8938f0 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (680) Sep 9 21:55:57.433656 kernel: BTRFS info (device dm-0): first mount of filesystem f72d0a81-8b28-47a3-b3ab-bf6ecd8938f0 Sep 9 21:55:57.433744 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 21:55:57.492593 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 21:55:57.492725 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 21:55:57.497828 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 21:55:57.504479 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 21:55:57.511299 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 21:55:57.520898 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 21:55:57.523884 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 21:55:57.585572 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (714) Sep 9 21:55:57.590816 kernel: BTRFS info (device vda6): first mount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 21:55:57.590893 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 21:55:57.606456 kernel: BTRFS info (device vda6): turning on async discard Sep 9 21:55:57.606559 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 21:55:57.619271 kernel: BTRFS info (device vda6): last unmount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 21:55:57.626993 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 21:55:57.633588 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 9 21:55:57.985328 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 21:55:58.018213 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 21:55:58.504271 ignition[766]: Ignition 2.22.0 Sep 9 21:55:58.505548 ignition[766]: Stage: fetch-offline Sep 9 21:55:58.505611 ignition[766]: no configs at "/usr/lib/ignition/base.d" Sep 9 21:55:58.505622 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 21:55:58.505834 ignition[766]: parsed url from cmdline: "" Sep 9 21:55:58.508993 systemd-networkd[849]: lo: Link UP Sep 9 21:55:58.505841 ignition[766]: no config URL provided Sep 9 21:55:58.508999 systemd-networkd[849]: lo: Gained carrier Sep 9 21:55:58.505854 ignition[766]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 21:55:58.505871 ignition[766]: no config at "/usr/lib/ignition/user.ign" Sep 9 21:55:58.505912 ignition[766]: op(1): [started] loading QEMU firmware config module Sep 9 21:55:58.505932 ignition[766]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 21:55:58.520209 systemd-networkd[849]: Enumeration completed Sep 9 21:55:58.520410 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 21:55:58.524188 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 21:55:58.524193 systemd-networkd[849]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 21:55:58.537975 systemd-networkd[849]: eth0: Link UP Sep 9 21:55:58.545903 systemd[1]: Reached target network.target - Network. Sep 9 21:55:58.547949 systemd-networkd[849]: eth0: Gained carrier Sep 9 21:55:58.547972 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 9 21:55:58.636630 systemd-networkd[849]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 21:55:58.676784 ignition[766]: op(1): [finished] loading QEMU firmware config module Sep 9 21:55:58.676841 ignition[766]: QEMU firmware config was not found. Ignoring... Sep 9 21:55:58.757285 ignition[766]: parsing config with SHA512: d9dbb14f18a0e338fc3bb14208a5152d8e88aa0902091f24a811c937cbee112fe97f699e49a0316977e78c38036514be3e803da278c995d3aaaacabcd2ff0555 Sep 9 21:55:58.766824 unknown[766]: fetched base config from "system" Sep 9 21:55:58.767319 ignition[766]: fetch-offline: fetch-offline passed Sep 9 21:55:58.766852 unknown[766]: fetched user config from "qemu" Sep 9 21:55:58.767448 ignition[766]: Ignition finished successfully Sep 9 21:55:58.780891 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 21:55:58.803043 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 21:55:58.811065 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 21:55:59.044270 ignition[862]: Ignition 2.22.0 Sep 9 21:55:59.046881 ignition[862]: Stage: kargs Sep 9 21:55:59.052984 ignition[862]: no configs at "/usr/lib/ignition/base.d" Sep 9 21:55:59.053002 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 21:55:59.054919 ignition[862]: kargs: kargs passed Sep 9 21:55:59.063631 ignition[862]: Ignition finished successfully Sep 9 21:55:59.080203 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 21:55:59.094903 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 9 21:55:59.176631 ignition[870]: Ignition 2.22.0 Sep 9 21:55:59.177084 ignition[870]: Stage: disks Sep 9 21:55:59.178410 ignition[870]: no configs at "/usr/lib/ignition/base.d" Sep 9 21:55:59.178426 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 21:55:59.180689 ignition[870]: disks: disks passed Sep 9 21:55:59.180772 ignition[870]: Ignition finished successfully Sep 9 21:55:59.197831 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 21:55:59.201732 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 21:55:59.206599 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 21:55:59.212724 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 21:55:59.217519 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 21:55:59.221438 systemd[1]: Reached target basic.target - Basic System. Sep 9 21:55:59.239120 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 21:55:59.340001 systemd-fsck[879]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 21:55:59.368808 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 21:55:59.388229 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 21:55:59.768560 systemd-networkd[849]: eth0: Gained IPv6LL Sep 9 21:55:59.879399 kernel: EXT4-fs (vda9): mounted filesystem b54acc07-9600-49db-baed-d5fd6f41a1a5 r/w with ordered data mode. Quota mode: none. Sep 9 21:55:59.881046 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 21:55:59.884990 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 21:55:59.904225 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 21:55:59.907653 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Sep 9 21:55:59.908968 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 21:55:59.910558 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 21:55:59.910613 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 21:55:59.937236 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 21:55:59.943838 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 21:55:59.958246 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (887) Sep 9 21:55:59.968724 kernel: BTRFS info (device vda6): first mount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 21:55:59.968815 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 21:55:59.987423 kernel: BTRFS info (device vda6): turning on async discard Sep 9 21:55:59.987527 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 21:55:59.990729 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 21:56:00.086735 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 21:56:00.106377 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory Sep 9 21:56:00.120867 initrd-setup-root[925]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 21:56:00.135321 initrd-setup-root[932]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 21:56:00.511954 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 21:56:00.526736 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 21:56:00.561630 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 21:56:00.597569 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 9 21:56:00.606746 kernel: BTRFS info (device vda6): last unmount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 21:56:00.669089 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 21:56:00.697611 ignition[1001]: INFO : Ignition 2.22.0 Sep 9 21:56:00.697611 ignition[1001]: INFO : Stage: mount Sep 9 21:56:00.701939 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 21:56:00.701939 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 21:56:00.701939 ignition[1001]: INFO : mount: mount passed Sep 9 21:56:00.701939 ignition[1001]: INFO : Ignition finished successfully Sep 9 21:56:00.715939 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 21:56:00.723069 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 21:56:00.894489 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 21:56:00.969509 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014) Sep 9 21:56:00.975094 kernel: BTRFS info (device vda6): first mount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 21:56:00.975164 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 21:56:00.995016 kernel: BTRFS info (device vda6): turning on async discard Sep 9 21:56:00.995112 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 21:56:01.006995 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 21:56:01.102614 ignition[1031]: INFO : Ignition 2.22.0
Sep 9 21:56:01.102614 ignition[1031]: INFO : Stage: files
Sep 9 21:56:01.102614 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 21:56:01.102614 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 21:56:01.115849 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 21:56:01.115849 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 21:56:01.115849 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 21:56:01.128106 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 21:56:01.130785 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 21:56:01.143035 unknown[1031]: wrote ssh authorized keys file for user: core
Sep 9 21:56:01.144963 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 21:56:01.155832 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 9 21:56:01.155832 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 9 21:56:01.259653 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 21:56:02.050815 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 9 21:56:02.058052 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 21:56:02.058052 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 21:56:02.058052 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 21:56:02.058052 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 21:56:02.058052 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 21:56:02.058052 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 21:56:02.058052 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 21:56:02.058052 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 21:56:02.144657 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 21:56:02.163555 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 21:56:02.163555 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 21:56:02.207486 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 21:56:02.207486 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 21:56:02.236690 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 9 21:56:02.775651 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 9 21:56:05.427582 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 21:56:05.427582 ignition[1031]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 9 21:56:05.439706 ignition[1031]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 21:56:05.560081 ignition[1031]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 21:56:05.560081 ignition[1031]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 9 21:56:05.560081 ignition[1031]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 9 21:56:05.577859 ignition[1031]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 21:56:05.577859 ignition[1031]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 21:56:05.577859 ignition[1031]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 9 21:56:05.577859 ignition[1031]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 21:56:05.791605 ignition[1031]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 21:56:05.832008 ignition[1031]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 21:56:05.832008 ignition[1031]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 21:56:05.832008 ignition[1031]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 21:56:05.832008 ignition[1031]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 21:56:05.832008 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 21:56:05.832008 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 21:56:05.832008 ignition[1031]: INFO : files: files passed
Sep 9 21:56:05.832008 ignition[1031]: INFO : Ignition finished successfully
Sep 9 21:56:05.843403 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 21:56:05.850375 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 21:56:05.863931 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 21:56:05.905223 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 21:56:05.905546 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 21:56:05.925795 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 21:56:05.943391 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 21:56:05.943391 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 21:56:05.954550 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 21:56:05.964654 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 21:56:05.973524 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 21:56:05.997555 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 21:56:06.179185 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 21:56:06.183821 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 21:56:06.204698 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 21:56:06.227238 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 21:56:06.250853 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 21:56:06.258671 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 21:56:06.358341 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 21:56:06.388063 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 21:56:06.451646 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 21:56:06.454590 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 21:56:06.460625 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 21:56:06.468549 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 21:56:06.468793 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 21:56:06.470600 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 21:56:06.471973 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 21:56:06.473158 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 21:56:06.485341 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 21:56:06.542413 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 21:56:06.557098 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 21:56:06.571563 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 21:56:06.585748 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 21:56:06.595301 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 21:56:06.597081 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 21:56:06.609304 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 21:56:06.650528 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 21:56:06.651120 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 21:56:06.681004 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 21:56:06.690710 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 21:56:06.708654 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 21:56:06.708944 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 21:56:06.719638 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 21:56:06.719879 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 21:56:06.793584 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 21:56:06.793866 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 21:56:06.804916 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 21:56:06.807129 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 21:56:06.807547 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 21:56:06.856229 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 21:56:06.860275 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 21:56:06.867238 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 21:56:06.868935 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 21:56:06.882847 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 21:56:06.883007 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 21:56:06.883231 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 21:56:06.883430 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 21:56:06.883621 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 21:56:06.883760 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 21:56:06.885508 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 21:56:06.885580 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 21:56:06.885737 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 21:56:06.886868 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 21:56:06.886940 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 21:56:06.887095 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 21:56:06.887273 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 21:56:06.887430 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 21:56:06.945036 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 21:56:06.948330 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 21:56:07.047747 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 21:56:07.170263 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 21:56:07.172308 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 21:56:07.208022 ignition[1087]: INFO : Ignition 2.22.0
Sep 9 21:56:07.208022 ignition[1087]: INFO : Stage: umount
Sep 9 21:56:07.215264 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 21:56:07.215264 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 21:56:07.215264 ignition[1087]: INFO : umount: umount passed
Sep 9 21:56:07.237107 ignition[1087]: INFO : Ignition finished successfully
Sep 9 21:56:07.262787 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 21:56:07.273648 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 21:56:07.287929 systemd[1]: Stopped target network.target - Network.
Sep 9 21:56:07.298640 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 21:56:07.299342 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 21:56:07.329674 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 21:56:07.330530 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 21:56:07.364286 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 21:56:07.366560 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 21:56:07.384189 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 21:56:07.385656 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 21:56:07.410146 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 21:56:07.410305 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 21:56:07.424898 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 21:56:07.433769 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 21:56:07.478658 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 21:56:07.478852 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 21:56:07.496389 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 21:56:07.496783 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 21:56:07.499584 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 21:56:07.520745 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 21:56:07.525184 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 21:56:07.529167 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 21:56:07.529742 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 21:56:07.538031 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 21:56:07.558468 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 21:56:07.559741 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 21:56:07.561385 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 21:56:07.561739 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 21:56:07.577819 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 21:56:07.577947 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 21:56:07.579832 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 21:56:07.579913 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 21:56:07.591131 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 21:56:07.596928 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 21:56:07.598431 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 21:56:07.612616 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 21:56:07.617602 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 21:56:07.624881 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 21:56:07.624947 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 21:56:07.630435 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 21:56:07.630500 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 21:56:07.634334 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 21:56:07.634452 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 21:56:07.641807 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 21:56:07.641921 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 21:56:07.647168 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 21:56:07.647282 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 21:56:07.660496 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 21:56:07.674961 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 21:56:07.675113 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 21:56:07.677569 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 21:56:07.677641 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 21:56:07.700807 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 21:56:07.700909 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 21:56:07.763011 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 21:56:07.763095 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 21:56:07.763156 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 21:56:07.765801 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 21:56:07.766155 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 21:56:07.776879 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 21:56:07.777029 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 21:56:07.801839 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 21:56:07.840623 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 21:56:07.882001 systemd[1]: Switching root.
Sep 9 21:56:07.957256 systemd-journald[222]: Journal stopped
Sep 9 21:56:11.490158 systemd-journald[222]: Received SIGTERM from PID 1 (systemd).
Sep 9 21:56:11.490255 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 21:56:11.490277 kernel: SELinux: policy capability open_perms=1
Sep 9 21:56:11.490294 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 21:56:11.490310 kernel: SELinux: policy capability always_check_network=0
Sep 9 21:56:11.490330 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 21:56:11.490369 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 21:56:11.490387 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 21:56:11.490404 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 21:56:11.490420 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 21:56:11.490436 kernel: audit: type=1403 audit(1757454968.680:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 21:56:11.490454 systemd[1]: Successfully loaded SELinux policy in 118.260ms.
Sep 9 21:56:11.490490 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.376ms.
Sep 9 21:56:11.490511 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 21:56:11.490530 systemd[1]: Detected virtualization kvm.
Sep 9 21:56:11.490551 systemd[1]: Detected architecture x86-64.
Sep 9 21:56:11.490575 systemd[1]: Detected first boot.
Sep 9 21:56:11.490593 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 21:56:11.490611 zram_generator::config[1134]: No configuration found.
Sep 9 21:56:11.490695 kernel: Guest personality initialized and is inactive
Sep 9 21:56:11.490715 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 9 21:56:11.490731 kernel: Initialized host personality
Sep 9 21:56:11.490747 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 21:56:11.490770 systemd[1]: Populated /etc with preset unit settings.
Sep 9 21:56:11.490788 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 21:56:11.490818 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 21:56:11.490834 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 21:56:11.490851 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 21:56:11.490871 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 21:56:11.490888 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 21:56:11.490911 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 21:56:11.490928 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 21:56:11.490951 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 21:56:11.490968 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 21:56:11.490985 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 21:56:11.491002 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 21:56:11.491018 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 21:56:11.491036 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 21:56:11.491053 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 21:56:11.491070 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 21:56:11.491091 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 21:56:11.491108 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 21:56:11.491138 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 21:56:11.491155 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 21:56:11.491172 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 21:56:11.491188 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 21:56:11.491205 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 21:56:11.491222 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 21:56:11.491242 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 21:56:11.491258 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 21:56:11.491275 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 21:56:11.491292 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 21:56:11.491308 systemd[1]: Reached target swap.target - Swaps.
Sep 9 21:56:11.491325 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 21:56:11.491342 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 21:56:11.491377 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 21:56:11.491393 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 21:56:11.491414 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 21:56:11.491431 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 21:56:11.491448 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 21:56:11.491464 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 21:56:11.491481 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 21:56:11.491498 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 21:56:11.491515 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 21:56:11.491535 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 21:56:11.491552 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 21:56:11.491572 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 21:56:11.491589 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 21:56:11.491606 systemd[1]: Reached target machines.target - Containers.
Sep 9 21:56:11.491623 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 21:56:11.491640 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 21:56:11.491657 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 21:56:11.491687 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 21:56:11.491706 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 21:56:11.491723 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 21:56:11.491747 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 21:56:11.491764 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 21:56:11.491781 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 21:56:11.491798 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 21:56:11.491821 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 21:56:11.491838 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 21:56:11.491855 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 21:56:11.491871 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 21:56:11.491895 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 21:56:11.491913 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 21:56:11.491929 kernel: fuse: init (API version 7.41)
Sep 9 21:56:11.491945 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 21:56:11.491961 kernel: loop: module loaded
Sep 9 21:56:11.491977 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 21:56:11.491995 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 21:56:11.492012 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 21:56:11.492029 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 21:56:11.492050 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 21:56:11.492066 systemd[1]: Stopped verity-setup.service.
Sep 9 21:56:11.492084 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 21:56:11.492225 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 21:56:11.492247 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 21:56:11.492264 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 21:56:11.492281 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 21:56:11.492298 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 21:56:11.492315 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 21:56:11.492332 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 21:56:11.492371 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 21:56:11.492388 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 21:56:11.492416 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 21:56:11.492453 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 21:56:11.492470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 21:56:11.492486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 21:56:11.492503 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 21:56:11.492519 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 21:56:11.492536 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 21:56:11.492557 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 21:56:11.492574 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 21:56:11.492591 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 21:56:11.492608 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 21:56:11.492625 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 21:56:11.492642 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 21:56:11.492659 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 21:56:11.492676 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 21:56:11.492729 systemd-journald[1219]: Collecting audit messages is disabled.
Sep 9 21:56:11.492764 kernel: ACPI: bus type drm_connector registered
Sep 9 21:56:11.492781 systemd-journald[1219]: Journal started
Sep 9 21:56:11.492816 systemd-journald[1219]: Runtime Journal (/run/log/journal/ee854084c086480d8ac9c916ebe6f475) is 6M, max 48.4M, 42.4M free.
Sep 9 21:56:11.501230 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 21:56:10.373027 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 21:56:10.403088 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 9 21:56:10.408059 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 21:56:11.508209 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 21:56:11.508811 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 21:56:11.518257 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 21:56:11.528198 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 21:56:11.528294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 21:56:11.546877 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 21:56:11.546996 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 21:56:11.556672 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 21:56:11.570265 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 21:56:11.596524 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 21:56:11.614532 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 21:56:11.623771 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 21:56:11.635329 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 21:56:11.633086 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 21:56:11.633440 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 21:56:11.648471 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 21:56:11.657083 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 21:56:11.668212 kernel: loop0: detected capacity change from 0 to 128016
Sep 9 21:56:11.665879 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 21:56:11.670311 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 21:56:11.765172 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 21:56:11.794800 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 21:56:11.809196 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 21:56:11.825565 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 21:56:11.838164 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 21:56:11.855256 systemd-journald[1219]: Time spent on flushing to /var/log/journal/ee854084c086480d8ac9c916ebe6f475 is 45.816ms for 1080 entries.
Sep 9 21:56:11.855256 systemd-journald[1219]: System Journal (/var/log/journal/ee854084c086480d8ac9c916ebe6f475) is 8M, max 195.6M, 187.6M free.
Sep 9 21:56:11.950389 systemd-journald[1219]: Received client request to flush runtime journal.
Sep 9 21:56:11.950456 kernel: loop1: detected capacity change from 0 to 229808
Sep 9 21:56:11.861662 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 21:56:11.878694 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 21:56:11.952792 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 21:56:12.017523 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Sep 9 21:56:12.017549 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Sep 9 21:56:12.028779 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 21:56:12.041456 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 21:56:12.049516 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 21:56:12.086424 kernel: loop2: detected capacity change from 0 to 110984
Sep 9 21:56:12.222836 kernel: loop3: detected capacity change from 0 to 128016
Sep 9 21:56:12.287011 kernel: loop4: detected capacity change from 0 to 229808
Sep 9 21:56:12.393409 kernel: loop5: detected capacity change from 0 to 110984
Sep 9 21:56:12.484567 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 9 21:56:12.485476 (sd-merge)[1275]: Merged extensions into '/usr'.
Sep 9 21:56:12.513537 systemd[1]: Reload requested from client PID 1234 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 21:56:12.513573 systemd[1]: Reloading...
Sep 9 21:56:12.854399 zram_generator::config[1298]: No configuration found.
Sep 9 21:56:13.804744 systemd[1]: Reloading finished in 1290 ms.
Sep 9 21:56:13.988158 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 21:56:14.021286 ldconfig[1230]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 21:56:14.026887 systemd[1]: Starting ensure-sysext.service...
Sep 9 21:56:14.052186 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 21:56:14.111551 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 21:56:14.137398 systemd[1]: Reload requested from client PID 1337 ('systemctl') (unit ensure-sysext.service)...
Sep 9 21:56:14.137603 systemd[1]: Reloading...
Sep 9 21:56:14.142961 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 21:56:14.143014 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 21:56:14.149739 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 21:56:14.150132 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 21:56:14.151512 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 21:56:14.151944 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Sep 9 21:56:14.158109 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Sep 9 21:56:14.177231 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 21:56:14.177249 systemd-tmpfiles[1338]: Skipping /boot
Sep 9 21:56:14.221302 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 21:56:14.221543 systemd-tmpfiles[1338]: Skipping /boot
Sep 9 21:56:14.426473 zram_generator::config[1366]: No configuration found.
Sep 9 21:56:14.905808 systemd[1]: Reloading finished in 766 ms.
Sep 9 21:56:14.933933 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 21:56:14.960735 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 21:56:14.981726 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 21:56:14.994479 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 21:56:15.013493 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 21:56:15.029589 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 21:56:15.042341 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 21:56:15.057445 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 21:56:15.092326 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 21:56:15.092622 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 21:56:15.095323 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 21:56:15.113934 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 21:56:15.130956 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 21:56:15.134456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 21:56:15.134813 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 21:56:15.142262 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 21:56:15.144660 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 21:56:15.156236 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 21:56:15.159995 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 21:56:15.163960 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 21:56:15.167932 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 21:56:15.168383 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 21:56:15.175863 systemd-udevd[1410]: Using default interface naming scheme 'v255'.
Sep 9 21:56:15.176457 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 21:56:15.178645 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 21:56:15.202567 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 21:56:15.209084 systemd[1]: Finished ensure-sysext.service.
Sep 9 21:56:15.214098 augenrules[1438]: No rules
Sep 9 21:56:15.320440 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 21:56:15.326075 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 21:56:15.326506 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 21:56:15.328785 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 21:56:15.351275 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 21:56:15.351546 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 21:56:15.357873 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 21:56:15.375435 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 21:56:15.387675 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 21:56:15.397974 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 21:56:15.404670 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 21:56:15.404746 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 21:56:15.413209 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 21:56:15.443264 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 21:56:15.445500 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 21:56:15.449149 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 21:56:15.449197 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 21:56:15.450323 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 21:56:15.450729 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 21:56:15.455870 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 21:56:15.456199 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 21:56:15.456721 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 21:56:15.457038 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 21:56:15.482944 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 21:56:15.507663 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 21:56:15.508027 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 21:56:15.512964 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 21:56:15.530419 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 9 21:56:15.540463 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 21:56:15.551373 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 21:56:15.950455 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 21:56:15.956382 kernel: mousedev: PS/2 mouse device common for all mice
Sep 9 21:56:15.962253 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 21:56:16.002612 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 9 21:56:16.013064 kernel: ACPI: button: Power Button [PWRF]
Sep 9 21:56:16.192745 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 21:56:16.234105 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 9 21:56:16.234661 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 9 21:56:16.234884 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 9 21:56:16.690802 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 21:56:16.734611 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 21:56:16.735010 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 21:56:16.743493 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 21:56:17.001195 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 21:56:17.005440 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 21:56:17.041061 systemd-resolved[1408]: Positive Trust Anchors:
Sep 9 21:56:17.041687 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 21:56:17.041805 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 21:56:17.056832 systemd-resolved[1408]: Defaulting to hostname 'linux'.
Sep 9 21:56:17.060746 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 21:56:17.065029 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 21:56:17.070864 systemd-networkd[1478]: lo: Link UP
Sep 9 21:56:17.070878 systemd-networkd[1478]: lo: Gained carrier
Sep 9 21:56:17.076331 systemd-networkd[1478]: Enumeration completed
Sep 9 21:56:17.076538 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 21:56:17.076820 systemd[1]: Reached target network.target - Network.
Sep 9 21:56:17.079862 systemd-networkd[1478]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 21:56:17.079973 systemd-networkd[1478]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 21:56:17.082586 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 21:56:17.086815 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 21:56:17.089745 systemd-networkd[1478]: eth0: Link UP
Sep 9 21:56:17.090223 systemd-networkd[1478]: eth0: Gained carrier
Sep 9 21:56:17.090260 systemd-networkd[1478]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 21:56:17.145131 systemd-networkd[1478]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 21:56:17.149158 systemd-timesyncd[1479]: Network configuration changed, trying to establish connection.
Sep 9 21:56:18.037198 systemd-resolved[1408]: Clock change detected. Flushing caches.
Sep 9 21:56:18.037424 systemd-timesyncd[1479]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 21:56:18.037572 systemd-timesyncd[1479]: Initial clock synchronization to Tue 2025-09-09 21:56:18.037114 UTC.
Sep 9 21:56:18.039261 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 21:56:18.057867 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 21:56:18.059896 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 21:56:18.063569 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 21:56:18.065776 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 21:56:18.069292 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 9 21:56:18.072510 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 21:56:18.074537 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 21:56:18.076492 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 21:56:18.078890 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 21:56:18.078948 systemd[1]: Reached target paths.target - Path Units.
Sep 9 21:56:18.081210 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 21:56:18.093401 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 21:56:18.105735 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 21:56:18.119395 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 21:56:18.127273 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 21:56:18.134065 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 21:56:18.151020 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 21:56:18.153610 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 21:56:18.162411 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 21:56:18.165313 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 21:56:18.169523 systemd[1]: Reached target basic.target - Basic System.
Sep 9 21:56:18.177556 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 21:56:18.177815 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 21:56:18.186319 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 21:56:18.203915 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 21:56:18.220982 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 21:56:18.228807 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 21:56:18.243047 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 21:56:18.250868 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 21:56:18.257179 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 9 21:56:18.320798 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 21:56:18.336950 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 21:56:18.346678 jq[1539]: false
Sep 9 21:56:18.364345 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing passwd entry cache
Sep 9 21:56:18.353313 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 21:56:18.352002 oslogin_cache_refresh[1541]: Refreshing passwd entry cache
Sep 9 21:56:18.382469 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting users, quitting
Sep 9 21:56:18.382469 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 21:56:18.382469 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing group entry cache
Sep 9 21:56:18.382674 extend-filesystems[1540]: Found /dev/vda6
Sep 9 21:56:18.377167 oslogin_cache_refresh[1541]: Failure getting users, quitting
Sep 9 21:56:18.384323 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 21:56:18.377264 oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 21:56:18.377366 oslogin_cache_refresh[1541]: Refreshing group entry cache
Sep 9 21:56:18.397617 extend-filesystems[1540]: Found /dev/vda9
Sep 9 21:56:18.403527 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting groups, quitting
Sep 9 21:56:18.403527 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 21:56:18.403401 oslogin_cache_refresh[1541]: Failure getting groups, quitting
Sep 9 21:56:18.403419 oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 21:56:18.406641 extend-filesystems[1540]: Checking size of /dev/vda9
Sep 9 21:56:18.417044 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 21:56:18.422722 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 21:56:18.425546 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 21:56:18.426939 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 21:56:18.450922 extend-filesystems[1540]: Resized partition /dev/vda9
Sep 9 21:56:18.454161 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 21:56:18.468247 extend-filesystems[1566]: resize2fs 1.47.3 (8-Jul-2025)
Sep 9 21:56:18.487020 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 21:56:18.491934 update_engine[1559]: I20250909 21:56:18.490942 1559 main.cc:92] Flatcar Update Engine starting
Sep 9 21:56:18.500285 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 21:56:18.498304 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 21:56:18.498633 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 21:56:18.499023 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 9 21:56:18.499464 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 9 21:56:18.503906 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 21:56:18.504212 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 21:56:18.519716 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 21:56:18.520929 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 21:56:18.536450 jq[1564]: true
Sep 9 21:56:18.576399 (ntainerd)[1576]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 21:56:18.610560 kernel: kvm_amd: TSC scaling supported
Sep 9 21:56:18.610674 kernel: kvm_amd: Nested Virtualization enabled
Sep 9 21:56:18.610692 kernel: kvm_amd: Nested Paging enabled
Sep 9 21:56:18.610717 kernel: kvm_amd: LBR virtualization supported
Sep 9 21:56:18.615272 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 21:56:18.664464 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 9 21:56:18.664757 kernel: kvm_amd: Virtual GIF supported
Sep 9 21:56:18.664799 jq[1579]: true
Sep 9 21:56:18.871802 extend-filesystems[1566]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 21:56:18.871802 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 21:56:18.871802 extend-filesystems[1566]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 21:56:18.955181 tar[1569]: linux-amd64/LICENSE
Sep 9 21:56:18.955181 tar[1569]: linux-amd64/helm
Sep 9 21:56:18.673275 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 21:56:18.962062 extend-filesystems[1540]: Resized filesystem in /dev/vda9
Sep 9 21:56:18.675837 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 21:56:18.973379 dbus-daemon[1537]: [system] SELinux support is enabled
Sep 9 21:56:18.975303 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 21:56:18.982953 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 21:56:18.983009 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 21:56:18.985742 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 21:56:18.985784 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 21:56:19.013674 update_engine[1559]: I20250909 21:56:19.013345 1559 update_check_scheduler.cc:74] Next update check in 3m6s
Sep 9 21:56:19.169370 systemd-logind[1557]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 9 21:56:19.169445 systemd-logind[1557]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 9 21:56:19.171153 systemd-logind[1557]: New seat seat0.
Sep 9 21:56:19.185068 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 21:56:19.190515 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 21:56:19.193778 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 21:56:19.215277 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 21:56:19.234575 sshd_keygen[1565]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 21:56:19.473151 systemd-networkd[1478]: eth0: Gained IPv6LL
Sep 9 21:56:19.481834 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 21:56:19.496314 bash[1602]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 21:56:19.495993 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 21:56:19.509355 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 9 21:56:19.527921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 21:56:19.546831 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 21:56:19.601151 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 21:56:19.606152 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 21:56:19.643005 locksmithd[1601]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 21:56:19.644482 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 21:56:19.651274 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:38802.service - OpenSSH per-connection server daemon (10.0.0.1:38802).
Sep 9 21:56:19.656541 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 21:56:19.921531 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 21:56:19.923091 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 21:56:19.952983 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 21:56:19.976651 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 9 21:56:19.977274 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 9 21:56:19.985449 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 21:56:20.113136 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 21:56:20.210939 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 21:56:20.236740 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 21:56:20.243739 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 9 21:56:20.247402 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 21:56:20.271399 kernel: EDAC MC: Ver: 3.0.0
Sep 9 21:56:20.490500 containerd[1576]: time="2025-09-09T21:56:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 21:56:20.491249 containerd[1576]: time="2025-09-09T21:56:20.491205679Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 21:56:20.497647 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 38802 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 21:56:20.501825 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:56:20.517942 containerd[1576]: time="2025-09-09T21:56:20.517873471Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.872µs"
Sep 9 21:56:20.518136 containerd[1576]: time="2025-09-09T21:56:20.518117388Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 21:56:20.518203 containerd[1576]: time="2025-09-09T21:56:20.518188461Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 21:56:20.518661 containerd[1576]: time="2025-09-09T21:56:20.518635800Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 21:56:20.518738 containerd[1576]: time="2025-09-09T21:56:20.518723234Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 21:56:20.518890 containerd[1576]: time="2025-09-09T21:56:20.518868346Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 21:56:20.519040 containerd[1576]: time="2025-09-09T21:56:20.519019971Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 21:56:20.519113 containerd[1576]: time="2025-09-09T21:56:20.519098017Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 21:56:20.520205 containerd[1576]: time="2025-09-09T21:56:20.520170378Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 21:56:20.520438 containerd[1576]: time="2025-09-09T21:56:20.520277709Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 21:56:20.520529 containerd[1576]: time="2025-09-09T21:56:20.520507190Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 21:56:20.520601 containerd[1576]: time="2025-09-09T21:56:20.520584314Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 21:56:20.520830 containerd[1576]: time="2025-09-09T21:56:20.520806241Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 21:56:20.521259 containerd[1576]: time="2025-09-09T21:56:20.521233422Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 21:56:20.521267 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 21:56:20.525070 containerd[1576]: time="2025-09-09T21:56:20.525023840Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 21:56:20.525159 containerd[1576]: time="2025-09-09T21:56:20.525137724Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 21:56:20.525385 containerd[1576]: time="2025-09-09T21:56:20.525313153Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 21:56:20.529707 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 21:56:20.545011 containerd[1576]: time="2025-09-09T21:56:20.544153976Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 21:56:20.545011 containerd[1576]: time="2025-09-09T21:56:20.544379910Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 21:56:20.551144 systemd-logind[1557]: New session 1 of user core.
Sep 9 21:56:20.583951 containerd[1576]: time="2025-09-09T21:56:20.581811648Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 21:56:20.583951 containerd[1576]: time="2025-09-09T21:56:20.581933516Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 21:56:20.583951 containerd[1576]: time="2025-09-09T21:56:20.581957190Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 21:56:20.583951 containerd[1576]: time="2025-09-09T21:56:20.581975254Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 21:56:20.583951 containerd[1576]: time="2025-09-09T21:56:20.581992837Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 21:56:20.583951 containerd[1576]: time="2025-09-09T21:56:20.582007334Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 21:56:20.583951 containerd[1576]: time="2025-09-09T21:56:20.582025408Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 21:56:20.583951 containerd[1576]: time="2025-09-09T21:56:20.582042691Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 21:56:20.583951 containerd[1576]: time="2025-09-09T21:56:20.582059302Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 21:56:20.583951 containerd[1576]: time="2025-09-09T21:56:20.582074060Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 21:56:20.583951 containerd[1576]: time="2025-09-09T21:56:20.582090551Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9
21:56:20.583951 containerd[1576]: time="2025-09-09T21:56:20.582109055Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 21:56:20.583951 containerd[1576]: time="2025-09-09T21:56:20.582921880Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 21:56:20.583951 containerd[1576]: time="2025-09-09T21:56:20.582958288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 21:56:20.584534 containerd[1576]: time="2025-09-09T21:56:20.582977674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 21:56:20.584534 containerd[1576]: time="2025-09-09T21:56:20.583001278Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 21:56:20.584534 containerd[1576]: time="2025-09-09T21:56:20.583015285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 21:56:20.584534 containerd[1576]: time="2025-09-09T21:56:20.583035272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 21:56:20.584534 containerd[1576]: time="2025-09-09T21:56:20.583052384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 21:56:20.584534 containerd[1576]: time="2025-09-09T21:56:20.583066070Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 21:56:20.584534 containerd[1576]: time="2025-09-09T21:56:20.583238063Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 21:56:20.584534 containerd[1576]: time="2025-09-09T21:56:20.584298591Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 21:56:20.584534 containerd[1576]: time="2025-09-09T21:56:20.584317747Z" level=info 
msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 21:56:20.584534 containerd[1576]: time="2025-09-09T21:56:20.584430459Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 21:56:20.584534 containerd[1576]: time="2025-09-09T21:56:20.584457870Z" level=info msg="Start snapshots syncer" Sep 9 21:56:20.584534 containerd[1576]: time="2025-09-09T21:56:20.584495741Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 21:56:20.585570 containerd[1576]: time="2025-09-09T21:56:20.584801404Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableU
nprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 21:56:20.585570 containerd[1576]: time="2025-09-09T21:56:20.584862158Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 21:56:20.585901 containerd[1576]: time="2025-09-09T21:56:20.584938532Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 21:56:20.589136 containerd[1576]: time="2025-09-09T21:56:20.588992735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 21:56:20.589136 containerd[1576]: time="2025-09-09T21:56:20.589081802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 21:56:20.589136 containerd[1576]: time="2025-09-09T21:56:20.589116126Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 21:56:20.589136 containerd[1576]: time="2025-09-09T21:56:20.589140722Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 21:56:20.589136 containerd[1576]: time="2025-09-09T21:56:20.589159047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 21:56:20.589487 containerd[1576]: time="2025-09-09T21:56:20.589184905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 21:56:20.589487 containerd[1576]: time="2025-09-09T21:56:20.589211876Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local 
type=io.containerd.transfer.v1 Sep 9 21:56:20.589487 containerd[1576]: time="2025-09-09T21:56:20.589261418Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 21:56:20.589487 containerd[1576]: time="2025-09-09T21:56:20.589280234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 21:56:20.589487 containerd[1576]: time="2025-09-09T21:56:20.589345456Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 21:56:20.589487 containerd[1576]: time="2025-09-09T21:56:20.589419375Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 21:56:20.589487 containerd[1576]: time="2025-09-09T21:56:20.589441747Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 21:56:20.589487 containerd[1576]: time="2025-09-09T21:56:20.589454240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 21:56:20.589487 containerd[1576]: time="2025-09-09T21:56:20.589468106Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 21:56:20.589487 containerd[1576]: time="2025-09-09T21:56:20.589479988Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 21:56:20.589487 containerd[1576]: time="2025-09-09T21:56:20.589497020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 21:56:20.589788 containerd[1576]: time="2025-09-09T21:56:20.589523139Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 21:56:20.589788 containerd[1576]: 
time="2025-09-09T21:56:20.589552294Z" level=info msg="runtime interface created" Sep 9 21:56:20.589788 containerd[1576]: time="2025-09-09T21:56:20.589560018Z" level=info msg="created NRI interface" Sep 9 21:56:20.589788 containerd[1576]: time="2025-09-09T21:56:20.589572812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 21:56:20.589788 containerd[1576]: time="2025-09-09T21:56:20.589596517Z" level=info msg="Connect containerd service" Sep 9 21:56:20.589788 containerd[1576]: time="2025-09-09T21:56:20.589638145Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 21:56:20.592174 containerd[1576]: time="2025-09-09T21:56:20.591889647Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 21:56:20.613822 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 21:56:20.632156 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 21:56:21.137325 (systemd)[1660]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 21:56:21.144183 systemd-logind[1557]: New session c1 of user core. 
Sep 9 21:56:21.287888 containerd[1576]: time="2025-09-09T21:56:21.268091728Z" level=info msg="Start subscribing containerd event" Sep 9 21:56:21.287888 containerd[1576]: time="2025-09-09T21:56:21.268195452Z" level=info msg="Start recovering state" Sep 9 21:56:21.287888 containerd[1576]: time="2025-09-09T21:56:21.268378375Z" level=info msg="Start event monitor" Sep 9 21:56:21.287888 containerd[1576]: time="2025-09-09T21:56:21.268401208Z" level=info msg="Start cni network conf syncer for default" Sep 9 21:56:21.287888 containerd[1576]: time="2025-09-09T21:56:21.268427077Z" level=info msg="Start streaming server" Sep 9 21:56:21.287888 containerd[1576]: time="2025-09-09T21:56:21.268447625Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 21:56:21.287888 containerd[1576]: time="2025-09-09T21:56:21.268456772Z" level=info msg="runtime interface starting up..." Sep 9 21:56:21.287888 containerd[1576]: time="2025-09-09T21:56:21.268463505Z" level=info msg="starting plugins..." Sep 9 21:56:21.287888 containerd[1576]: time="2025-09-09T21:56:21.268484304Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 21:56:21.287888 containerd[1576]: time="2025-09-09T21:56:21.268851823Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 21:56:21.287888 containerd[1576]: time="2025-09-09T21:56:21.268909441Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 21:56:21.287888 containerd[1576]: time="2025-09-09T21:56:21.269025228Z" level=info msg="containerd successfully booted in 0.779384s" Sep 9 21:56:21.290073 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 21:56:21.696080 systemd[1660]: Queued start job for default target default.target. Sep 9 21:56:21.791166 systemd[1660]: Created slice app.slice - User Application Slice. Sep 9 21:56:21.791457 systemd[1660]: Reached target paths.target - Paths. Sep 9 21:56:21.791672 systemd[1660]: Reached target timers.target - Timers. 
Sep 9 21:56:21.810519 systemd[1660]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 21:56:21.964784 systemd[1660]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 21:56:21.966048 systemd[1660]: Reached target sockets.target - Sockets. Sep 9 21:56:21.966126 systemd[1660]: Reached target basic.target - Basic System. Sep 9 21:56:21.966171 systemd[1660]: Reached target default.target - Main User Target. Sep 9 21:56:21.966213 systemd[1660]: Startup finished in 747ms. Sep 9 21:56:21.976367 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 21:56:21.997346 tar[1569]: linux-amd64/README.md Sep 9 21:56:22.009161 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 21:56:22.171400 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 21:56:22.263664 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:34954.service - OpenSSH per-connection server daemon (10.0.0.1:34954). Sep 9 21:56:22.396530 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 34954 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:56:22.400757 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:22.417938 systemd-logind[1557]: New session 2 of user core. Sep 9 21:56:22.443255 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 21:56:22.530307 sshd[1689]: Connection closed by 10.0.0.1 port 34954 Sep 9 21:56:22.528367 sshd-session[1686]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:22.553973 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:34954.service: Deactivated successfully. Sep 9 21:56:22.558267 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 21:56:22.569240 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:34966.service - OpenSSH per-connection server daemon (10.0.0.1:34966). Sep 9 21:56:22.571354 systemd-logind[1557]: Session 2 logged out. Waiting for processes to exit. 
Sep 9 21:56:22.591335 systemd-logind[1557]: Removed session 2. Sep 9 21:56:22.763288 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 34966 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:56:22.767355 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:22.785566 systemd-logind[1557]: New session 3 of user core. Sep 9 21:56:22.798247 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 21:56:22.927565 sshd[1698]: Connection closed by 10.0.0.1 port 34966 Sep 9 21:56:22.928406 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:23.018931 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:34966.service: Deactivated successfully. Sep 9 21:56:23.029164 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 21:56:23.040192 systemd-logind[1557]: Session 3 logged out. Waiting for processes to exit. Sep 9 21:56:23.061272 systemd-logind[1557]: Removed session 3. Sep 9 21:56:24.570421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:56:24.575516 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 21:56:24.582425 systemd[1]: Startup finished in 10.760s (kernel) + 16.827s (initrd) + 15.128s (userspace) = 42.715s. 
Sep 9 21:56:24.588558 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:56:27.302953 kubelet[1708]: E0909 21:56:27.302469 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:56:27.318698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:56:27.318982 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:56:27.319499 systemd[1]: kubelet.service: Consumed 3.874s CPU time, 271.6M memory peak. Sep 9 21:56:32.976476 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:47478.service - OpenSSH per-connection server daemon (10.0.0.1:47478). Sep 9 21:56:33.252679 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 47478 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:56:33.258475 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:33.298506 systemd-logind[1557]: New session 4 of user core. Sep 9 21:56:33.321289 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 21:56:33.453988 sshd[1724]: Connection closed by 10.0.0.1 port 47478 Sep 9 21:56:33.457117 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:33.489928 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:47478.service: Deactivated successfully. Sep 9 21:56:33.503331 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 21:56:33.513307 systemd-logind[1557]: Session 4 logged out. Waiting for processes to exit. Sep 9 21:56:33.537653 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:47486.service - OpenSSH per-connection server daemon (10.0.0.1:47486). 
Sep 9 21:56:33.539711 systemd-logind[1557]: Removed session 4. Sep 9 21:56:33.850689 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 47486 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:56:33.853798 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:33.882600 systemd-logind[1557]: New session 5 of user core. Sep 9 21:56:33.908379 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 21:56:34.003196 sshd[1733]: Connection closed by 10.0.0.1 port 47486 Sep 9 21:56:34.004177 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:34.043739 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:47486.service: Deactivated successfully. Sep 9 21:56:34.054567 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 21:56:34.058058 systemd-logind[1557]: Session 5 logged out. Waiting for processes to exit. Sep 9 21:56:34.081364 systemd[1]: Started sshd@5-10.0.0.35:22-10.0.0.1:47494.service - OpenSSH per-connection server daemon (10.0.0.1:47494). Sep 9 21:56:34.083563 systemd-logind[1557]: Removed session 5. Sep 9 21:56:34.233479 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 47494 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:56:34.239283 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:34.277459 systemd-logind[1557]: New session 6 of user core. Sep 9 21:56:34.298145 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 21:56:34.425381 sshd[1742]: Connection closed by 10.0.0.1 port 47494 Sep 9 21:56:34.425203 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:34.451592 systemd[1]: sshd@5-10.0.0.35:22-10.0.0.1:47494.service: Deactivated successfully. Sep 9 21:56:34.455063 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 21:56:34.464060 systemd-logind[1557]: Session 6 logged out. 
Waiting for processes to exit. Sep 9 21:56:34.479376 systemd[1]: Started sshd@6-10.0.0.35:22-10.0.0.1:47506.service - OpenSSH per-connection server daemon (10.0.0.1:47506). Sep 9 21:56:34.484540 systemd-logind[1557]: Removed session 6. Sep 9 21:56:34.560142 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 47506 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:56:34.562944 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:34.593100 systemd-logind[1557]: New session 7 of user core. Sep 9 21:56:34.615156 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 21:56:34.736144 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 21:56:34.739548 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 21:56:34.794734 sudo[1752]: pam_unix(sudo:session): session closed for user root Sep 9 21:56:34.802233 sshd[1751]: Connection closed by 10.0.0.1 port 47506 Sep 9 21:56:34.803336 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:34.839470 systemd[1]: sshd@6-10.0.0.35:22-10.0.0.1:47506.service: Deactivated successfully. Sep 9 21:56:34.849704 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 21:56:34.853433 systemd-logind[1557]: Session 7 logged out. Waiting for processes to exit. Sep 9 21:56:34.880336 systemd[1]: Started sshd@7-10.0.0.35:22-10.0.0.1:47516.service - OpenSSH per-connection server daemon (10.0.0.1:47516). Sep 9 21:56:34.882367 systemd-logind[1557]: Removed session 7. Sep 9 21:56:35.023391 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 47516 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:56:35.025664 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:35.044000 systemd-logind[1557]: New session 8 of user core. 
Sep 9 21:56:35.065197 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 21:56:35.154437 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 21:56:35.155718 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 21:56:35.178305 sudo[1763]: pam_unix(sudo:session): session closed for user root Sep 9 21:56:35.188467 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 21:56:35.190042 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 21:56:35.234316 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 21:56:35.396536 augenrules[1785]: No rules Sep 9 21:56:35.407483 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 21:56:35.414405 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 21:56:35.422303 sudo[1762]: pam_unix(sudo:session): session closed for user root Sep 9 21:56:35.429985 sshd[1761]: Connection closed by 10.0.0.1 port 47516 Sep 9 21:56:35.431945 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:35.454920 systemd[1]: sshd@7-10.0.0.35:22-10.0.0.1:47516.service: Deactivated successfully. Sep 9 21:56:35.458834 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 21:56:35.473156 systemd[1]: Started sshd@8-10.0.0.35:22-10.0.0.1:47526.service - OpenSSH per-connection server daemon (10.0.0.1:47526). Sep 9 21:56:35.475294 systemd-logind[1557]: Session 8 logged out. Waiting for processes to exit. Sep 9 21:56:35.504562 systemd-logind[1557]: Removed session 8. 
Sep 9 21:56:35.615560 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 47526 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:56:35.612800 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:35.664400 systemd-logind[1557]: New session 9 of user core. Sep 9 21:56:35.683616 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 21:56:35.774484 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 21:56:35.779749 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 21:56:37.533399 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 21:56:37.541198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:56:38.445821 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 21:56:38.473013 (dockerd)[1822]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 21:56:38.623395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:56:38.656512 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:56:39.050759 kubelet[1827]: E0909 21:56:39.050642 1827 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:56:39.075516 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:56:39.076846 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 9 21:56:39.077814 systemd[1]: kubelet.service: Consumed 816ms CPU time, 110.6M memory peak. Sep 9 21:56:40.604497 dockerd[1822]: time="2025-09-09T21:56:40.597018385Z" level=info msg="Starting up" Sep 9 21:56:40.604497 dockerd[1822]: time="2025-09-09T21:56:40.602121064Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 21:56:40.751034 dockerd[1822]: time="2025-09-09T21:56:40.750939121Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 21:56:41.174467 dockerd[1822]: time="2025-09-09T21:56:41.174353364Z" level=info msg="Loading containers: start." Sep 9 21:56:41.202674 kernel: Initializing XFRM netlink socket Sep 9 21:56:42.592081 systemd-networkd[1478]: docker0: Link UP Sep 9 21:56:42.762592 dockerd[1822]: time="2025-09-09T21:56:42.755452479Z" level=info msg="Loading containers: done." Sep 9 21:56:42.852166 dockerd[1822]: time="2025-09-09T21:56:42.851482429Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 21:56:42.852166 dockerd[1822]: time="2025-09-09T21:56:42.851631018Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 21:56:42.852166 dockerd[1822]: time="2025-09-09T21:56:42.851804493Z" level=info msg="Initializing buildkit" Sep 9 21:56:42.973985 dockerd[1822]: time="2025-09-09T21:56:42.973897511Z" level=info msg="Completed buildkit initialization" Sep 9 21:56:42.983404 dockerd[1822]: time="2025-09-09T21:56:42.983317357Z" level=info msg="Daemon has completed initialization" Sep 9 21:56:42.987706 dockerd[1822]: time="2025-09-09T21:56:42.983675759Z" level=info msg="API listen on /run/docker.sock" Sep 9 21:56:42.983723 systemd[1]: Started docker.service - Docker Application Container Engine. 
Sep 9 21:56:45.880413 containerd[1576]: time="2025-09-09T21:56:45.879951891Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 9 21:56:47.577927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1224228274.mount: Deactivated successfully. Sep 9 21:56:49.218026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 21:56:49.226736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:56:50.014205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:56:50.036955 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:56:50.638499 kubelet[2090]: E0909 21:56:50.631994 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:56:50.656369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:56:50.657150 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:56:50.657720 systemd[1]: kubelet.service: Consumed 813ms CPU time, 111M memory peak. 
Sep 9 21:56:55.546259 containerd[1576]: time="2025-09-09T21:56:55.546153843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:56:55.551893 containerd[1576]: time="2025-09-09T21:56:55.551682042Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 9 21:56:55.561054 containerd[1576]: time="2025-09-09T21:56:55.560044289Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:56:55.568691 containerd[1576]: time="2025-09-09T21:56:55.567965873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:56:55.569841 containerd[1576]: time="2025-09-09T21:56:55.569124228Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 9.689107468s" Sep 9 21:56:55.569841 containerd[1576]: time="2025-09-09T21:56:55.569168785Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 9 21:56:55.571046 containerd[1576]: time="2025-09-09T21:56:55.570509353Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 9 21:57:00.718947 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Sep 9 21:57:00.742397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:57:01.442465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:57:01.462439 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:57:01.645340 kubelet[2139]: E0909 21:57:01.644591 2139 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:57:01.658691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:57:01.658979 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:57:01.659537 systemd[1]: kubelet.service: Consumed 530ms CPU time, 108.4M memory peak. Sep 9 21:57:03.882868 update_engine[1559]: I20250909 21:57:03.882285 1559 update_attempter.cc:509] Updating boot flags... 
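Annotation: the `(kubelet)[...]` lines above note that `KUBELET_EXTRA_ARGS` and `KUBELET_KUBEADM_ARGS` are referenced by the unit but unset, so they expand to empty strings. On a kubeadm-managed node these variables are normally populated by environment files that `kubeadm init`/`join` generates, wired in through a systemd drop-in. A typical shape of that drop-in is sketched below as an assumption for orientation (this host's actual unit files are not shown in the log; the real kubeadm drop-in also sets kubeconfig and config-file arguments):

```ini
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (typical kubeadm drop-in; assumed, not from this log)
[Service]
# Written by kubeadm at init/join time; provides KUBELET_KUBEADM_ARGS.
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# Optional admin-supplied overrides; provides KUBELET_EXTRA_ARGS.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```

Since neither file exists before kubeadm has run, the empty-variable warnings here are consistent with the missing `config.yaml` errors: both clear up once init/join completes.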
Sep 9 21:57:05.204535 containerd[1576]: time="2025-09-09T21:57:05.203235756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:57:05.249244 containerd[1576]: time="2025-09-09T21:57:05.249112596Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 9 21:57:05.264269 containerd[1576]: time="2025-09-09T21:57:05.260810487Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:57:05.376478 containerd[1576]: time="2025-09-09T21:57:05.372468714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:57:05.376478 containerd[1576]: time="2025-09-09T21:57:05.373702504Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 9.803148192s" Sep 9 21:57:05.384309 containerd[1576]: time="2025-09-09T21:57:05.379497673Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 9 21:57:05.385038 containerd[1576]: time="2025-09-09T21:57:05.384689112Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 9 21:57:10.004026 containerd[1576]: time="2025-09-09T21:57:10.003874618Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:57:10.009628 containerd[1576]: time="2025-09-09T21:57:10.009518308Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 9 21:57:10.027371 containerd[1576]: time="2025-09-09T21:57:10.027218392Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:57:10.042699 kernel: hrtimer: interrupt took 2186937 ns Sep 9 21:57:10.059165 containerd[1576]: time="2025-09-09T21:57:10.058960030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:57:10.072055 containerd[1576]: time="2025-09-09T21:57:10.069983210Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 4.684963892s" Sep 9 21:57:10.073729 containerd[1576]: time="2025-09-09T21:57:10.072312196Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 9 21:57:10.075594 containerd[1576]: time="2025-09-09T21:57:10.075243163Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 9 21:57:11.776361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 9 21:57:11.794726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 9 21:57:12.584553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:57:12.615405 (kubelet)[2183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:57:12.918709 kubelet[2183]: E0909 21:57:12.917927 2183 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:57:12.938279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:57:12.938587 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:57:12.948021 systemd[1]: kubelet.service: Consumed 553ms CPU time, 110.6M memory peak. Sep 9 21:57:13.708063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount367650687.mount: Deactivated successfully. 
Sep 9 21:57:17.836187 containerd[1576]: time="2025-09-09T21:57:17.835248969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:57:17.840359 containerd[1576]: time="2025-09-09T21:57:17.840267226Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 9 21:57:17.852348 containerd[1576]: time="2025-09-09T21:57:17.852171008Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:57:17.869554 containerd[1576]: time="2025-09-09T21:57:17.865446188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:57:17.869554 containerd[1576]: time="2025-09-09T21:57:17.868734157Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 7.793436974s" Sep 9 21:57:17.872120 containerd[1576]: time="2025-09-09T21:57:17.870933585Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 9 21:57:17.877966 containerd[1576]: time="2025-09-09T21:57:17.877474208Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 9 21:57:18.821974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839668094.mount: Deactivated successfully. Sep 9 21:57:22.968074 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Sep 9 21:57:22.991057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:57:23.098233 containerd[1576]: time="2025-09-09T21:57:23.094805710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:57:23.098233 containerd[1576]: time="2025-09-09T21:57:23.097675886Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 9 21:57:23.100029 containerd[1576]: time="2025-09-09T21:57:23.099947667Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:57:23.121799 containerd[1576]: time="2025-09-09T21:57:23.120976899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:57:23.129112 containerd[1576]: time="2025-09-09T21:57:23.127561534Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 5.250016426s" Sep 9 21:57:23.129112 containerd[1576]: time="2025-09-09T21:57:23.128030231Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 9 21:57:23.135807 containerd[1576]: time="2025-09-09T21:57:23.135576814Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 21:57:23.568006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 21:57:23.597456 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:57:23.966810 kubelet[2257]: E0909 21:57:23.966418 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:57:23.977693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:57:23.977946 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:57:23.978523 systemd[1]: kubelet.service: Consumed 418ms CPU time, 111.1M memory peak. Sep 9 21:57:24.520115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3599486716.mount: Deactivated successfully. Sep 9 21:57:24.556948 containerd[1576]: time="2025-09-09T21:57:24.555572297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 21:57:24.558855 containerd[1576]: time="2025-09-09T21:57:24.558543917Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 21:57:24.567925 containerd[1576]: time="2025-09-09T21:57:24.567068243Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 21:57:24.577066 containerd[1576]: time="2025-09-09T21:57:24.575941964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 21:57:24.579915 containerd[1576]: time="2025-09-09T21:57:24.577873028Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.442248335s" Sep 9 21:57:24.579915 containerd[1576]: time="2025-09-09T21:57:24.577920446Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 21:57:24.580642 containerd[1576]: time="2025-09-09T21:57:24.580253383Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 9 21:57:25.303784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2580923452.mount: Deactivated successfully. Sep 9 21:57:34.218729 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Sep 9 21:57:34.326636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:57:35.321211 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 21:57:35.343510 (kubelet)[2332]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:57:35.699522 kubelet[2332]: E0909 21:57:35.693011 2332 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:57:35.711425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:57:35.714440 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:57:35.717020 systemd[1]: kubelet.service: Consumed 675ms CPU time, 109.5M memory peak. Sep 9 21:57:36.607303 containerd[1576]: time="2025-09-09T21:57:36.606005804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:57:36.614975 containerd[1576]: time="2025-09-09T21:57:36.613430393Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 9 21:57:36.616615 containerd[1576]: time="2025-09-09T21:57:36.616478905Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:57:36.622996 containerd[1576]: time="2025-09-09T21:57:36.622888813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:57:36.631753 containerd[1576]: time="2025-09-09T21:57:36.630367793Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag 
\"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 12.050058548s" Sep 9 21:57:36.631753 containerd[1576]: time="2025-09-09T21:57:36.630474543Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 9 21:57:43.372809 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:57:43.373045 systemd[1]: kubelet.service: Consumed 675ms CPU time, 109.5M memory peak. Sep 9 21:57:43.377484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:57:43.436459 systemd[1]: Reload requested from client PID 2372 ('systemctl') (unit session-9.scope)... Sep 9 21:57:43.437706 systemd[1]: Reloading... Sep 9 21:57:43.653603 zram_generator::config[2415]: No configuration found. Sep 9 21:57:44.391586 systemd[1]: Reloading finished in 952 ms. Sep 9 21:57:44.546059 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 21:57:44.546404 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 21:57:44.549386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:57:44.549478 systemd[1]: kubelet.service: Consumed 222ms CPU time, 98.2M memory peak. Sep 9 21:57:44.553738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:57:44.971074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:57:45.001158 (kubelet)[2462]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 21:57:45.114060 kubelet[2462]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 21:57:45.114060 kubelet[2462]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 21:57:45.115558 kubelet[2462]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 21:57:45.115558 kubelet[2462]: I0909 21:57:45.114878 2462 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 21:57:46.285730 kubelet[2462]: I0909 21:57:46.282077 2462 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 21:57:46.285730 kubelet[2462]: I0909 21:57:46.282129 2462 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 21:57:46.285730 kubelet[2462]: I0909 21:57:46.282463 2462 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 21:57:46.431334 kubelet[2462]: I0909 21:57:46.430073 2462 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 21:57:46.432104 kubelet[2462]: E0909 21:57:46.432037 2462 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 21:57:46.450242 kubelet[2462]: I0909 21:57:46.450165 2462 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 21:57:46.467087 kubelet[2462]: I0909 21:57:46.467009 2462 server.go:782] 
"--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 21:57:46.469071 kubelet[2462]: I0909 21:57:46.469011 2462 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 21:57:46.469566 kubelet[2462]: I0909 21:57:46.469064 2462 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 21:57:46.469566 kubelet[2462]: I0909 21:57:46.469356 2462 topology_manager.go:138] 
"Creating topology manager with none policy" Sep 9 21:57:46.469566 kubelet[2462]: I0909 21:57:46.469372 2462 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 21:57:46.469833 kubelet[2462]: I0909 21:57:46.469634 2462 state_mem.go:36] "Initialized new in-memory state store" Sep 9 21:57:46.474806 kubelet[2462]: I0909 21:57:46.474703 2462 kubelet.go:480] "Attempting to sync node with API server" Sep 9 21:57:46.474806 kubelet[2462]: I0909 21:57:46.474803 2462 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 21:57:46.475068 kubelet[2462]: I0909 21:57:46.474872 2462 kubelet.go:386] "Adding apiserver pod source" Sep 9 21:57:46.475068 kubelet[2462]: I0909 21:57:46.474903 2462 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 21:57:46.485564 kubelet[2462]: I0909 21:57:46.485502 2462 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 21:57:46.513583 kubelet[2462]: I0909 21:57:46.513450 2462 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 21:57:46.513958 kubelet[2462]: E0909 21:57:46.513899 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 21:57:46.514161 kubelet[2462]: E0909 21:57:46.514088 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" 
Sep 9 21:57:46.516753 kubelet[2462]: W0909 21:57:46.515441 2462 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 21:57:46.526684 kubelet[2462]: I0909 21:57:46.523863 2462 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 21:57:46.526684 kubelet[2462]: I0909 21:57:46.523989 2462 server.go:1289] "Started kubelet" Sep 9 21:57:46.529309 kubelet[2462]: I0909 21:57:46.527164 2462 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 21:57:46.536920 kubelet[2462]: I0909 21:57:46.533279 2462 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 21:57:46.536920 kubelet[2462]: I0909 21:57:46.533471 2462 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 21:57:46.543229 kubelet[2462]: I0909 21:57:46.540486 2462 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 21:57:46.544629 kubelet[2462]: I0909 21:57:46.543559 2462 server.go:317] "Adding debug handlers to kubelet server" Sep 9 21:57:46.544629 kubelet[2462]: I0909 21:57:46.544578 2462 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 21:57:46.549520 kubelet[2462]: I0909 21:57:46.547918 2462 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 21:57:46.549520 kubelet[2462]: I0909 21:57:46.548108 2462 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 21:57:46.549520 kubelet[2462]: I0909 21:57:46.548224 2462 reconciler.go:26] "Reconciler: start to sync state" Sep 9 21:57:46.549520 kubelet[2462]: E0909 21:57:46.548879 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection 
refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 21:57:46.549520 kubelet[2462]: E0909 21:57:46.549164 2462 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:57:46.549520 kubelet[2462]: E0909 21:57:46.549269 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="200ms" Sep 9 21:57:46.564031 kubelet[2462]: E0909 21:57:46.560300 2462 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863bc1254de63a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 21:57:46.523919265 +0000 UTC m=+1.504960551,LastTimestamp:2025-09-09 21:57:46.523919265 +0000 UTC m=+1.504960551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 21:57:46.564031 kubelet[2462]: E0909 21:57:46.563787 2462 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 21:57:46.564927 kubelet[2462]: I0909 21:57:46.564905 2462 factory.go:223] Registration of the containerd container factory successfully Sep 9 21:57:46.565009 kubelet[2462]: I0909 21:57:46.564997 2462 factory.go:223] Registration of the systemd container factory successfully Sep 9 21:57:46.565235 kubelet[2462]: I0909 21:57:46.565209 2462 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 21:57:46.569509 kubelet[2462]: I0909 21:57:46.569276 2462 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 21:57:46.647593 kubelet[2462]: I0909 21:57:46.644721 2462 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 21:57:46.647593 kubelet[2462]: I0909 21:57:46.644782 2462 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 21:57:46.647593 kubelet[2462]: I0909 21:57:46.644828 2462 state_mem.go:36] "Initialized new in-memory state store" Sep 9 21:57:46.652992 kubelet[2462]: E0909 21:57:46.652912 2462 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:57:46.673719 kubelet[2462]: I0909 21:57:46.670035 2462 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 21:57:46.673719 kubelet[2462]: I0909 21:57:46.672615 2462 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 21:57:46.681170 kubelet[2462]: I0909 21:57:46.675671 2462 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 21:57:46.681170 kubelet[2462]: I0909 21:57:46.680950 2462 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 21:57:46.681754 kubelet[2462]: E0909 21:57:46.681486 2462 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 21:57:46.682059 kubelet[2462]: E0909 21:57:46.681984 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 21:57:46.750281 kubelet[2462]: E0909 21:57:46.750186 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="400ms" Sep 9 21:57:46.756672 kubelet[2462]: E0909 21:57:46.753736 2462 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:57:46.781921 kubelet[2462]: E0909 21:57:46.781693 2462 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 21:57:46.861233 kubelet[2462]: E0909 21:57:46.857789 2462 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:57:46.861233 kubelet[2462]: I0909 21:57:46.857839 2462 policy_none.go:49] "None policy: Start" Sep 9 21:57:46.862804 kubelet[2462]: I0909 21:57:46.862641 2462 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 21:57:46.865715 kubelet[2462]: I0909 21:57:46.863708 2462 state_mem.go:35] "Initializing new in-memory state store" Sep 9 21:57:46.929979 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 9 21:57:46.961610 kubelet[2462]: E0909 21:57:46.960848 2462 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:57:46.966726 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 21:57:46.983406 kubelet[2462]: E0909 21:57:46.983324 2462 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 21:57:46.984425 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 21:57:47.014572 kubelet[2462]: E0909 21:57:47.013027 2462 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 21:57:47.014572 kubelet[2462]: I0909 21:57:47.013417 2462 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 21:57:47.014572 kubelet[2462]: I0909 21:57:47.013443 2462 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 21:57:47.014572 kubelet[2462]: I0909 21:57:47.014007 2462 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 21:57:47.015913 kubelet[2462]: E0909 21:57:47.015883 2462 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 21:57:47.015978 kubelet[2462]: E0909 21:57:47.015943 2462 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 21:57:47.116832 kubelet[2462]: I0909 21:57:47.115303 2462 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 21:57:47.118548 kubelet[2462]: E0909 21:57:47.117903 2462 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Sep 9 21:57:47.158574 kubelet[2462]: E0909 21:57:47.157565 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="800ms" Sep 9 21:57:47.324122 kubelet[2462]: I0909 21:57:47.323845 2462 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 21:57:47.324669 kubelet[2462]: E0909 21:57:47.324320 2462 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Sep 9 21:57:47.441039 systemd[1]: Created slice kubepods-burstable-pod988ac88b39c26310991bba481adde95a.slice - libcontainer container kubepods-burstable-pod988ac88b39c26310991bba481adde95a.slice. 
Sep 9 21:57:47.471225 kubelet[2462]: I0909 21:57:47.470595 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/988ac88b39c26310991bba481adde95a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"988ac88b39c26310991bba481adde95a\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:57:47.471225 kubelet[2462]: I0909 21:57:47.470652 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:57:47.471225 kubelet[2462]: I0909 21:57:47.470678 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:57:47.471225 kubelet[2462]: I0909 21:57:47.470700 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:57:47.471225 kubelet[2462]: I0909 21:57:47.470723 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 9 21:57:47.471592 kubelet[2462]: I0909 21:57:47.470742 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:57:47.471592 kubelet[2462]: I0909 21:57:47.470800 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/988ac88b39c26310991bba481adde95a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"988ac88b39c26310991bba481adde95a\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:57:47.471592 kubelet[2462]: I0909 21:57:47.470831 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/988ac88b39c26310991bba481adde95a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"988ac88b39c26310991bba481adde95a\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:57:47.471592 kubelet[2462]: E0909 21:57:47.471088 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 21:57:47.481103 kubelet[2462]: E0909 21:57:47.480435 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:57:47.533870 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container 
kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 9 21:57:47.547172 kubelet[2462]: E0909 21:57:47.545973 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:57:47.573734 kubelet[2462]: I0909 21:57:47.573196 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 21:57:47.629101 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. Sep 9 21:57:47.635671 kubelet[2462]: E0909 21:57:47.633276 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:57:47.726427 kubelet[2462]: I0909 21:57:47.726367 2462 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 21:57:47.726907 kubelet[2462]: E0909 21:57:47.726879 2462 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Sep 9 21:57:47.781259 kubelet[2462]: E0909 21:57:47.781166 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:57:47.787180 containerd[1576]: time="2025-09-09T21:57:47.785961342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:988ac88b39c26310991bba481adde95a,Namespace:kube-system,Attempt:0,}" Sep 9 21:57:47.821997 kubelet[2462]: E0909 21:57:47.821228 2462 
reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 21:57:47.851401 kubelet[2462]: E0909 21:57:47.851065 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:57:47.851401 kubelet[2462]: E0909 21:57:47.851345 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 21:57:47.851725 containerd[1576]: time="2025-09-09T21:57:47.851680371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 9 21:57:47.918158 containerd[1576]: time="2025-09-09T21:57:47.915535674Z" level=info msg="connecting to shim 03201e5ac571f0a21a6bbafe46096bf0a1b7e71e7006232c289834e1d74c6ef0" address="unix:///run/containerd/s/f032136489946156f8db27ba8d7090441d2d8a3451fd6671b3a72bdc6edb4de2" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:57:47.922563 kubelet[2462]: E0909 21:57:47.922446 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 21:57:47.942961 kubelet[2462]: E0909 21:57:47.942912 2462 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:57:47.947551 containerd[1576]: time="2025-09-09T21:57:47.944583857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 9 21:57:47.961415 kubelet[2462]: E0909 21:57:47.960063 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="1.6s" Sep 9 21:57:48.081999 systemd[1]: Started cri-containerd-03201e5ac571f0a21a6bbafe46096bf0a1b7e71e7006232c289834e1d74c6ef0.scope - libcontainer container 03201e5ac571f0a21a6bbafe46096bf0a1b7e71e7006232c289834e1d74c6ef0. Sep 9 21:57:48.092123 containerd[1576]: time="2025-09-09T21:57:48.092039734Z" level=info msg="connecting to shim 9391700ef561082e776f90a299719c24aad7cb102a1e98c06c4f042108c75f85" address="unix:///run/containerd/s/2428319ecc90ab394f22bdfe9811ccaa9068afbaf4bcccd4c4951e1a38a746c2" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:57:48.106845 containerd[1576]: time="2025-09-09T21:57:48.105416038Z" level=info msg="connecting to shim 7e11c7b737f2ff1b5f0738eacb0542b45a50a80eae7154904dd80b1176599b4f" address="unix:///run/containerd/s/112ea22da023301f64d197836d305fff3cc3d468d153c4029a8e9aeb16f5629a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:57:48.339922 systemd[1]: Started cri-containerd-7e11c7b737f2ff1b5f0738eacb0542b45a50a80eae7154904dd80b1176599b4f.scope - libcontainer container 7e11c7b737f2ff1b5f0738eacb0542b45a50a80eae7154904dd80b1176599b4f. 
Sep 9 21:57:48.357067 systemd[1]: Started cri-containerd-9391700ef561082e776f90a299719c24aad7cb102a1e98c06c4f042108c75f85.scope - libcontainer container 9391700ef561082e776f90a299719c24aad7cb102a1e98c06c4f042108c75f85. Sep 9 21:57:48.511086 kubelet[2462]: E0909 21:57:48.510918 2462 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 21:57:48.532408 kubelet[2462]: I0909 21:57:48.531417 2462 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 21:57:48.532408 kubelet[2462]: E0909 21:57:48.531970 2462 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Sep 9 21:57:49.239545 kubelet[2462]: E0909 21:57:49.239384 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 21:57:49.846001 kubelet[2462]: E0909 21:57:49.563978 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="3.2s" Sep 9 21:57:49.846001 kubelet[2462]: E0909 21:57:49.823945 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 
10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 21:57:50.032956 kubelet[2462]: E0909 21:57:50.032895 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 21:57:50.139402 kubelet[2462]: I0909 21:57:50.138337 2462 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 21:57:50.139402 kubelet[2462]: E0909 21:57:50.138875 2462 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Sep 9 21:57:50.310978 containerd[1576]: time="2025-09-09T21:57:50.310867050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"9391700ef561082e776f90a299719c24aad7cb102a1e98c06c4f042108c75f85\"" Sep 9 21:57:50.316017 kubelet[2462]: E0909 21:57:50.315484 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:57:50.405210 containerd[1576]: time="2025-09-09T21:57:50.404976655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:988ac88b39c26310991bba481adde95a,Namespace:kube-system,Attempt:0,} returns sandbox id \"03201e5ac571f0a21a6bbafe46096bf0a1b7e71e7006232c289834e1d74c6ef0\"" Sep 9 21:57:50.414638 kubelet[2462]: E0909 21:57:50.409117 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:57:50.414876 containerd[1576]: time="2025-09-09T21:57:50.409662480Z" level=info msg="CreateContainer within sandbox \"9391700ef561082e776f90a299719c24aad7cb102a1e98c06c4f042108c75f85\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 21:57:50.446478 containerd[1576]: time="2025-09-09T21:57:50.444547792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e11c7b737f2ff1b5f0738eacb0542b45a50a80eae7154904dd80b1176599b4f\"" Sep 9 21:57:50.446672 kubelet[2462]: E0909 21:57:50.445741 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:57:50.560489 containerd[1576]: time="2025-09-09T21:57:50.557744513Z" level=info msg="CreateContainer within sandbox \"03201e5ac571f0a21a6bbafe46096bf0a1b7e71e7006232c289834e1d74c6ef0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 21:57:50.751254 containerd[1576]: time="2025-09-09T21:57:50.745819272Z" level=info msg="CreateContainer within sandbox \"7e11c7b737f2ff1b5f0738eacb0542b45a50a80eae7154904dd80b1176599b4f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 21:57:50.769593 kubelet[2462]: E0909 21:57:50.763214 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 21:57:50.949948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount737653429.mount: Deactivated successfully. 
Sep 9 21:57:50.955528 containerd[1576]: time="2025-09-09T21:57:50.951795119Z" level=info msg="Container 64b3f0ef93dcd41a4481ecf19715389737ceedb4b2228ef51fc299c5075756e3: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:57:51.016804 containerd[1576]: time="2025-09-09T21:57:51.016593284Z" level=info msg="Container de6cda6d763494a2a0a4b1e5fdf5c255a8d4431f61f0e789c0a5c17f5075c51b: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:57:51.034860 containerd[1576]: time="2025-09-09T21:57:51.034589976Z" level=info msg="CreateContainer within sandbox \"9391700ef561082e776f90a299719c24aad7cb102a1e98c06c4f042108c75f85\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"64b3f0ef93dcd41a4481ecf19715389737ceedb4b2228ef51fc299c5075756e3\"" Sep 9 21:57:51.038428 containerd[1576]: time="2025-09-09T21:57:51.035845374Z" level=info msg="StartContainer for \"64b3f0ef93dcd41a4481ecf19715389737ceedb4b2228ef51fc299c5075756e3\"" Sep 9 21:57:51.050647 containerd[1576]: time="2025-09-09T21:57:51.050015097Z" level=info msg="connecting to shim 64b3f0ef93dcd41a4481ecf19715389737ceedb4b2228ef51fc299c5075756e3" address="unix:///run/containerd/s/2428319ecc90ab394f22bdfe9811ccaa9068afbaf4bcccd4c4951e1a38a746c2" protocol=ttrpc version=3 Sep 9 21:57:51.093536 containerd[1576]: time="2025-09-09T21:57:51.093469201Z" level=info msg="Container f1f6d3fac2695519db7fd2b55ec8c10dc4c7a3fb9249a00f3caa75bb1a606fc1: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:57:51.191191 systemd[1]: Started cri-containerd-64b3f0ef93dcd41a4481ecf19715389737ceedb4b2228ef51fc299c5075756e3.scope - libcontainer container 64b3f0ef93dcd41a4481ecf19715389737ceedb4b2228ef51fc299c5075756e3. 
Sep 9 21:57:51.679869 containerd[1576]: time="2025-09-09T21:57:51.677235804Z" level=info msg="StartContainer for \"64b3f0ef93dcd41a4481ecf19715389737ceedb4b2228ef51fc299c5075756e3\" returns successfully" Sep 9 21:57:51.702456 containerd[1576]: time="2025-09-09T21:57:51.697744586Z" level=info msg="CreateContainer within sandbox \"03201e5ac571f0a21a6bbafe46096bf0a1b7e71e7006232c289834e1d74c6ef0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"de6cda6d763494a2a0a4b1e5fdf5c255a8d4431f61f0e789c0a5c17f5075c51b\"" Sep 9 21:57:51.702456 containerd[1576]: time="2025-09-09T21:57:51.698677431Z" level=info msg="StartContainer for \"de6cda6d763494a2a0a4b1e5fdf5c255a8d4431f61f0e789c0a5c17f5075c51b\"" Sep 9 21:57:51.702456 containerd[1576]: time="2025-09-09T21:57:51.700265242Z" level=info msg="connecting to shim de6cda6d763494a2a0a4b1e5fdf5c255a8d4431f61f0e789c0a5c17f5075c51b" address="unix:///run/containerd/s/f032136489946156f8db27ba8d7090441d2d8a3451fd6671b3a72bdc6edb4de2" protocol=ttrpc version=3 Sep 9 21:57:51.767318 containerd[1576]: time="2025-09-09T21:57:51.766396396Z" level=info msg="CreateContainer within sandbox \"7e11c7b737f2ff1b5f0738eacb0542b45a50a80eae7154904dd80b1176599b4f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f1f6d3fac2695519db7fd2b55ec8c10dc4c7a3fb9249a00f3caa75bb1a606fc1\"" Sep 9 21:57:51.783161 containerd[1576]: time="2025-09-09T21:57:51.773317366Z" level=info msg="StartContainer for \"f1f6d3fac2695519db7fd2b55ec8c10dc4c7a3fb9249a00f3caa75bb1a606fc1\"" Sep 9 21:57:51.783161 containerd[1576]: time="2025-09-09T21:57:51.777117405Z" level=info msg="connecting to shim f1f6d3fac2695519db7fd2b55ec8c10dc4c7a3fb9249a00f3caa75bb1a606fc1" address="unix:///run/containerd/s/112ea22da023301f64d197836d305fff3cc3d468d153c4029a8e9aeb16f5629a" protocol=ttrpc version=3 Sep 9 21:57:51.785910 kubelet[2462]: E0909 21:57:51.785865 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:57:51.786647 kubelet[2462]: E0909 21:57:51.786622 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:57:51.959005 systemd[1]: Started cri-containerd-de6cda6d763494a2a0a4b1e5fdf5c255a8d4431f61f0e789c0a5c17f5075c51b.scope - libcontainer container de6cda6d763494a2a0a4b1e5fdf5c255a8d4431f61f0e789c0a5c17f5075c51b. Sep 9 21:57:52.029752 systemd[1]: Started cri-containerd-f1f6d3fac2695519db7fd2b55ec8c10dc4c7a3fb9249a00f3caa75bb1a606fc1.scope - libcontainer container f1f6d3fac2695519db7fd2b55ec8c10dc4c7a3fb9249a00f3caa75bb1a606fc1. Sep 9 21:57:52.418401 containerd[1576]: time="2025-09-09T21:57:52.418118408Z" level=info msg="StartContainer for \"de6cda6d763494a2a0a4b1e5fdf5c255a8d4431f61f0e789c0a5c17f5075c51b\" returns successfully" Sep 9 21:57:52.589822 containerd[1576]: time="2025-09-09T21:57:52.584046369Z" level=info msg="StartContainer for \"f1f6d3fac2695519db7fd2b55ec8c10dc4c7a3fb9249a00f3caa75bb1a606fc1\" returns successfully" Sep 9 21:57:52.765411 kubelet[2462]: E0909 21:57:52.765335 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="6.4s" Sep 9 21:57:52.839207 kubelet[2462]: E0909 21:57:52.839154 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:57:52.843385 kubelet[2462]: E0909 21:57:52.840016 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:57:52.845308 kubelet[2462]: E0909 21:57:52.844961 2462 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:57:52.845308 kubelet[2462]: E0909 21:57:52.845170 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:57:52.851268 kubelet[2462]: E0909 21:57:52.850602 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:57:52.851268 kubelet[2462]: E0909 21:57:52.850826 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:57:52.858698 kubelet[2462]: E0909 21:57:52.858625 2462 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 21:57:52.881004 kubelet[2462]: E0909 21:57:52.880934 2462 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 21:57:53.344378 kubelet[2462]: I0909 21:57:53.343576 2462 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 21:57:53.828720 kubelet[2462]: E0909 21:57:53.828668 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 
21:57:53.828970 kubelet[2462]: E0909 21:57:53.828883 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:57:53.829203 kubelet[2462]: E0909 21:57:53.829179 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:57:53.829411 kubelet[2462]: E0909 21:57:53.829383 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:57:54.832456 kubelet[2462]: E0909 21:57:54.832169 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:57:54.832456 kubelet[2462]: E0909 21:57:54.832352 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:57:54.833000 kubelet[2462]: E0909 21:57:54.832881 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:57:54.833000 kubelet[2462]: E0909 21:57:54.832978 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:57:55.837306 kubelet[2462]: E0909 21:57:55.837254 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:57:55.838320 kubelet[2462]: E0909 21:57:55.838221 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:57:57.016135 kubelet[2462]: E0909 21:57:57.016087 2462 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 21:57:57.594395 kubelet[2462]: E0909 21:57:57.592007 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:57:57.594395 kubelet[2462]: E0909 21:57:57.592174 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:57:57.721354 kubelet[2462]: E0909 21:57:57.721290 2462 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:57:57.721551 kubelet[2462]: E0909 21:57:57.721474 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:57:58.515045 kubelet[2462]: I0909 21:57:58.514728 2462 apiserver.go:52] "Watching apiserver" Sep 9 21:57:58.548706 kubelet[2462]: I0909 21:57:58.548573 2462 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 21:57:58.753411 kubelet[2462]: E0909 21:57:58.753202 2462 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1863bc1254de63a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 21:57:46.523919265 +0000 UTC m=+1.504960551,LastTimestamp:2025-09-09 21:57:46.523919265 
+0000 UTC m=+1.504960551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 21:57:58.760390 kubelet[2462]: I0909 21:57:58.760250 2462 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 21:57:58.760390 kubelet[2462]: E0909 21:57:58.760304 2462 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 21:57:58.858579 kubelet[2462]: I0909 21:57:58.854960 2462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 21:57:59.103486 kubelet[2462]: E0909 21:57:59.101009 2462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 9 21:57:59.103486 kubelet[2462]: I0909 21:57:59.101070 2462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 21:57:59.116102 kubelet[2462]: E0909 21:57:59.115542 2462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 21:57:59.116102 kubelet[2462]: I0909 21:57:59.115585 2462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 21:57:59.118263 kubelet[2462]: E0909 21:57:59.118219 2462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 21:58:05.680895 kubelet[2462]: I0909 21:58:05.680844 2462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 
9 21:58:06.108325 kubelet[2462]: E0909 21:58:06.102904 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:58:06.617226 kubelet[2462]: I0909 21:58:06.617003 2462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.616951502 podStartE2EDuration="1.616951502s" podCreationTimestamp="2025-09-09 21:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:58:06.616875976 +0000 UTC m=+21.597917272" watchObservedRunningTime="2025-09-09 21:58:06.616951502 +0000 UTC m=+21.597992788" Sep 9 21:58:06.912533 kubelet[2462]: E0909 21:58:06.911839 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:58:06.973661 systemd[1]: Reload requested from client PID 2752 ('systemctl') (unit session-9.scope)... Sep 9 21:58:06.973693 systemd[1]: Reloading... Sep 9 21:58:07.277827 zram_generator::config[2799]: No configuration found. 
Sep 9 21:58:07.598139 kubelet[2462]: I0909 21:58:07.597486 2462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 21:58:07.630160 kubelet[2462]: E0909 21:58:07.628065 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:07.762590 kubelet[2462]: I0909 21:58:07.762542 2462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 21:58:07.807448 kubelet[2462]: E0909 21:58:07.807046 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:07.919219 kubelet[2462]: I0909 21:58:07.903366 2462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.903344379 podStartE2EDuration="903.344379ms" podCreationTimestamp="2025-09-09 21:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:58:07.847492147 +0000 UTC m=+22.828533453" watchObservedRunningTime="2025-09-09 21:58:07.903344379 +0000 UTC m=+22.884385695"
Sep 9 21:58:07.919219 kubelet[2462]: I0909 21:58:07.903528 2462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.90352187 podStartE2EDuration="903.52187ms" podCreationTimestamp="2025-09-09 21:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:58:07.902919825 +0000 UTC m=+22.883961131" watchObservedRunningTime="2025-09-09 21:58:07.90352187 +0000 UTC m=+22.884563156"
Sep 9 21:58:07.919219 kubelet[2462]: E0909 21:58:07.907672 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:07.919219 kubelet[2462]: I0909 21:58:07.909227 2462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 21:58:07.919219 kubelet[2462]: E0909 21:58:07.909718 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:07.950098 kubelet[2462]: E0909 21:58:07.948285 2462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 9 21:58:07.950280 kubelet[2462]: E0909 21:58:07.950165 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:08.010285 systemd[1]: Reloading finished in 1036 ms.
Sep 9 21:58:08.100946 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 21:58:08.140507 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 21:58:08.140974 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 21:58:08.141054 systemd[1]: kubelet.service: Consumed 3.155s CPU time, 136.4M memory peak.
Sep 9 21:58:08.159552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 21:58:09.342599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 21:58:09.360698 (kubelet)[2840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 21:58:09.526338 kubelet[2840]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 21:58:09.527336 kubelet[2840]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 21:58:09.527336 kubelet[2840]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 21:58:09.527336 kubelet[2840]: I0909 21:58:09.527253 2840 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 21:58:09.551364 kubelet[2840]: I0909 21:58:09.551304 2840 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 9 21:58:09.551611 kubelet[2840]: I0909 21:58:09.551575 2840 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 21:58:09.552173 kubelet[2840]: I0909 21:58:09.552132 2840 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 9 21:58:09.555547 kubelet[2840]: I0909 21:58:09.555507 2840 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 9 21:58:09.562322 kubelet[2840]: I0909 21:58:09.561723 2840 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 21:58:09.599028 kubelet[2840]: I0909 21:58:09.594650 2840 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 21:58:09.622830 kubelet[2840]: I0909 21:58:09.619281 2840 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 21:58:09.622830 kubelet[2840]: I0909 21:58:09.619721 2840 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 21:58:09.622830 kubelet[2840]: I0909 21:58:09.619757 2840 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 21:58:09.622830 kubelet[2840]: I0909 21:58:09.620023 2840 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 21:58:09.623227 kubelet[2840]: I0909 21:58:09.620035 2840 container_manager_linux.go:303] "Creating device plugin manager"
Sep 9 21:58:09.623227 kubelet[2840]: I0909 21:58:09.620100 2840 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 21:58:09.623227 kubelet[2840]: I0909 21:58:09.620348 2840 kubelet.go:480] "Attempting to sync node with API server"
Sep 9 21:58:09.623227 kubelet[2840]: I0909 21:58:09.620372 2840 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 21:58:09.623227 kubelet[2840]: I0909 21:58:09.620405 2840 kubelet.go:386] "Adding apiserver pod source"
Sep 9 21:58:09.623227 kubelet[2840]: I0909 21:58:09.620427 2840 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 21:58:09.636148 kubelet[2840]: I0909 21:58:09.630314 2840 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 21:58:09.636148 kubelet[2840]: I0909 21:58:09.630864 2840 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 9 21:58:09.655899 kubelet[2840]: I0909 21:58:09.650214 2840 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 21:58:09.655899 kubelet[2840]: I0909 21:58:09.650310 2840 server.go:1289] "Started kubelet"
Sep 9 21:58:09.655899 kubelet[2840]: I0909 21:58:09.650420 2840 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 21:58:09.655899 kubelet[2840]: I0909 21:58:09.652362 2840 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 21:58:09.657219 kubelet[2840]: I0909 21:58:09.657096 2840 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 21:58:09.657219 kubelet[2840]: I0909 21:58:09.657210 2840 server.go:317] "Adding debug handlers to kubelet server"
Sep 9 21:58:09.690752 kubelet[2840]: I0909 21:58:09.689954 2840 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 21:58:09.690752 kubelet[2840]: I0909 21:58:09.690518 2840 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 21:58:09.696863 kubelet[2840]: I0909 21:58:09.694849 2840 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 21:58:09.696863 kubelet[2840]: E0909 21:58:09.696088 2840 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 21:58:09.696863 kubelet[2840]: I0909 21:58:09.696634 2840 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 21:58:09.698791 kubelet[2840]: I0909 21:58:09.697087 2840 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 21:58:09.707311 kubelet[2840]: I0909 21:58:09.707241 2840 factory.go:223] Registration of the containerd container factory successfully
Sep 9 21:58:09.707828 kubelet[2840]: I0909 21:58:09.707396 2840 factory.go:223] Registration of the systemd container factory successfully
Sep 9 21:58:09.707828 kubelet[2840]: I0909 21:58:09.707792 2840 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 21:58:09.708254 kubelet[2840]: E0909 21:58:09.708230 2840 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 21:58:09.720960 kubelet[2840]: I0909 21:58:09.720890 2840 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 9 21:58:09.723796 kubelet[2840]: I0909 21:58:09.723734 2840 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 9 21:58:09.723796 kubelet[2840]: I0909 21:58:09.723803 2840 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 9 21:58:09.723994 kubelet[2840]: I0909 21:58:09.723834 2840 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 21:58:09.723994 kubelet[2840]: I0909 21:58:09.723847 2840 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 9 21:58:09.723994 kubelet[2840]: E0909 21:58:09.723906 2840 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 21:58:09.777094 kubelet[2840]: I0909 21:58:09.776954 2840 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 21:58:09.777094 kubelet[2840]: I0909 21:58:09.776981 2840 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 21:58:09.777094 kubelet[2840]: I0909 21:58:09.777013 2840 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 21:58:09.777311 kubelet[2840]: I0909 21:58:09.777198 2840 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 9 21:58:09.777311 kubelet[2840]: I0909 21:58:09.777217 2840 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 9 21:58:09.777311 kubelet[2840]: I0909 21:58:09.777240 2840 policy_none.go:49] "None policy: Start"
Sep 9 21:58:09.777311 kubelet[2840]: I0909 21:58:09.777252 2840 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 21:58:09.777311 kubelet[2840]: I0909 21:58:09.777276 2840 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 21:58:09.777463 kubelet[2840]: I0909 21:58:09.777441 2840 state_mem.go:75] "Updated machine memory state"
Sep 9 21:58:09.788206 kubelet[2840]: E0909 21:58:09.788157 2840 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 9 21:58:09.788471 kubelet[2840]: I0909 21:58:09.788441 2840 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 21:58:09.788522 kubelet[2840]: I0909 21:58:09.788460 2840 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 21:58:09.789490 kubelet[2840]: I0909 21:58:09.788742 2840 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 21:58:09.790601 kubelet[2840]: E0909 21:58:09.790552 2840 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 21:58:09.832241 kubelet[2840]: I0909 21:58:09.827853 2840 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 21:58:09.832241 kubelet[2840]: I0909 21:58:09.830050 2840 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 21:58:09.832241 kubelet[2840]: I0909 21:58:09.830394 2840 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 21:58:09.915602 kubelet[2840]: I0909 21:58:09.907858 2840 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 21:58:09.916783 kubelet[2840]: I0909 21:58:09.909231 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/988ac88b39c26310991bba481adde95a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"988ac88b39c26310991bba481adde95a\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 21:58:09.916995 kubelet[2840]: I0909 21:58:09.916970 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/988ac88b39c26310991bba481adde95a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"988ac88b39c26310991bba481adde95a\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 21:58:09.918316 kubelet[2840]: I0909 21:58:09.918277 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 21:58:09.918420 kubelet[2840]: I0909 21:58:09.918319 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 21:58:09.918420 kubelet[2840]: I0909 21:58:09.918353 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 21:58:09.918420 kubelet[2840]: I0909 21:58:09.918397 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/988ac88b39c26310991bba481adde95a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"988ac88b39c26310991bba481adde95a\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 21:58:09.920014 kubelet[2840]: I0909 21:58:09.918428 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 21:58:09.920014 kubelet[2840]: I0909 21:58:09.918455 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 21:58:09.920014 kubelet[2840]: I0909 21:58:09.918474 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 21:58:10.127389 kubelet[2840]: E0909 21:58:10.126984 2840 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 9 21:58:10.127389 kubelet[2840]: E0909 21:58:10.127275 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:10.147810 kubelet[2840]: E0909 21:58:10.145662 2840 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 9 21:58:10.147810 kubelet[2840]: E0909 21:58:10.146000 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:10.147810 kubelet[2840]: E0909 21:58:10.146128 2840 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 9 21:58:10.147810 kubelet[2840]: E0909 21:58:10.146259 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:10.409711 kubelet[2840]: I0909 21:58:10.406956 2840 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 9 21:58:10.409711 kubelet[2840]: I0909 21:58:10.407100 2840 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 9 21:58:10.629529 kubelet[2840]: I0909 21:58:10.627199 2840 apiserver.go:52] "Watching apiserver"
Sep 9 21:58:10.697857 kubelet[2840]: I0909 21:58:10.697654 2840 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 21:58:10.753565 kubelet[2840]: E0909 21:58:10.752553 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:10.753565 kubelet[2840]: I0909 21:58:10.753235 2840 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 21:58:10.753957 kubelet[2840]: I0909 21:58:10.753922 2840 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 21:58:11.204196 kubelet[2840]: E0909 21:58:11.201873 2840 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 9 21:58:11.204196 kubelet[2840]: E0909 21:58:11.202142 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:11.204196 kubelet[2840]: E0909 21:58:11.202273 2840 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 9 21:58:11.204196 kubelet[2840]: E0909 21:58:11.202384 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:11.347858 kubelet[2840]: I0909 21:58:11.346836 2840 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 9 21:58:11.351123 containerd[1576]: time="2025-09-09T21:58:11.351057464Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 9 21:58:11.354033 kubelet[2840]: I0909 21:58:11.353968 2840 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 9 21:58:11.757316 kubelet[2840]: E0909 21:58:11.757054 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:11.757316 kubelet[2840]: E0909 21:58:11.757306 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:11.757981 kubelet[2840]: E0909 21:58:11.757338 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:12.433205 kubelet[2840]: I0909 21:58:12.433153 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c77dede4-fb19-4d40-b399-af09a43fc8d4-xtables-lock\") pod \"kube-proxy-tvd4t\" (UID: \"c77dede4-fb19-4d40-b399-af09a43fc8d4\") " pod="kube-system/kube-proxy-tvd4t"
Sep 9 21:58:12.440175 kubelet[2840]: I0909 21:58:12.439861 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c77dede4-fb19-4d40-b399-af09a43fc8d4-kube-proxy\") pod \"kube-proxy-tvd4t\" (UID: \"c77dede4-fb19-4d40-b399-af09a43fc8d4\") " pod="kube-system/kube-proxy-tvd4t"
Sep 9 21:58:12.440175 kubelet[2840]: I0909 21:58:12.440005 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c77dede4-fb19-4d40-b399-af09a43fc8d4-lib-modules\") pod \"kube-proxy-tvd4t\" (UID: \"c77dede4-fb19-4d40-b399-af09a43fc8d4\") " pod="kube-system/kube-proxy-tvd4t"
Sep 9 21:58:12.440175 kubelet[2840]: I0909 21:58:12.440030 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5tsl\" (UniqueName: \"kubernetes.io/projected/c77dede4-fb19-4d40-b399-af09a43fc8d4-kube-api-access-p5tsl\") pod \"kube-proxy-tvd4t\" (UID: \"c77dede4-fb19-4d40-b399-af09a43fc8d4\") " pod="kube-system/kube-proxy-tvd4t"
Sep 9 21:58:12.458040 systemd[1]: Created slice kubepods-besteffort-podc77dede4_fb19_4d40_b399_af09a43fc8d4.slice - libcontainer container kubepods-besteffort-podc77dede4_fb19_4d40_b399_af09a43fc8d4.slice.
Sep 9 21:58:12.772795 kubelet[2840]: E0909 21:58:12.772074 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:12.780573 kubelet[2840]: E0909 21:58:12.779941 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:12.781963 containerd[1576]: time="2025-09-09T21:58:12.780849367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tvd4t,Uid:c77dede4-fb19-4d40-b399-af09a43fc8d4,Namespace:kube-system,Attempt:0,}"
Sep 9 21:58:13.017240 containerd[1576]: time="2025-09-09T21:58:13.017128803Z" level=info msg="connecting to shim 54e179134996c67bb1c707537f964a8adb289d3adbb2e4f124b3b7a85c67016b" address="unix:///run/containerd/s/1e7cc34e2f44452818cdf696adf719470c046b081333ed019a627acad112fee4" namespace=k8s.io protocol=ttrpc version=3
Sep 9 21:58:13.125440 systemd[1]: Started cri-containerd-54e179134996c67bb1c707537f964a8adb289d3adbb2e4f124b3b7a85c67016b.scope - libcontainer container 54e179134996c67bb1c707537f964a8adb289d3adbb2e4f124b3b7a85c67016b.
Sep 9 21:58:13.255065 containerd[1576]: time="2025-09-09T21:58:13.254973889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tvd4t,Uid:c77dede4-fb19-4d40-b399-af09a43fc8d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"54e179134996c67bb1c707537f964a8adb289d3adbb2e4f124b3b7a85c67016b\""
Sep 9 21:58:13.264384 kubelet[2840]: E0909 21:58:13.264338 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:13.301268 containerd[1576]: time="2025-09-09T21:58:13.301205870Z" level=info msg="CreateContainer within sandbox \"54e179134996c67bb1c707537f964a8adb289d3adbb2e4f124b3b7a85c67016b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 9 21:58:13.392141 containerd[1576]: time="2025-09-09T21:58:13.391257016Z" level=info msg="Container d50c86c1808331ea7d9ac48e5a39d191a5bc2b8f3b6bab72787b05c7206e9a56: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:58:13.430708 containerd[1576]: time="2025-09-09T21:58:13.429984946Z" level=info msg="CreateContainer within sandbox \"54e179134996c67bb1c707537f964a8adb289d3adbb2e4f124b3b7a85c67016b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d50c86c1808331ea7d9ac48e5a39d191a5bc2b8f3b6bab72787b05c7206e9a56\""
Sep 9 21:58:13.431680 containerd[1576]: time="2025-09-09T21:58:13.431456140Z" level=info msg="StartContainer for \"d50c86c1808331ea7d9ac48e5a39d191a5bc2b8f3b6bab72787b05c7206e9a56\""
Sep 9 21:58:13.436540 containerd[1576]: time="2025-09-09T21:58:13.435440659Z" level=info msg="connecting to shim d50c86c1808331ea7d9ac48e5a39d191a5bc2b8f3b6bab72787b05c7206e9a56" address="unix:///run/containerd/s/1e7cc34e2f44452818cdf696adf719470c046b081333ed019a627acad112fee4" protocol=ttrpc version=3
Sep 9 21:58:13.499030 systemd[1]: Started cri-containerd-d50c86c1808331ea7d9ac48e5a39d191a5bc2b8f3b6bab72787b05c7206e9a56.scope - libcontainer container d50c86c1808331ea7d9ac48e5a39d191a5bc2b8f3b6bab72787b05c7206e9a56.
Sep 9 21:58:13.669671 containerd[1576]: time="2025-09-09T21:58:13.663101630Z" level=info msg="StartContainer for \"d50c86c1808331ea7d9ac48e5a39d191a5bc2b8f3b6bab72787b05c7206e9a56\" returns successfully"
Sep 9 21:58:13.775128 kubelet[2840]: E0909 21:58:13.775057 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:13.827092 kubelet[2840]: I0909 21:58:13.826743 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tvd4t" podStartSLOduration=1.826718217 podStartE2EDuration="1.826718217s" podCreationTimestamp="2025-09-09 21:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:58:13.826552901 +0000 UTC m=+4.454405995" watchObservedRunningTime="2025-09-09 21:58:13.826718217 +0000 UTC m=+4.454571311"
Sep 9 21:58:15.304448 systemd[1]: Created slice kubepods-besteffort-pod8c7b4b02_9544_426d_ad58_ff8c2234ea16.slice - libcontainer container kubepods-besteffort-pod8c7b4b02_9544_426d_ad58_ff8c2234ea16.slice.
Sep 9 21:58:15.430684 kubelet[2840]: I0909 21:58:15.428912 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8c7b4b02-9544-426d-ad58-ff8c2234ea16-var-lib-calico\") pod \"tigera-operator-755d956888-z4txl\" (UID: \"8c7b4b02-9544-426d-ad58-ff8c2234ea16\") " pod="tigera-operator/tigera-operator-755d956888-z4txl"
Sep 9 21:58:15.430684 kubelet[2840]: I0909 21:58:15.428979 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzc8p\" (UniqueName: \"kubernetes.io/projected/8c7b4b02-9544-426d-ad58-ff8c2234ea16-kube-api-access-gzc8p\") pod \"tigera-operator-755d956888-z4txl\" (UID: \"8c7b4b02-9544-426d-ad58-ff8c2234ea16\") " pod="tigera-operator/tigera-operator-755d956888-z4txl"
Sep 9 21:58:15.616640 containerd[1576]: time="2025-09-09T21:58:15.616227953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-z4txl,Uid:8c7b4b02-9544-426d-ad58-ff8c2234ea16,Namespace:tigera-operator,Attempt:0,}"
Sep 9 21:58:15.884578 containerd[1576]: time="2025-09-09T21:58:15.884418730Z" level=info msg="connecting to shim 5938bf9d1cb4842f7a9584b15f2e06bedae322c3acda73b5056813851c152ca3" address="unix:///run/containerd/s/2a0f12ed4bb9e45e621a114e2462134e97c1a0304afde6a8dfcd7ec4624e36e2" namespace=k8s.io protocol=ttrpc version=3
Sep 9 21:58:16.113211 systemd[1]: Started cri-containerd-5938bf9d1cb4842f7a9584b15f2e06bedae322c3acda73b5056813851c152ca3.scope - libcontainer container 5938bf9d1cb4842f7a9584b15f2e06bedae322c3acda73b5056813851c152ca3.
Sep 9 21:58:16.330479 containerd[1576]: time="2025-09-09T21:58:16.330411989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-z4txl,Uid:8c7b4b02-9544-426d-ad58-ff8c2234ea16,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5938bf9d1cb4842f7a9584b15f2e06bedae322c3acda73b5056813851c152ca3\""
Sep 9 21:58:16.340013 containerd[1576]: time="2025-09-09T21:58:16.339355698Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 9 21:58:17.752975 kubelet[2840]: E0909 21:58:17.746881 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:17.808563 kubelet[2840]: E0909 21:58:17.808522 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:18.296940 kubelet[2840]: E0909 21:58:18.296352 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:18.479897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3958782543.mount: Deactivated successfully.
Sep 9 21:58:18.832052 kubelet[2840]: E0909 21:58:18.831999 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:18.838828 kubelet[2840]: E0909 21:58:18.838588 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:22.775799 containerd[1576]: time="2025-09-09T21:58:22.775704754Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:58:22.806683 containerd[1576]: time="2025-09-09T21:58:22.806593331Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 9 21:58:22.849411 containerd[1576]: time="2025-09-09T21:58:22.847933836Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:58:22.896604 containerd[1576]: time="2025-09-09T21:58:22.894797162Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:58:22.908286 containerd[1576]: time="2025-09-09T21:58:22.908195963Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 6.568780542s"
Sep 9 21:58:22.908286 containerd[1576]: time="2025-09-09T21:58:22.908263141Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 9 21:58:22.960112 containerd[1576]: time="2025-09-09T21:58:22.960031019Z" level=info msg="CreateContainer within sandbox \"5938bf9d1cb4842f7a9584b15f2e06bedae322c3acda73b5056813851c152ca3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 9 21:58:23.186885 containerd[1576]: time="2025-09-09T21:58:23.185884835Z" level=info msg="Container c324498bd4f5335fbb285a58222db5480433e50fc9f75a35f058e2135fd1f9a9: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:58:23.402061 containerd[1576]: time="2025-09-09T21:58:23.399883457Z" level=info msg="CreateContainer within sandbox \"5938bf9d1cb4842f7a9584b15f2e06bedae322c3acda73b5056813851c152ca3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c324498bd4f5335fbb285a58222db5480433e50fc9f75a35f058e2135fd1f9a9\""
Sep 9 21:58:23.402061 containerd[1576]: time="2025-09-09T21:58:23.400802607Z" level=info msg="StartContainer for \"c324498bd4f5335fbb285a58222db5480433e50fc9f75a35f058e2135fd1f9a9\""
Sep 9 21:58:23.402061 containerd[1576]: time="2025-09-09T21:58:23.401963367Z" level=info msg="connecting to shim c324498bd4f5335fbb285a58222db5480433e50fc9f75a35f058e2135fd1f9a9" address="unix:///run/containerd/s/2a0f12ed4bb9e45e621a114e2462134e97c1a0304afde6a8dfcd7ec4624e36e2" protocol=ttrpc version=3
Sep 9 21:58:23.516144 systemd[1]: Started cri-containerd-c324498bd4f5335fbb285a58222db5480433e50fc9f75a35f058e2135fd1f9a9.scope - libcontainer container c324498bd4f5335fbb285a58222db5480433e50fc9f75a35f058e2135fd1f9a9.
Sep 9 21:58:23.677610 containerd[1576]: time="2025-09-09T21:58:23.672329748Z" level=info msg="StartContainer for \"c324498bd4f5335fbb285a58222db5480433e50fc9f75a35f058e2135fd1f9a9\" returns successfully"
Sep 9 21:58:23.965164 kubelet[2840]: I0909 21:58:23.964924 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-z4txl" podStartSLOduration=2.38950704 podStartE2EDuration="8.964878076s" podCreationTimestamp="2025-09-09 21:58:15 +0000 UTC" firstStartedPulling="2025-09-09 21:58:16.337133304 +0000 UTC m=+6.964986398" lastFinishedPulling="2025-09-09 21:58:22.91250434 +0000 UTC m=+13.540357434" observedRunningTime="2025-09-09 21:58:23.964746967 +0000 UTC m=+14.592600061" watchObservedRunningTime="2025-09-09 21:58:23.964878076 +0000 UTC m=+14.592731170"
Sep 9 21:58:32.442178 sudo[1799]: pam_unix(sudo:session): session closed for user root
Sep 9 21:58:32.452282 sshd[1798]: Connection closed by 10.0.0.1 port 47526
Sep 9 21:58:32.455324 sshd-session[1794]: pam_unix(sshd:session): session closed for user core
Sep 9 21:58:32.473252 systemd[1]: sshd@8-10.0.0.35:22-10.0.0.1:47526.service: Deactivated successfully.
Sep 9 21:58:32.485965 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 21:58:32.486696 systemd[1]: session-9.scope: Consumed 10.524s CPU time, 224.9M memory peak.
Sep 9 21:58:32.491273 systemd-logind[1557]: Session 9 logged out. Waiting for processes to exit.
Sep 9 21:58:32.508846 systemd-logind[1557]: Removed session 9.
Sep 9 21:58:45.453695 systemd[1]: Created slice kubepods-besteffort-pod9b0b9347_db73_483f_b7fc_0059c803b40e.slice - libcontainer container kubepods-besteffort-pod9b0b9347_db73_483f_b7fc_0059c803b40e.slice.
Sep 9 21:58:45.555872 kubelet[2840]: I0909 21:58:45.555147 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b0b9347-db73-483f-b7fc-0059c803b40e-tigera-ca-bundle\") pod \"calico-typha-849f69cb5f-pwlpw\" (UID: \"9b0b9347-db73-483f-b7fc-0059c803b40e\") " pod="calico-system/calico-typha-849f69cb5f-pwlpw"
Sep 9 21:58:45.555872 kubelet[2840]: I0909 21:58:45.555304 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9b0b9347-db73-483f-b7fc-0059c803b40e-typha-certs\") pod \"calico-typha-849f69cb5f-pwlpw\" (UID: \"9b0b9347-db73-483f-b7fc-0059c803b40e\") " pod="calico-system/calico-typha-849f69cb5f-pwlpw"
Sep 9 21:58:45.555872 kubelet[2840]: I0909 21:58:45.555362 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmfpg\" (UniqueName: \"kubernetes.io/projected/9b0b9347-db73-483f-b7fc-0059c803b40e-kube-api-access-wmfpg\") pod \"calico-typha-849f69cb5f-pwlpw\" (UID: \"9b0b9347-db73-483f-b7fc-0059c803b40e\") " pod="calico-system/calico-typha-849f69cb5f-pwlpw"
Sep 9 21:58:45.760717 kubelet[2840]: E0909 21:58:45.758780 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:58:45.763448 containerd[1576]: time="2025-09-09T21:58:45.762631426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-849f69cb5f-pwlpw,Uid:9b0b9347-db73-483f-b7fc-0059c803b40e,Namespace:calico-system,Attempt:0,}"
Sep 9 21:58:45.886019 systemd[1]: Created slice kubepods-besteffort-pod9012f736_b3ec_4a0b_a40c_1caaf8e5c472.slice - libcontainer container kubepods-besteffort-pod9012f736_b3ec_4a0b_a40c_1caaf8e5c472.slice.
Sep 9 21:58:45.890201 containerd[1576]: time="2025-09-09T21:58:45.890107880Z" level=info msg="connecting to shim 8a350c9582495738969422bc8ae68402700c5124eb3b7acf7c9f8b2908e5a376" address="unix:///run/containerd/s/514c51dd662aeeeef0a81b6b6950a73b20a4bed0dd92eba45f34dfb28021f982" namespace=k8s.io protocol=ttrpc version=3
Sep 9 21:58:45.970449 kubelet[2840]: I0909 21:58:45.969101 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9012f736-b3ec-4a0b-a40c-1caaf8e5c472-var-run-calico\") pod \"calico-node-wvsbv\" (UID: \"9012f736-b3ec-4a0b-a40c-1caaf8e5c472\") " pod="calico-system/calico-node-wvsbv"
Sep 9 21:58:45.970449 kubelet[2840]: I0909 21:58:45.969173 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9012f736-b3ec-4a0b-a40c-1caaf8e5c472-var-lib-calico\") pod \"calico-node-wvsbv\" (UID: \"9012f736-b3ec-4a0b-a40c-1caaf8e5c472\") " pod="calico-system/calico-node-wvsbv"
Sep 9 21:58:45.970449 kubelet[2840]: I0909 21:58:45.969228 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9012f736-b3ec-4a0b-a40c-1caaf8e5c472-cni-bin-dir\") pod \"calico-node-wvsbv\" (UID: \"9012f736-b3ec-4a0b-a40c-1caaf8e5c472\") " pod="calico-system/calico-node-wvsbv"
Sep 9 21:58:45.971162 kubelet[2840]: I0909 21:58:45.969256 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9012f736-b3ec-4a0b-a40c-1caaf8e5c472-cni-net-dir\") pod \"calico-node-wvsbv\" (UID: \"9012f736-b3ec-4a0b-a40c-1caaf8e5c472\") " pod="calico-system/calico-node-wvsbv"
Sep 9 21:58:45.971162 kubelet[2840]: I0909 21:58:45.970858 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9012f736-b3ec-4a0b-a40c-1caaf8e5c472-policysync\") pod \"calico-node-wvsbv\" (UID: \"9012f736-b3ec-4a0b-a40c-1caaf8e5c472\") " pod="calico-system/calico-node-wvsbv"
Sep 9 21:58:45.971162 kubelet[2840]: I0909 21:58:45.970884 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvxj7\" (UniqueName: \"kubernetes.io/projected/9012f736-b3ec-4a0b-a40c-1caaf8e5c472-kube-api-access-zvxj7\") pod \"calico-node-wvsbv\" (UID: \"9012f736-b3ec-4a0b-a40c-1caaf8e5c472\") " pod="calico-system/calico-node-wvsbv"
Sep 9 21:58:45.971162 kubelet[2840]: I0909 21:58:45.970909 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9012f736-b3ec-4a0b-a40c-1caaf8e5c472-cni-log-dir\") pod \"calico-node-wvsbv\" (UID: \"9012f736-b3ec-4a0b-a40c-1caaf8e5c472\") " pod="calico-system/calico-node-wvsbv"
Sep 9 21:58:45.971162 kubelet[2840]: I0909 21:58:45.970927 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9012f736-b3ec-4a0b-a40c-1caaf8e5c472-lib-modules\") pod \"calico-node-wvsbv\" (UID: \"9012f736-b3ec-4a0b-a40c-1caaf8e5c472\") " pod="calico-system/calico-node-wvsbv"
Sep 9 21:58:45.975113 kubelet[2840]: I0909 21:58:45.970957 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9012f736-b3ec-4a0b-a40c-1caaf8e5c472-tigera-ca-bundle\") pod \"calico-node-wvsbv\" (UID: \"9012f736-b3ec-4a0b-a40c-1caaf8e5c472\") " pod="calico-system/calico-node-wvsbv"
Sep 9 21:58:45.975113 kubelet[2840]: I0909 21:58:45.970993 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9012f736-b3ec-4a0b-a40c-1caaf8e5c472-flexvol-driver-host\") pod \"calico-node-wvsbv\" (UID: \"9012f736-b3ec-4a0b-a40c-1caaf8e5c472\") " pod="calico-system/calico-node-wvsbv"
Sep 9 21:58:45.975113 kubelet[2840]: I0909 21:58:45.971020 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9012f736-b3ec-4a0b-a40c-1caaf8e5c472-xtables-lock\") pod \"calico-node-wvsbv\" (UID: \"9012f736-b3ec-4a0b-a40c-1caaf8e5c472\") " pod="calico-system/calico-node-wvsbv"
Sep 9 21:58:45.975113 kubelet[2840]: I0909 21:58:45.971042 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9012f736-b3ec-4a0b-a40c-1caaf8e5c472-node-certs\") pod \"calico-node-wvsbv\" (UID: \"9012f736-b3ec-4a0b-a40c-1caaf8e5c472\") " pod="calico-system/calico-node-wvsbv"
Sep 9 21:58:46.066805 systemd[1]: Started cri-containerd-8a350c9582495738969422bc8ae68402700c5124eb3b7acf7c9f8b2908e5a376.scope - libcontainer container 8a350c9582495738969422bc8ae68402700c5124eb3b7acf7c9f8b2908e5a376.
Sep 9 21:58:46.093031 kubelet[2840]: E0909 21:58:46.092911 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.093031 kubelet[2840]: W0909 21:58:46.092943 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.093031 kubelet[2840]: E0909 21:58:46.092975 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.105157 kubelet[2840]: E0909 21:58:46.105079 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzhdk" podUID="a3597f1e-72d0-40eb-b831-78b87602d9ab"
Sep 9 21:58:46.116673 kubelet[2840]: E0909 21:58:46.116630 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.116937 kubelet[2840]: W0909 21:58:46.116912 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.117049 kubelet[2840]: E0909 21:58:46.117029 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.119936 kubelet[2840]: E0909 21:58:46.119822 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.119936 kubelet[2840]: W0909 21:58:46.119843 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.119936 kubelet[2840]: E0909 21:58:46.119865 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.120371 kubelet[2840]: E0909 21:58:46.120356 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.123978 kubelet[2840]: W0909 21:58:46.120445 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.123978 kubelet[2840]: E0909 21:58:46.123614 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.133037 kubelet[2840]: E0909 21:58:46.131519 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.133037 kubelet[2840]: W0909 21:58:46.131565 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.133037 kubelet[2840]: E0909 21:58:46.131599 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.136783 kubelet[2840]: E0909 21:58:46.136446 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.136783 kubelet[2840]: W0909 21:58:46.136493 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.136783 kubelet[2840]: E0909 21:58:46.136522 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.148132 kubelet[2840]: E0909 21:58:46.141612 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.148132 kubelet[2840]: W0909 21:58:46.141650 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.148132 kubelet[2840]: E0909 21:58:46.141683 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.150922 kubelet[2840]: E0909 21:58:46.148095 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.151098 kubelet[2840]: W0909 21:58:46.148464 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.151288 kubelet[2840]: E0909 21:58:46.151228 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.163484 kubelet[2840]: E0909 21:58:46.159748 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.163484 kubelet[2840]: W0909 21:58:46.159872 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.163484 kubelet[2840]: E0909 21:58:46.159929 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.169165 kubelet[2840]: E0909 21:58:46.164084 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.169165 kubelet[2840]: W0909 21:58:46.164116 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.169165 kubelet[2840]: E0909 21:58:46.164185 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.169417 kubelet[2840]: E0909 21:58:46.169316 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.169417 kubelet[2840]: W0909 21:58:46.169337 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.169417 kubelet[2840]: E0909 21:58:46.169364 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.172108 kubelet[2840]: E0909 21:58:46.172059 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.172108 kubelet[2840]: W0909 21:58:46.172092 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.172280 kubelet[2840]: E0909 21:58:46.172118 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.172672 kubelet[2840]: E0909 21:58:46.172646 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.174793 kubelet[2840]: W0909 21:58:46.173440 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.175444 kubelet[2840]: E0909 21:58:46.175156 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.178929 kubelet[2840]: E0909 21:58:46.178890 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.178929 kubelet[2840]: W0909 21:58:46.178922 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.179085 kubelet[2840]: E0909 21:58:46.178952 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.181518 kubelet[2840]: E0909 21:58:46.180512 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.181518 kubelet[2840]: W0909 21:58:46.180536 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.181518 kubelet[2840]: E0909 21:58:46.180552 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.181518 kubelet[2840]: E0909 21:58:46.180758 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.181518 kubelet[2840]: W0909 21:58:46.180786 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.181518 kubelet[2840]: E0909 21:58:46.180796 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.181518 kubelet[2840]: E0909 21:58:46.181088 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.181518 kubelet[2840]: W0909 21:58:46.181099 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.181879 kubelet[2840]: E0909 21:58:46.181532 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.183645 kubelet[2840]: E0909 21:58:46.182583 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.183645 kubelet[2840]: W0909 21:58:46.182603 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.183645 kubelet[2840]: E0909 21:58:46.182618 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.183860 kubelet[2840]: E0909 21:58:46.183707 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.183860 kubelet[2840]: W0909 21:58:46.183722 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.183860 kubelet[2840]: E0909 21:58:46.183734 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.184143 kubelet[2840]: E0909 21:58:46.183989 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.184143 kubelet[2840]: W0909 21:58:46.184002 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.184143 kubelet[2840]: E0909 21:58:46.184018 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.184293 kubelet[2840]: E0909 21:58:46.184224 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.184293 kubelet[2840]: W0909 21:58:46.184234 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.184293 kubelet[2840]: E0909 21:58:46.184243 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.185693 kubelet[2840]: E0909 21:58:46.184441 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.185693 kubelet[2840]: W0909 21:58:46.184463 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.185693 kubelet[2840]: E0909 21:58:46.184488 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.185693 kubelet[2840]: E0909 21:58:46.185093 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.185693 kubelet[2840]: W0909 21:58:46.185106 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.185693 kubelet[2840]: E0909 21:58:46.185119 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.185693 kubelet[2840]: I0909 21:58:46.185169 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3597f1e-72d0-40eb-b831-78b87602d9ab-kubelet-dir\") pod \"csi-node-driver-nzhdk\" (UID: \"a3597f1e-72d0-40eb-b831-78b87602d9ab\") " pod="calico-system/csi-node-driver-nzhdk"
Sep 9 21:58:46.185693 kubelet[2840]: E0909 21:58:46.185556 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.185693 kubelet[2840]: W0909 21:58:46.185569 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.186107 kubelet[2840]: E0909 21:58:46.185581 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.191161 kubelet[2840]: E0909 21:58:46.190177 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.191161 kubelet[2840]: W0909 21:58:46.190224 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.191161 kubelet[2840]: E0909 21:58:46.190257 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.191161 kubelet[2840]: E0909 21:58:46.190719 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.191161 kubelet[2840]: W0909 21:58:46.190731 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.191161 kubelet[2840]: E0909 21:58:46.190746 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.192061 kubelet[2840]: I0909 21:58:46.190815 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5swbt\" (UniqueName: \"kubernetes.io/projected/a3597f1e-72d0-40eb-b831-78b87602d9ab-kube-api-access-5swbt\") pod \"csi-node-driver-nzhdk\" (UID: \"a3597f1e-72d0-40eb-b831-78b87602d9ab\") " pod="calico-system/csi-node-driver-nzhdk"
Sep 9 21:58:46.193814 containerd[1576]: time="2025-09-09T21:58:46.193756575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wvsbv,Uid:9012f736-b3ec-4a0b-a40c-1caaf8e5c472,Namespace:calico-system,Attempt:0,}"
Sep 9 21:58:46.196550 kubelet[2840]: E0909 21:58:46.195958 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.196550 kubelet[2840]: W0909 21:58:46.195994 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.196550 kubelet[2840]: E0909 21:58:46.196044 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.196866 kubelet[2840]: E0909 21:58:46.196782 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.196866 kubelet[2840]: W0909 21:58:46.196796 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.196866 kubelet[2840]: E0909 21:58:46.196809 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.200859 kubelet[2840]: E0909 21:58:46.198019 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.200859 kubelet[2840]: W0909 21:58:46.198992 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.202484 kubelet[2840]: E0909 21:58:46.201658 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.202484 kubelet[2840]: I0909 21:58:46.201803 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a3597f1e-72d0-40eb-b831-78b87602d9ab-registration-dir\") pod \"csi-node-driver-nzhdk\" (UID: \"a3597f1e-72d0-40eb-b831-78b87602d9ab\") " pod="calico-system/csi-node-driver-nzhdk"
Sep 9 21:58:46.204401 kubelet[2840]: E0909 21:58:46.203946 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.204401 kubelet[2840]: W0909 21:58:46.203989 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.204401 kubelet[2840]: E0909 21:58:46.204028 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.204401 kubelet[2840]: I0909 21:58:46.204092 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a3597f1e-72d0-40eb-b831-78b87602d9ab-varrun\") pod \"csi-node-driver-nzhdk\" (UID: \"a3597f1e-72d0-40eb-b831-78b87602d9ab\") " pod="calico-system/csi-node-driver-nzhdk"
Sep 9 21:58:46.207911 kubelet[2840]: E0909 21:58:46.205813 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.207911 kubelet[2840]: W0909 21:58:46.205845 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.207911 kubelet[2840]: E0909 21:58:46.205866 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.207911 kubelet[2840]: I0909 21:58:46.205987 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a3597f1e-72d0-40eb-b831-78b87602d9ab-socket-dir\") pod \"csi-node-driver-nzhdk\" (UID: \"a3597f1e-72d0-40eb-b831-78b87602d9ab\") " pod="calico-system/csi-node-driver-nzhdk"
Sep 9 21:58:46.211971 kubelet[2840]: E0909 21:58:46.210567 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.211971 kubelet[2840]: W0909 21:58:46.210653 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.211971 kubelet[2840]: E0909 21:58:46.210740 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:58:46.211971 kubelet[2840]: E0909 21:58:46.211561 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:58:46.211971 kubelet[2840]: W0909 21:58:46.211577 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:58:46.211971 kubelet[2840]: E0909 21:58:46.211592 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 9 21:58:46.217381 kubelet[2840]: E0909 21:58:46.217090 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.217381 kubelet[2840]: W0909 21:58:46.217132 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.217381 kubelet[2840]: E0909 21:58:46.217166 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:46.222718 kubelet[2840]: E0909 21:58:46.220448 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.222718 kubelet[2840]: W0909 21:58:46.220482 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.222718 kubelet[2840]: E0909 21:58:46.220508 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:46.222718 kubelet[2840]: E0909 21:58:46.220976 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.222718 kubelet[2840]: W0909 21:58:46.220989 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.222718 kubelet[2840]: E0909 21:58:46.221002 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:46.222718 kubelet[2840]: E0909 21:58:46.221278 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.222718 kubelet[2840]: W0909 21:58:46.221290 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.222718 kubelet[2840]: E0909 21:58:46.221331 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:46.315236 kubelet[2840]: E0909 21:58:46.314058 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.315236 kubelet[2840]: W0909 21:58:46.314108 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.315236 kubelet[2840]: E0909 21:58:46.314151 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:46.315593 kubelet[2840]: E0909 21:58:46.315538 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.315593 kubelet[2840]: W0909 21:58:46.315559 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.315593 kubelet[2840]: E0909 21:58:46.315575 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:46.328083 kubelet[2840]: E0909 21:58:46.322521 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.328083 kubelet[2840]: W0909 21:58:46.327017 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.328083 kubelet[2840]: E0909 21:58:46.327086 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:46.328083 kubelet[2840]: E0909 21:58:46.327578 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.328083 kubelet[2840]: W0909 21:58:46.327593 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.328083 kubelet[2840]: E0909 21:58:46.327607 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:46.328083 kubelet[2840]: E0909 21:58:46.327897 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.328083 kubelet[2840]: W0909 21:58:46.327909 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.328083 kubelet[2840]: E0909 21:58:46.327920 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:46.328446 kubelet[2840]: E0909 21:58:46.328236 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.328446 kubelet[2840]: W0909 21:58:46.328246 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.328446 kubelet[2840]: E0909 21:58:46.328257 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:46.335971 kubelet[2840]: E0909 21:58:46.328500 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.335971 kubelet[2840]: W0909 21:58:46.328515 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.335971 kubelet[2840]: E0909 21:58:46.328528 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:46.335971 kubelet[2840]: E0909 21:58:46.328895 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.335971 kubelet[2840]: W0909 21:58:46.328908 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.335971 kubelet[2840]: E0909 21:58:46.328920 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:46.335971 kubelet[2840]: E0909 21:58:46.329484 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.335971 kubelet[2840]: W0909 21:58:46.329497 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.335971 kubelet[2840]: E0909 21:58:46.329510 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:46.335971 kubelet[2840]: E0909 21:58:46.331186 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.336289 kubelet[2840]: W0909 21:58:46.331210 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.336289 kubelet[2840]: E0909 21:58:46.331231 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:46.336289 kubelet[2840]: E0909 21:58:46.335027 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.336289 kubelet[2840]: W0909 21:58:46.335058 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.336289 kubelet[2840]: E0909 21:58:46.335085 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:46.340514 kubelet[2840]: E0909 21:58:46.337079 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.340514 kubelet[2840]: W0909 21:58:46.337102 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.340514 kubelet[2840]: E0909 21:58:46.337123 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:46.340514 kubelet[2840]: E0909 21:58:46.337723 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.340514 kubelet[2840]: W0909 21:58:46.337736 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.340514 kubelet[2840]: E0909 21:58:46.337749 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:46.342136 kubelet[2840]: E0909 21:58:46.342114 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.342258 kubelet[2840]: W0909 21:58:46.342215 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.342258 kubelet[2840]: E0909 21:58:46.342242 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:46.342943 kubelet[2840]: E0909 21:58:46.342884 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.342943 kubelet[2840]: W0909 21:58:46.342899 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.343148 kubelet[2840]: E0909 21:58:46.343031 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:46.346508 kubelet[2840]: E0909 21:58:46.346469 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.348961 kubelet[2840]: W0909 21:58:46.347193 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.348961 kubelet[2840]: E0909 21:58:46.347229 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:46.349534 kubelet[2840]: E0909 21:58:46.349486 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.349702 kubelet[2840]: W0909 21:58:46.349509 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.349702 kubelet[2840]: E0909 21:58:46.349639 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:46.351044 kubelet[2840]: E0909 21:58:46.351027 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.351238 kubelet[2840]: W0909 21:58:46.351113 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.351238 kubelet[2840]: E0909 21:58:46.351136 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:46.352322 kubelet[2840]: E0909 21:58:46.351712 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.352322 kubelet[2840]: W0909 21:58:46.351728 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.352322 kubelet[2840]: E0909 21:58:46.351758 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:46.353102 kubelet[2840]: E0909 21:58:46.352810 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.353102 kubelet[2840]: W0909 21:58:46.352826 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.353102 kubelet[2840]: E0909 21:58:46.352838 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:46.361800 kubelet[2840]: E0909 21:58:46.358044 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.361800 kubelet[2840]: W0909 21:58:46.358119 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.361800 kubelet[2840]: E0909 21:58:46.358176 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:46.361800 kubelet[2840]: E0909 21:58:46.360832 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.361800 kubelet[2840]: W0909 21:58:46.360862 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.361800 kubelet[2840]: E0909 21:58:46.360893 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:46.366050 kubelet[2840]: E0909 21:58:46.362740 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.366050 kubelet[2840]: W0909 21:58:46.362782 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.366050 kubelet[2840]: E0909 21:58:46.362812 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:46.366050 kubelet[2840]: E0909 21:58:46.364832 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.366050 kubelet[2840]: W0909 21:58:46.364855 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.366050 kubelet[2840]: E0909 21:58:46.364883 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:46.369733 kubelet[2840]: E0909 21:58:46.369679 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.369733 kubelet[2840]: W0909 21:58:46.369719 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.369955 kubelet[2840]: E0909 21:58:46.369754 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:46.377217 containerd[1576]: time="2025-09-09T21:58:46.376031872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-849f69cb5f-pwlpw,Uid:9b0b9347-db73-483f-b7fc-0059c803b40e,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a350c9582495738969422bc8ae68402700c5124eb3b7acf7c9f8b2908e5a376\"" Sep 9 21:58:46.385426 kubelet[2840]: E0909 21:58:46.384222 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:58:46.396651 containerd[1576]: time="2025-09-09T21:58:46.395879447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 21:58:46.461408 kubelet[2840]: E0909 21:58:46.461049 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:46.461408 kubelet[2840]: W0909 21:58:46.461100 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:46.461408 kubelet[2840]: E0909 21:58:46.461125 2840 plugins.go:703] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:46.531802 containerd[1576]: time="2025-09-09T21:58:46.516623326Z" level=info msg="connecting to shim 7befd3648a8aae4a7ad25dc823bfc8c28a400ab01fff3876636a55b2be3cba63" address="unix:///run/containerd/s/276e3ac4e13059c32a222920c318e3b159ba13e787242ad40db8b68e2e69c7e4" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:58:46.612681 systemd[1]: Started cri-containerd-7befd3648a8aae4a7ad25dc823bfc8c28a400ab01fff3876636a55b2be3cba63.scope - libcontainer container 7befd3648a8aae4a7ad25dc823bfc8c28a400ab01fff3876636a55b2be3cba63. Sep 9 21:58:46.751338 containerd[1576]: time="2025-09-09T21:58:46.751180375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wvsbv,Uid:9012f736-b3ec-4a0b-a40c-1caaf8e5c472,Namespace:calico-system,Attempt:0,} returns sandbox id \"7befd3648a8aae4a7ad25dc823bfc8c28a400ab01fff3876636a55b2be3cba63\"" Sep 9 21:58:47.725546 kubelet[2840]: E0909 21:58:47.725036 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzhdk" podUID="a3597f1e-72d0-40eb-b831-78b87602d9ab" Sep 9 21:58:48.483086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3482238659.mount: Deactivated successfully. 
Sep 9 21:58:49.730707 kubelet[2840]: E0909 21:58:49.728514 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzhdk" podUID="a3597f1e-72d0-40eb-b831-78b87602d9ab" Sep 9 21:58:51.681876 containerd[1576]: time="2025-09-09T21:58:51.680880182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:58:51.690823 containerd[1576]: time="2025-09-09T21:58:51.690738135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 9 21:58:51.696576 containerd[1576]: time="2025-09-09T21:58:51.696465886Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:58:51.707065 containerd[1576]: time="2025-09-09T21:58:51.706976764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:58:51.711511 containerd[1576]: time="2025-09-09T21:58:51.711395851Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 5.315455678s" Sep 9 21:58:51.711511 containerd[1576]: time="2025-09-09T21:58:51.711481232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 9 21:58:51.716849 containerd[1576]: time="2025-09-09T21:58:51.716792687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 21:58:51.727013 kubelet[2840]: E0909 21:58:51.726836 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzhdk" podUID="a3597f1e-72d0-40eb-b831-78b87602d9ab" Sep 9 21:58:51.745148 containerd[1576]: time="2025-09-09T21:58:51.745033323Z" level=info msg="CreateContainer within sandbox \"8a350c9582495738969422bc8ae68402700c5124eb3b7acf7c9f8b2908e5a376\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 21:58:51.788573 containerd[1576]: time="2025-09-09T21:58:51.786709347Z" level=info msg="Container 7aa7109938a9e6895ae0adfbfb8ad659d9d617060ff9ce42174c0d593402c56a: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:58:51.812115 containerd[1576]: time="2025-09-09T21:58:51.811855071Z" level=info msg="CreateContainer within sandbox \"8a350c9582495738969422bc8ae68402700c5124eb3b7acf7c9f8b2908e5a376\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7aa7109938a9e6895ae0adfbfb8ad659d9d617060ff9ce42174c0d593402c56a\"" Sep 9 21:58:51.817468 containerd[1576]: time="2025-09-09T21:58:51.813567319Z" level=info msg="StartContainer for \"7aa7109938a9e6895ae0adfbfb8ad659d9d617060ff9ce42174c0d593402c56a\"" Sep 9 21:58:51.817468 containerd[1576]: time="2025-09-09T21:58:51.815845637Z" level=info msg="connecting to shim 7aa7109938a9e6895ae0adfbfb8ad659d9d617060ff9ce42174c0d593402c56a" address="unix:///run/containerd/s/514c51dd662aeeeef0a81b6b6950a73b20a4bed0dd92eba45f34dfb28021f982" protocol=ttrpc version=3 Sep 9 21:58:51.894200 systemd[1]: Started 
cri-containerd-7aa7109938a9e6895ae0adfbfb8ad659d9d617060ff9ce42174c0d593402c56a.scope - libcontainer container 7aa7109938a9e6895ae0adfbfb8ad659d9d617060ff9ce42174c0d593402c56a. Sep 9 21:58:52.170033 containerd[1576]: time="2025-09-09T21:58:52.168795258Z" level=info msg="StartContainer for \"7aa7109938a9e6895ae0adfbfb8ad659d9d617060ff9ce42174c0d593402c56a\" returns successfully" Sep 9 21:58:53.185413 kubelet[2840]: E0909 21:58:53.177376 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:58:53.225325 kubelet[2840]: E0909 21:58:53.225062 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.225325 kubelet[2840]: W0909 21:58:53.225093 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.225325 kubelet[2840]: E0909 21:58:53.225131 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:53.228040 kubelet[2840]: E0909 21:58:53.227758 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.228040 kubelet[2840]: W0909 21:58:53.227805 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.228040 kubelet[2840]: E0909 21:58:53.227825 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:53.228248 kubelet[2840]: E0909 21:58:53.228073 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.228248 kubelet[2840]: W0909 21:58:53.228084 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.228248 kubelet[2840]: E0909 21:58:53.228095 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:53.228986 kubelet[2840]: E0909 21:58:53.228809 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.228986 kubelet[2840]: W0909 21:58:53.228828 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.228986 kubelet[2840]: E0909 21:58:53.228848 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:53.229280 kubelet[2840]: E0909 21:58:53.229240 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.229280 kubelet[2840]: W0909 21:58:53.229252 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.229280 kubelet[2840]: E0909 21:58:53.229263 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:53.231433 kubelet[2840]: E0909 21:58:53.229538 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.231433 kubelet[2840]: W0909 21:58:53.229554 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.231433 kubelet[2840]: E0909 21:58:53.229570 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:53.236698 kubelet[2840]: E0909 21:58:53.236634 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.236698 kubelet[2840]: W0909 21:58:53.236675 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.236698 kubelet[2840]: E0909 21:58:53.236706 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:53.237034 kubelet[2840]: E0909 21:58:53.237014 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.237034 kubelet[2840]: W0909 21:58:53.237030 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.237111 kubelet[2840]: E0909 21:58:53.237042 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:53.240435 kubelet[2840]: E0909 21:58:53.239921 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.240435 kubelet[2840]: W0909 21:58:53.239950 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.240435 kubelet[2840]: E0909 21:58:53.239967 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:53.240435 kubelet[2840]: E0909 21:58:53.240179 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.240435 kubelet[2840]: W0909 21:58:53.240188 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.240435 kubelet[2840]: E0909 21:58:53.240198 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:53.241509 kubelet[2840]: E0909 21:58:53.240879 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.241509 kubelet[2840]: W0909 21:58:53.240895 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.241509 kubelet[2840]: E0909 21:58:53.240906 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:53.241509 kubelet[2840]: E0909 21:58:53.241101 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.241509 kubelet[2840]: W0909 21:58:53.241110 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.241509 kubelet[2840]: E0909 21:58:53.241119 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:53.248567 kubelet[2840]: E0909 21:58:53.248512 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.248567 kubelet[2840]: W0909 21:58:53.248552 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.248822 kubelet[2840]: E0909 21:58:53.248587 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:53.249155 kubelet[2840]: E0909 21:58:53.248934 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.249155 kubelet[2840]: W0909 21:58:53.248948 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.249155 kubelet[2840]: E0909 21:58:53.248958 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:53.254130 kubelet[2840]: E0909 21:58:53.252682 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.254130 kubelet[2840]: W0909 21:58:53.253658 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.261676 kubelet[2840]: E0909 21:58:53.256334 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:53.276450 kubelet[2840]: E0909 21:58:53.273699 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.276450 kubelet[2840]: W0909 21:58:53.273738 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.276450 kubelet[2840]: E0909 21:58:53.273788 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:53.276450 kubelet[2840]: E0909 21:58:53.274104 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.276450 kubelet[2840]: W0909 21:58:53.274115 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.276450 kubelet[2840]: E0909 21:58:53.274126 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:53.276450 kubelet[2840]: E0909 21:58:53.274376 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.276450 kubelet[2840]: W0909 21:58:53.274394 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.276450 kubelet[2840]: E0909 21:58:53.274404 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:53.276450 kubelet[2840]: E0909 21:58:53.274650 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.276995 kubelet[2840]: W0909 21:58:53.274660 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.276995 kubelet[2840]: E0909 21:58:53.274671 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:53.276995 kubelet[2840]: E0909 21:58:53.274906 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.276995 kubelet[2840]: W0909 21:58:53.274916 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.276995 kubelet[2840]: E0909 21:58:53.274926 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:53.276995 kubelet[2840]: E0909 21:58:53.275159 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.276995 kubelet[2840]: W0909 21:58:53.275170 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.276995 kubelet[2840]: E0909 21:58:53.275182 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:53.292053 kubelet[2840]: E0909 21:58:53.288549 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.292053 kubelet[2840]: W0909 21:58:53.291655 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.293808 kubelet[2840]: E0909 21:58:53.293485 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:53.294253 kubelet[2840]: E0909 21:58:53.294073 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.294253 kubelet[2840]: W0909 21:58:53.294096 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.294253 kubelet[2840]: E0909 21:58:53.294111 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:53.294581 kubelet[2840]: E0909 21:58:53.294534 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.294581 kubelet[2840]: W0909 21:58:53.294550 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.294581 kubelet[2840]: E0909 21:58:53.294563 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:53.295375 kubelet[2840]: E0909 21:58:53.295335 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.295375 kubelet[2840]: W0909 21:58:53.295349 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.295375 kubelet[2840]: E0909 21:58:53.295359 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:53.303821 kubelet[2840]: E0909 21:58:53.303542 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.303821 kubelet[2840]: W0909 21:58:53.303586 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.303821 kubelet[2840]: E0909 21:58:53.303622 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:53.304466 kubelet[2840]: E0909 21:58:53.304321 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.304466 kubelet[2840]: W0909 21:58:53.304337 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.304466 kubelet[2840]: E0909 21:58:53.304350 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:53.310006 kubelet[2840]: E0909 21:58:53.309982 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.310093 kubelet[2840]: W0909 21:58:53.310077 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.310161 kubelet[2840]: E0909 21:58:53.310147 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:53.313540 kubelet[2840]: E0909 21:58:53.311082 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.313540 kubelet[2840]: W0909 21:58:53.311312 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.314532 kubelet[2840]: E0909 21:58:53.313820 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:53.318600 kubelet[2840]: E0909 21:58:53.318553 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.318600 kubelet[2840]: W0909 21:58:53.318593 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.318784 kubelet[2840]: E0909 21:58:53.318621 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:53.320651 kubelet[2840]: E0909 21:58:53.320468 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.320651 kubelet[2840]: W0909 21:58:53.320487 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.320651 kubelet[2840]: E0909 21:58:53.320505 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:53.321033 kubelet[2840]: E0909 21:58:53.321016 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.321114 kubelet[2840]: W0909 21:58:53.321099 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.321186 kubelet[2840]: E0909 21:58:53.321172 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:53.321926 kubelet[2840]: E0909 21:58:53.321862 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:53.321926 kubelet[2840]: W0909 21:58:53.321878 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:53.321926 kubelet[2840]: E0909 21:58:53.321891 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:53.460121 kubelet[2840]: I0909 21:58:53.459708 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-849f69cb5f-pwlpw" podStartSLOduration=3.139382127 podStartE2EDuration="8.459660519s" podCreationTimestamp="2025-09-09 21:58:45 +0000 UTC" firstStartedPulling="2025-09-09 21:58:46.39393691 +0000 UTC m=+37.021790004" lastFinishedPulling="2025-09-09 21:58:51.714215302 +0000 UTC m=+42.342068396" observedRunningTime="2025-09-09 21:58:53.453984339 +0000 UTC m=+44.081837443" watchObservedRunningTime="2025-09-09 21:58:53.459660519 +0000 UTC m=+44.087513613" Sep 9 21:58:53.730677 kubelet[2840]: E0909 21:58:53.725148 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzhdk" podUID="a3597f1e-72d0-40eb-b831-78b87602d9ab" Sep 9 21:58:54.183336 kubelet[2840]: E0909 21:58:54.182998 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:58:54.184778 kubelet[2840]: E0909 
21:58:54.183745 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.184778 kubelet[2840]: W0909 21:58:54.183791 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.184778 kubelet[2840]: E0909 21:58:54.183818 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:54.184778 kubelet[2840]: E0909 21:58:54.184040 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.184778 kubelet[2840]: W0909 21:58:54.184050 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.184778 kubelet[2840]: E0909 21:58:54.184061 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:54.184778 kubelet[2840]: E0909 21:58:54.184237 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.184778 kubelet[2840]: W0909 21:58:54.184247 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.184778 kubelet[2840]: E0909 21:58:54.184259 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:54.188502 kubelet[2840]: E0909 21:58:54.186302 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.188502 kubelet[2840]: W0909 21:58:54.186326 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.188502 kubelet[2840]: E0909 21:58:54.186349 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:54.188502 kubelet[2840]: E0909 21:58:54.186874 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.188502 kubelet[2840]: W0909 21:58:54.186887 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.188502 kubelet[2840]: E0909 21:58:54.186898 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:54.194794 kubelet[2840]: E0909 21:58:54.192003 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.194794 kubelet[2840]: W0909 21:58:54.192041 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.194794 kubelet[2840]: E0909 21:58:54.192075 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:54.194794 kubelet[2840]: E0909 21:58:54.192407 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.194794 kubelet[2840]: W0909 21:58:54.192417 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.194794 kubelet[2840]: E0909 21:58:54.192426 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:54.194794 kubelet[2840]: E0909 21:58:54.192610 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.194794 kubelet[2840]: W0909 21:58:54.192619 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.194794 kubelet[2840]: E0909 21:58:54.192630 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:54.194794 kubelet[2840]: E0909 21:58:54.192859 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.195263 kubelet[2840]: W0909 21:58:54.192868 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.195263 kubelet[2840]: E0909 21:58:54.192878 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:54.195263 kubelet[2840]: E0909 21:58:54.193046 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.195263 kubelet[2840]: W0909 21:58:54.193054 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.195263 kubelet[2840]: E0909 21:58:54.193062 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:54.195263 kubelet[2840]: E0909 21:58:54.193275 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.195263 kubelet[2840]: W0909 21:58:54.193284 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.195263 kubelet[2840]: E0909 21:58:54.193295 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:54.195263 kubelet[2840]: E0909 21:58:54.193475 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.195263 kubelet[2840]: W0909 21:58:54.193484 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.195601 kubelet[2840]: E0909 21:58:54.193493 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:54.195601 kubelet[2840]: E0909 21:58:54.193733 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.195601 kubelet[2840]: W0909 21:58:54.193741 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.195601 kubelet[2840]: E0909 21:58:54.193750 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:54.195601 kubelet[2840]: E0909 21:58:54.193939 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.195601 kubelet[2840]: W0909 21:58:54.193948 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.195601 kubelet[2840]: E0909 21:58:54.193957 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:54.195601 kubelet[2840]: E0909 21:58:54.194134 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.195601 kubelet[2840]: W0909 21:58:54.194142 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.195601 kubelet[2840]: E0909 21:58:54.194152 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:54.195949 kubelet[2840]: E0909 21:58:54.194442 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.195949 kubelet[2840]: W0909 21:58:54.194455 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.195949 kubelet[2840]: E0909 21:58:54.194469 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:54.195949 kubelet[2840]: E0909 21:58:54.194761 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.195949 kubelet[2840]: W0909 21:58:54.194790 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.195949 kubelet[2840]: E0909 21:58:54.194802 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:54.195949 kubelet[2840]: E0909 21:58:54.195058 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.195949 kubelet[2840]: W0909 21:58:54.195070 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.195949 kubelet[2840]: E0909 21:58:54.195081 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:54.195949 kubelet[2840]: E0909 21:58:54.195318 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.196232 kubelet[2840]: W0909 21:58:54.195330 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.196232 kubelet[2840]: E0909 21:58:54.195341 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:54.203960 kubelet[2840]: E0909 21:58:54.203244 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.203960 kubelet[2840]: W0909 21:58:54.203285 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.203960 kubelet[2840]: E0909 21:58:54.203328 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:54.203960 kubelet[2840]: E0909 21:58:54.203643 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.203960 kubelet[2840]: W0909 21:58:54.203655 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.203960 kubelet[2840]: E0909 21:58:54.203667 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:54.207015 kubelet[2840]: E0909 21:58:54.206909 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.207015 kubelet[2840]: W0909 21:58:54.206966 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.207015 kubelet[2840]: E0909 21:58:54.206986 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:54.213163 kubelet[2840]: E0909 21:58:54.211598 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.213163 kubelet[2840]: W0909 21:58:54.211636 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.213163 kubelet[2840]: E0909 21:58:54.211664 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:54.214423 kubelet[2840]: E0909 21:58:54.214391 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.214423 kubelet[2840]: W0909 21:58:54.214419 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.214600 kubelet[2840]: E0909 21:58:54.214443 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:54.216100 kubelet[2840]: E0909 21:58:54.215675 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.216100 kubelet[2840]: W0909 21:58:54.215700 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.216100 kubelet[2840]: E0909 21:58:54.215715 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:54.219229 kubelet[2840]: E0909 21:58:54.218964 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.219229 kubelet[2840]: W0909 21:58:54.218989 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.219229 kubelet[2840]: E0909 21:58:54.219009 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:54.222873 kubelet[2840]: E0909 21:58:54.222595 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.222873 kubelet[2840]: W0909 21:58:54.222625 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.222873 kubelet[2840]: E0909 21:58:54.222650 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:54.223228 kubelet[2840]: E0909 21:58:54.223207 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.223290 kubelet[2840]: W0909 21:58:54.223278 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.223347 kubelet[2840]: E0909 21:58:54.223336 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:54.223660 kubelet[2840]: E0909 21:58:54.223624 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.223660 kubelet[2840]: W0909 21:58:54.223636 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.223660 kubelet[2840]: E0909 21:58:54.223646 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:54.225487 kubelet[2840]: E0909 21:58:54.224335 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.225487 kubelet[2840]: W0909 21:58:54.224352 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.225487 kubelet[2840]: E0909 21:58:54.224366 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:54.225487 kubelet[2840]: E0909 21:58:54.224803 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.225487 kubelet[2840]: W0909 21:58:54.224818 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.225487 kubelet[2840]: E0909 21:58:54.224833 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:58:54.226667 kubelet[2840]: E0909 21:58:54.226293 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.226667 kubelet[2840]: W0909 21:58:54.226312 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.226667 kubelet[2840]: E0909 21:58:54.226333 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:58:54.227267 kubelet[2840]: E0909 21:58:54.227060 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:58:54.227267 kubelet[2840]: W0909 21:58:54.227074 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:58:54.227267 kubelet[2840]: E0909 21:58:54.227088 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
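The repeated triplet above is the kubelet's FlexVolume probe loop: for each directory under the plugin dir it execs `<driver> init` and parses stdout as a JSON driver status. Here `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds` is missing, so stdout is empty and unmarshalling fails with "unexpected end of JSON input". As a hedged sketch (illustrative only, not the missing driver's actual output), a conforming driver's `init` response looks like:

```shell
# Sketch of a minimal FlexVolume "init" reply (assumed shape per the
# FlexVolume driver-call convention; values here are illustrative).
# An empty stdout -- e.g. a missing binary -- is exactly what produces
# the "unexpected end of JSON input" errors in the log above.
flexvolume_init() {
  printf '%s\n' '{"status": "Success", "capabilities": {"attach": false}}'
}
flexvolume_init
```

Any valid JSON status on stdout would satisfy the kubelet's probe; the log's failure mode is the absence of the executable, not a malformed reply.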
Error: unexpected end of JSON input" Sep 9 21:58:55.730848 kubelet[2840]: E0909 21:58:55.728930 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzhdk" podUID="a3597f1e-72d0-40eb-b831-78b87602d9ab" Sep 9 21:58:56.482836 containerd[1576]: time="2025-09-09T21:58:56.481861901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:58:56.528369 containerd[1576]: time="2025-09-09T21:58:56.528254333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 9 21:58:56.579605 containerd[1576]: time="2025-09-09T21:58:56.575216571Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:58:56.623253 containerd[1576]: time="2025-09-09T21:58:56.623056147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:58:56.626362 containerd[1576]: time="2025-09-09T21:58:56.625398965Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 4.908118837s" Sep 9 21:58:56.626362 containerd[1576]: time="2025-09-09T21:58:56.625448369Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 21:58:56.676449 containerd[1576]: time="2025-09-09T21:58:56.674515044Z" level=info msg="CreateContainer within sandbox \"7befd3648a8aae4a7ad25dc823bfc8c28a400ab01fff3876636a55b2be3cba63\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 21:58:56.815488 containerd[1576]: time="2025-09-09T21:58:56.814161505Z" level=info msg="Container 4f56981cb94e51ceec9719defdee6ed1f5677ea7376489befb26c1adea37cc13: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:58:57.088949 containerd[1576]: time="2025-09-09T21:58:57.082837305Z" level=info msg="CreateContainer within sandbox \"7befd3648a8aae4a7ad25dc823bfc8c28a400ab01fff3876636a55b2be3cba63\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4f56981cb94e51ceec9719defdee6ed1f5677ea7376489befb26c1adea37cc13\"" Sep 9 21:58:57.091623 containerd[1576]: time="2025-09-09T21:58:57.089520864Z" level=info msg="StartContainer for \"4f56981cb94e51ceec9719defdee6ed1f5677ea7376489befb26c1adea37cc13\"" Sep 9 21:58:57.092672 containerd[1576]: time="2025-09-09T21:58:57.092609179Z" level=info msg="connecting to shim 4f56981cb94e51ceec9719defdee6ed1f5677ea7376489befb26c1adea37cc13" address="unix:///run/containerd/s/276e3ac4e13059c32a222920c318e3b159ba13e787242ad40db8b68e2e69c7e4" protocol=ttrpc version=3 Sep 9 21:58:57.163080 systemd[1]: Started cri-containerd-4f56981cb94e51ceec9719defdee6ed1f5677ea7376489befb26c1adea37cc13.scope - libcontainer container 4f56981cb94e51ceec9719defdee6ed1f5677ea7376489befb26c1adea37cc13. 
Sep 9 21:58:57.385581 containerd[1576]: time="2025-09-09T21:58:57.383901297Z" level=info msg="StartContainer for \"4f56981cb94e51ceec9719defdee6ed1f5677ea7376489befb26c1adea37cc13\" returns successfully" Sep 9 21:58:57.434891 systemd[1]: cri-containerd-4f56981cb94e51ceec9719defdee6ed1f5677ea7376489befb26c1adea37cc13.scope: Deactivated successfully. Sep 9 21:58:57.460548 containerd[1576]: time="2025-09-09T21:58:57.460314794Z" level=info msg="received exit event container_id:\"4f56981cb94e51ceec9719defdee6ed1f5677ea7376489befb26c1adea37cc13\" id:\"4f56981cb94e51ceec9719defdee6ed1f5677ea7376489befb26c1adea37cc13\" pid:3577 exited_at:{seconds:1757455137 nanos:458350903}" Sep 9 21:58:57.471456 containerd[1576]: time="2025-09-09T21:58:57.461239481Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f56981cb94e51ceec9719defdee6ed1f5677ea7376489befb26c1adea37cc13\" id:\"4f56981cb94e51ceec9719defdee6ed1f5677ea7376489befb26c1adea37cc13\" pid:3577 exited_at:{seconds:1757455137 nanos:458350903}" Sep 9 21:58:57.675098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f56981cb94e51ceec9719defdee6ed1f5677ea7376489befb26c1adea37cc13-rootfs.mount: Deactivated successfully. 
Sep 9 21:58:57.731062 kubelet[2840]: E0909 21:58:57.730505 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzhdk" podUID="a3597f1e-72d0-40eb-b831-78b87602d9ab" Sep 9 21:58:59.269605 containerd[1576]: time="2025-09-09T21:58:59.264575855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 21:58:59.732414 kubelet[2840]: E0909 21:58:59.728021 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzhdk" podUID="a3597f1e-72d0-40eb-b831-78b87602d9ab" Sep 9 21:59:01.733474 kubelet[2840]: E0909 21:59:01.732730 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzhdk" podUID="a3597f1e-72d0-40eb-b831-78b87602d9ab" Sep 9 21:59:03.732963 kubelet[2840]: E0909 21:59:03.732872 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzhdk" podUID="a3597f1e-72d0-40eb-b831-78b87602d9ab" Sep 9 21:59:05.733304 kubelet[2840]: E0909 21:59:05.731117 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-nzhdk" podUID="a3597f1e-72d0-40eb-b831-78b87602d9ab" Sep 9 21:59:07.730232 kubelet[2840]: E0909 21:59:07.727486 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzhdk" podUID="a3597f1e-72d0-40eb-b831-78b87602d9ab" Sep 9 21:59:08.547474 containerd[1576]: time="2025-09-09T21:59:08.546578051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:59:08.550121 containerd[1576]: time="2025-09-09T21:59:08.548688685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 9 21:59:08.556546 containerd[1576]: time="2025-09-09T21:59:08.555650755Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:59:08.565425 containerd[1576]: time="2025-09-09T21:59:08.565154391Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 9.300519114s" Sep 9 21:59:08.565425 containerd[1576]: time="2025-09-09T21:59:08.565208803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 9 21:59:08.569449 containerd[1576]: time="2025-09-09T21:59:08.568343500Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:59:08.601619 containerd[1576]: time="2025-09-09T21:59:08.600006158Z" level=info msg="CreateContainer within sandbox \"7befd3648a8aae4a7ad25dc823bfc8c28a400ab01fff3876636a55b2be3cba63\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 21:59:08.663824 containerd[1576]: time="2025-09-09T21:59:08.662444912Z" level=info msg="Container 75f482807b7088c5d7c50d49b6c3ff9b416f701bb28b0d95386972ef531b2832: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:59:08.719356 containerd[1576]: time="2025-09-09T21:59:08.714036492Z" level=info msg="CreateContainer within sandbox \"7befd3648a8aae4a7ad25dc823bfc8c28a400ab01fff3876636a55b2be3cba63\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"75f482807b7088c5d7c50d49b6c3ff9b416f701bb28b0d95386972ef531b2832\"" Sep 9 21:59:08.719356 containerd[1576]: time="2025-09-09T21:59:08.718477403Z" level=info msg="StartContainer for \"75f482807b7088c5d7c50d49b6c3ff9b416f701bb28b0d95386972ef531b2832\"" Sep 9 21:59:08.723579 containerd[1576]: time="2025-09-09T21:59:08.723462061Z" level=info msg="connecting to shim 75f482807b7088c5d7c50d49b6c3ff9b416f701bb28b0d95386972ef531b2832" address="unix:///run/containerd/s/276e3ac4e13059c32a222920c318e3b159ba13e787242ad40db8b68e2e69c7e4" protocol=ttrpc version=3 Sep 9 21:59:08.848254 systemd[1]: Started cri-containerd-75f482807b7088c5d7c50d49b6c3ff9b416f701bb28b0d95386972ef531b2832.scope - libcontainer container 75f482807b7088c5d7c50d49b6c3ff9b416f701bb28b0d95386972ef531b2832. 
Sep 9 21:59:09.009632 containerd[1576]: time="2025-09-09T21:59:09.008694789Z" level=info msg="StartContainer for \"75f482807b7088c5d7c50d49b6c3ff9b416f701bb28b0d95386972ef531b2832\" returns successfully" Sep 9 21:59:09.734184 kubelet[2840]: E0909 21:59:09.728865 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzhdk" podUID="a3597f1e-72d0-40eb-b831-78b87602d9ab" Sep 9 21:59:11.729650 kubelet[2840]: E0909 21:59:11.725730 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzhdk" podUID="a3597f1e-72d0-40eb-b831-78b87602d9ab" Sep 9 21:59:12.796293 containerd[1576]: time="2025-09-09T21:59:12.796157887Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 21:59:12.805584 systemd[1]: cri-containerd-75f482807b7088c5d7c50d49b6c3ff9b416f701bb28b0d95386972ef531b2832.scope: Deactivated successfully. Sep 9 21:59:12.806649 systemd[1]: cri-containerd-75f482807b7088c5d7c50d49b6c3ff9b416f701bb28b0d95386972ef531b2832.scope: Consumed 1.200s CPU time, 184.1M memory peak, 5M read from disk, 171.3M written to disk. 
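The long run of "cni plugin not initialized" / "no network config found in /etc/cni/net.d" records ends once Calico's install-cni container writes a network config list into that directory. As a hedged sketch of what such a conflist typically looks like (field values are illustrative and not taken from this log; the filename `10-calico.conflist` is an assumption):

```shell
# Sketch: a minimal Calico CNI conflist of the kind install-cni drops
# into /etc/cni/net.d (written to the current directory here; all
# values are illustrative, not recovered from this log).
cat > ./10-calico.conflist <<'EOF'
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "ipam": { "type": "calico-ipam" },
      "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
    }
  ]
}
EOF
```

Until a file like this exists, containerd's CNI reload (visible in the fs-change event for `/etc/cni/net.d/calico-kubeconfig`) keeps failing and the kubelet keeps reporting `NetworkReady=false`.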
Sep 9 21:59:12.836029 containerd[1576]: time="2025-09-09T21:59:12.831443390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75f482807b7088c5d7c50d49b6c3ff9b416f701bb28b0d95386972ef531b2832\" id:\"75f482807b7088c5d7c50d49b6c3ff9b416f701bb28b0d95386972ef531b2832\" pid:3639 exited_at:{seconds:1757455152 nanos:826135045}" Sep 9 21:59:12.836029 containerd[1576]: time="2025-09-09T21:59:12.831677612Z" level=info msg="received exit event container_id:\"75f482807b7088c5d7c50d49b6c3ff9b416f701bb28b0d95386972ef531b2832\" id:\"75f482807b7088c5d7c50d49b6c3ff9b416f701bb28b0d95386972ef531b2832\" pid:3639 exited_at:{seconds:1757455152 nanos:826135045}" Sep 9 21:59:12.852570 kubelet[2840]: I0909 21:59:12.844692 2840 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 21:59:12.978598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75f482807b7088c5d7c50d49b6c3ff9b416f701bb28b0d95386972ef531b2832-rootfs.mount: Deactivated successfully. Sep 9 21:59:13.778831 systemd[1]: Created slice kubepods-besteffort-poda3597f1e_72d0_40eb_b831_78b87602d9ab.slice - libcontainer container kubepods-besteffort-poda3597f1e_72d0_40eb_b831_78b87602d9ab.slice. 
Sep 9 21:59:13.807566 containerd[1576]: time="2025-09-09T21:59:13.804671880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nzhdk,Uid:a3597f1e-72d0-40eb-b831-78b87602d9ab,Namespace:calico-system,Attempt:0,}" Sep 9 21:59:14.583307 kubelet[2840]: I0909 21:59:14.583108 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8a89593-6c20-439c-928d-0e86b0f1dbe1-config-volume\") pod \"coredns-674b8bbfcf-bjgk4\" (UID: \"e8a89593-6c20-439c-928d-0e86b0f1dbe1\") " pod="kube-system/coredns-674b8bbfcf-bjgk4" Sep 9 21:59:14.583307 kubelet[2840]: I0909 21:59:14.583180 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcjdg\" (UniqueName: \"kubernetes.io/projected/e8a89593-6c20-439c-928d-0e86b0f1dbe1-kube-api-access-lcjdg\") pod \"coredns-674b8bbfcf-bjgk4\" (UID: \"e8a89593-6c20-439c-928d-0e86b0f1dbe1\") " pod="kube-system/coredns-674b8bbfcf-bjgk4" Sep 9 21:59:14.687274 kubelet[2840]: E0909 21:59:14.684206 2840 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered Sep 9 21:59:14.687274 kubelet[2840]: E0909 21:59:14.684312 2840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8a89593-6c20-439c-928d-0e86b0f1dbe1-config-volume podName:e8a89593-6c20-439c-928d-0e86b0f1dbe1 nodeName:}" failed. No retries permitted until 2025-09-09 21:59:15.184277804 +0000 UTC m=+65.812130898 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e8a89593-6c20-439c-928d-0e86b0f1dbe1-config-volume") pod "coredns-674b8bbfcf-bjgk4" (UID: "e8a89593-6c20-439c-928d-0e86b0f1dbe1") : object "kube-system"/"coredns" not registered Sep 9 21:59:14.792192 kubelet[2840]: I0909 21:59:14.788050 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxt6w\" (UniqueName: \"kubernetes.io/projected/8a9661e9-5b18-43f4-b60f-9cb3c54276be-kube-api-access-mxt6w\") pod \"calico-kube-controllers-7c66bfd5cf-zxqkv\" (UID: \"8a9661e9-5b18-43f4-b60f-9cb3c54276be\") " pod="calico-system/calico-kube-controllers-7c66bfd5cf-zxqkv" Sep 9 21:59:14.792192 kubelet[2840]: I0909 21:59:14.788136 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a9661e9-5b18-43f4-b60f-9cb3c54276be-tigera-ca-bundle\") pod \"calico-kube-controllers-7c66bfd5cf-zxqkv\" (UID: \"8a9661e9-5b18-43f4-b60f-9cb3c54276be\") " pod="calico-system/calico-kube-controllers-7c66bfd5cf-zxqkv" Sep 9 21:59:14.824410 systemd[1]: Created slice kubepods-besteffort-pod8a9661e9_5b18_43f4_b60f_9cb3c54276be.slice - libcontainer container kubepods-besteffort-pod8a9661e9_5b18_43f4_b60f_9cb3c54276be.slice. 
Sep 9 21:59:14.896860 kubelet[2840]: I0909 21:59:14.893948 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d774d884-64e5-4f7e-84a6-5dcae502322a-whisker-ca-bundle\") pod \"whisker-6dd55fb676-xxbsj\" (UID: \"d774d884-64e5-4f7e-84a6-5dcae502322a\") " pod="calico-system/whisker-6dd55fb676-xxbsj" Sep 9 21:59:14.896860 kubelet[2840]: I0909 21:59:14.895936 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt7fw\" (UniqueName: \"kubernetes.io/projected/d774d884-64e5-4f7e-84a6-5dcae502322a-kube-api-access-nt7fw\") pod \"whisker-6dd55fb676-xxbsj\" (UID: \"d774d884-64e5-4f7e-84a6-5dcae502322a\") " pod="calico-system/whisker-6dd55fb676-xxbsj" Sep 9 21:59:14.896860 kubelet[2840]: I0909 21:59:14.895978 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d774d884-64e5-4f7e-84a6-5dcae502322a-whisker-backend-key-pair\") pod \"whisker-6dd55fb676-xxbsj\" (UID: \"d774d884-64e5-4f7e-84a6-5dcae502322a\") " pod="calico-system/whisker-6dd55fb676-xxbsj" Sep 9 21:59:14.896860 kubelet[2840]: I0909 21:59:14.896063 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx6nf\" (UniqueName: \"kubernetes.io/projected/a09151ad-a9b3-49e6-974f-bb1c62675974-kube-api-access-cx6nf\") pod \"coredns-674b8bbfcf-vg22c\" (UID: \"a09151ad-a9b3-49e6-974f-bb1c62675974\") " pod="kube-system/coredns-674b8bbfcf-vg22c" Sep 9 21:59:14.896860 kubelet[2840]: I0909 21:59:14.896102 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a09151ad-a9b3-49e6-974f-bb1c62675974-config-volume\") pod \"coredns-674b8bbfcf-vg22c\" (UID: \"a09151ad-a9b3-49e6-974f-bb1c62675974\") 
" pod="kube-system/coredns-674b8bbfcf-vg22c" Sep 9 21:59:14.916410 systemd[1]: Created slice kubepods-burstable-poda09151ad_a9b3_49e6_974f_bb1c62675974.slice - libcontainer container kubepods-burstable-poda09151ad_a9b3_49e6_974f_bb1c62675974.slice. Sep 9 21:59:14.947787 systemd[1]: Created slice kubepods-besteffort-podd774d884_64e5_4f7e_84a6_5dcae502322a.slice - libcontainer container kubepods-besteffort-podd774d884_64e5_4f7e_84a6_5dcae502322a.slice. Sep 9 21:59:14.978502 systemd[1]: Created slice kubepods-burstable-pode8a89593_6c20_439c_928d_0e86b0f1dbe1.slice - libcontainer container kubepods-burstable-pode8a89593_6c20_439c_928d_0e86b0f1dbe1.slice. Sep 9 21:59:15.001995 kubelet[2840]: I0909 21:59:14.999234 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9ckk\" (UniqueName: \"kubernetes.io/projected/c9730904-d479-4f03-8945-1eae9818173f-kube-api-access-f9ckk\") pod \"goldmane-54d579b49d-cbmvm\" (UID: \"c9730904-d479-4f03-8945-1eae9818173f\") " pod="calico-system/goldmane-54d579b49d-cbmvm" Sep 9 21:59:15.001995 kubelet[2840]: I0909 21:59:14.999455 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c9730904-d479-4f03-8945-1eae9818173f-goldmane-key-pair\") pod \"goldmane-54d579b49d-cbmvm\" (UID: \"c9730904-d479-4f03-8945-1eae9818173f\") " pod="calico-system/goldmane-54d579b49d-cbmvm" Sep 9 21:59:15.001995 kubelet[2840]: I0909 21:59:14.999511 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9730904-d479-4f03-8945-1eae9818173f-config\") pod \"goldmane-54d579b49d-cbmvm\" (UID: \"c9730904-d479-4f03-8945-1eae9818173f\") " pod="calico-system/goldmane-54d579b49d-cbmvm" Sep 9 21:59:15.001995 kubelet[2840]: I0909 21:59:14.999536 2840 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/57318330-371c-4f00-bb0e-4ef73267101a-calico-apiserver-certs\") pod \"calico-apiserver-85fb8bd9d8-2rqmt\" (UID: \"57318330-371c-4f00-bb0e-4ef73267101a\") " pod="calico-apiserver/calico-apiserver-85fb8bd9d8-2rqmt" Sep 9 21:59:15.001995 kubelet[2840]: I0909 21:59:14.999556 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6mr4\" (UniqueName: \"kubernetes.io/projected/57318330-371c-4f00-bb0e-4ef73267101a-kube-api-access-t6mr4\") pod \"calico-apiserver-85fb8bd9d8-2rqmt\" (UID: \"57318330-371c-4f00-bb0e-4ef73267101a\") " pod="calico-apiserver/calico-apiserver-85fb8bd9d8-2rqmt" Sep 9 21:59:15.005600 kubelet[2840]: I0909 21:59:14.999625 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1edad75d-6a90-4301-b98f-fe1e20ca7dba-calico-apiserver-certs\") pod \"calico-apiserver-85fb8bd9d8-gxhlf\" (UID: \"1edad75d-6a90-4301-b98f-fe1e20ca7dba\") " pod="calico-apiserver/calico-apiserver-85fb8bd9d8-gxhlf" Sep 9 21:59:15.005600 kubelet[2840]: I0909 21:59:14.999651 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9730904-d479-4f03-8945-1eae9818173f-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-cbmvm\" (UID: \"c9730904-d479-4f03-8945-1eae9818173f\") " pod="calico-system/goldmane-54d579b49d-cbmvm" Sep 9 21:59:15.005600 kubelet[2840]: I0909 21:59:14.999674 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbxpd\" (UniqueName: \"kubernetes.io/projected/1edad75d-6a90-4301-b98f-fe1e20ca7dba-kube-api-access-vbxpd\") pod \"calico-apiserver-85fb8bd9d8-gxhlf\" (UID: \"1edad75d-6a90-4301-b98f-fe1e20ca7dba\") 
" pod="calico-apiserver/calico-apiserver-85fb8bd9d8-gxhlf" Sep 9 21:59:15.006557 systemd[1]: Created slice kubepods-besteffort-podc9730904_d479_4f03_8945_1eae9818173f.slice - libcontainer container kubepods-besteffort-podc9730904_d479_4f03_8945_1eae9818173f.slice. Sep 9 21:59:15.131881 systemd[1]: Created slice kubepods-besteffort-pod1edad75d_6a90_4301_b98f_fe1e20ca7dba.slice - libcontainer container kubepods-besteffort-pod1edad75d_6a90_4301_b98f_fe1e20ca7dba.slice. Sep 9 21:59:15.161269 containerd[1576]: time="2025-09-09T21:59:15.161125737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c66bfd5cf-zxqkv,Uid:8a9661e9-5b18-43f4-b60f-9cb3c54276be,Namespace:calico-system,Attempt:0,}" Sep 9 21:59:15.226412 systemd[1]: Created slice kubepods-besteffort-pod57318330_371c_4f00_bb0e_4ef73267101a.slice - libcontainer container kubepods-besteffort-pod57318330_371c_4f00_bb0e_4ef73267101a.slice. Sep 9 21:59:15.239884 kubelet[2840]: E0909 21:59:15.238337 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:15.240411 containerd[1576]: time="2025-09-09T21:59:15.240371968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vg22c,Uid:a09151ad-a9b3-49e6-974f-bb1c62675974,Namespace:kube-system,Attempt:0,}" Sep 9 21:59:15.256544 containerd[1576]: time="2025-09-09T21:59:15.256488148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd55fb676-xxbsj,Uid:d774d884-64e5-4f7e-84a6-5dcae502322a,Namespace:calico-system,Attempt:0,}" Sep 9 21:59:15.284443 kubelet[2840]: E0909 21:59:15.283905 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:15.288930 containerd[1576]: time="2025-09-09T21:59:15.288196429Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-bjgk4,Uid:e8a89593-6c20-439c-928d-0e86b0f1dbe1,Namespace:kube-system,Attempt:0,}" Sep 9 21:59:15.344189 containerd[1576]: time="2025-09-09T21:59:15.343143014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-cbmvm,Uid:c9730904-d479-4f03-8945-1eae9818173f,Namespace:calico-system,Attempt:0,}" Sep 9 21:59:15.352056 containerd[1576]: time="2025-09-09T21:59:15.351840209Z" level=error msg="Failed to destroy network for sandbox \"1b4720004a540e1a9b2d15ebadc239856d0ff38c3493ff6b3f6bb242712d4cbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:15.396321 containerd[1576]: time="2025-09-09T21:59:15.395945544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 21:59:15.503712 containerd[1576]: time="2025-09-09T21:59:15.503656798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb8bd9d8-gxhlf,Uid:1edad75d-6a90-4301-b98f-fe1e20ca7dba,Namespace:calico-apiserver,Attempt:0,}" Sep 9 21:59:15.548131 containerd[1576]: time="2025-09-09T21:59:15.545942050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb8bd9d8-2rqmt,Uid:57318330-371c-4f00-bb0e-4ef73267101a,Namespace:calico-apiserver,Attempt:0,}" Sep 9 21:59:15.671745 containerd[1576]: time="2025-09-09T21:59:15.670073025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nzhdk,Uid:a3597f1e-72d0-40eb-b831-78b87602d9ab,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4720004a540e1a9b2d15ebadc239856d0ff38c3493ff6b3f6bb242712d4cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 
21:59:15.671999 kubelet[2840]: E0909 21:59:15.670424 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4720004a540e1a9b2d15ebadc239856d0ff38c3493ff6b3f6bb242712d4cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:15.671999 kubelet[2840]: E0909 21:59:15.670528 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4720004a540e1a9b2d15ebadc239856d0ff38c3493ff6b3f6bb242712d4cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nzhdk" Sep 9 21:59:15.671999 kubelet[2840]: E0909 21:59:15.670560 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4720004a540e1a9b2d15ebadc239856d0ff38c3493ff6b3f6bb242712d4cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nzhdk" Sep 9 21:59:15.672500 kubelet[2840]: E0909 21:59:15.670625 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nzhdk_calico-system(a3597f1e-72d0-40eb-b831-78b87602d9ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nzhdk_calico-system(a3597f1e-72d0-40eb-b831-78b87602d9ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b4720004a540e1a9b2d15ebadc239856d0ff38c3493ff6b3f6bb242712d4cbc\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nzhdk" podUID="a3597f1e-72d0-40eb-b831-78b87602d9ab" Sep 9 21:59:15.740086 kubelet[2840]: E0909 21:59:15.728751 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:15.780899 systemd[1]: run-netns-cni\x2d596a1a38\x2d62d2\x2d4fe9\x2d1614\x2d5bd704055c39.mount: Deactivated successfully. Sep 9 21:59:15.957493 containerd[1576]: time="2025-09-09T21:59:15.952929023Z" level=error msg="Failed to destroy network for sandbox \"e7b370c1d42bdacbd228adabe38e07b7196225b0cac7f9d27e01b678dbb85a62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:15.971256 systemd[1]: run-netns-cni\x2d54fafaec\x2dea0f\x2d4277\x2def2e\x2de7b0a5ac2a3c.mount: Deactivated successfully. Sep 9 21:59:16.159490 containerd[1576]: time="2025-09-09T21:59:16.157268043Z" level=error msg="Failed to destroy network for sandbox \"9793367f1525c0fe2350df4c95199c9bb00e1fa6e5f6f16231135ee3432ea656\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.166027 systemd[1]: run-netns-cni\x2d182b3af8\x2d70e9\x2d0482\x2d17c0\x2daf5332374cd4.mount: Deactivated successfully. 
Sep 9 21:59:16.248213 containerd[1576]: time="2025-09-09T21:59:16.247365160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c66bfd5cf-zxqkv,Uid:8a9661e9-5b18-43f4-b60f-9cb3c54276be,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7b370c1d42bdacbd228adabe38e07b7196225b0cac7f9d27e01b678dbb85a62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.251966 kubelet[2840]: E0909 21:59:16.251867 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7b370c1d42bdacbd228adabe38e07b7196225b0cac7f9d27e01b678dbb85a62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.251966 kubelet[2840]: E0909 21:59:16.252055 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7b370c1d42bdacbd228adabe38e07b7196225b0cac7f9d27e01b678dbb85a62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c66bfd5cf-zxqkv" Sep 9 21:59:16.251966 kubelet[2840]: E0909 21:59:16.252135 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7b370c1d42bdacbd228adabe38e07b7196225b0cac7f9d27e01b678dbb85a62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7c66bfd5cf-zxqkv" Sep 9 21:59:16.260092 kubelet[2840]: E0909 21:59:16.252284 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c66bfd5cf-zxqkv_calico-system(8a9661e9-5b18-43f4-b60f-9cb3c54276be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c66bfd5cf-zxqkv_calico-system(8a9661e9-5b18-43f4-b60f-9cb3c54276be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7b370c1d42bdacbd228adabe38e07b7196225b0cac7f9d27e01b678dbb85a62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c66bfd5cf-zxqkv" podUID="8a9661e9-5b18-43f4-b60f-9cb3c54276be" Sep 9 21:59:16.306309 containerd[1576]: time="2025-09-09T21:59:16.306131081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vg22c,Uid:a09151ad-a9b3-49e6-974f-bb1c62675974,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9793367f1525c0fe2350df4c95199c9bb00e1fa6e5f6f16231135ee3432ea656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.307261 kubelet[2840]: E0909 21:59:16.307189 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9793367f1525c0fe2350df4c95199c9bb00e1fa6e5f6f16231135ee3432ea656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.308061 kubelet[2840]: E0909 21:59:16.307650 2840 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9793367f1525c0fe2350df4c95199c9bb00e1fa6e5f6f16231135ee3432ea656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vg22c" Sep 9 21:59:16.308061 kubelet[2840]: E0909 21:59:16.307694 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9793367f1525c0fe2350df4c95199c9bb00e1fa6e5f6f16231135ee3432ea656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vg22c" Sep 9 21:59:16.308297 kubelet[2840]: E0909 21:59:16.307812 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vg22c_kube-system(a09151ad-a9b3-49e6-974f-bb1c62675974)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vg22c_kube-system(a09151ad-a9b3-49e6-974f-bb1c62675974)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9793367f1525c0fe2350df4c95199c9bb00e1fa6e5f6f16231135ee3432ea656\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vg22c" podUID="a09151ad-a9b3-49e6-974f-bb1c62675974" Sep 9 21:59:16.308865 containerd[1576]: time="2025-09-09T21:59:16.308812830Z" level=error msg="Failed to destroy network for sandbox \"4c42636c7b68ee0f023bb144c1dc003b08e4cf9495377928979148330102b211\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.320717 systemd[1]: run-netns-cni\x2d26a86cbc\x2ddc51\x2d5d9b\x2da071\x2d994e87615e2c.mount: Deactivated successfully. Sep 9 21:59:16.325907 containerd[1576]: time="2025-09-09T21:59:16.325837020Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd55fb676-xxbsj,Uid:d774d884-64e5-4f7e-84a6-5dcae502322a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c42636c7b68ee0f023bb144c1dc003b08e4cf9495377928979148330102b211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.329191 kubelet[2840]: E0909 21:59:16.328860 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c42636c7b68ee0f023bb144c1dc003b08e4cf9495377928979148330102b211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.329191 kubelet[2840]: E0909 21:59:16.328959 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c42636c7b68ee0f023bb144c1dc003b08e4cf9495377928979148330102b211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dd55fb676-xxbsj" Sep 9 21:59:16.329191 kubelet[2840]: E0909 21:59:16.328988 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4c42636c7b68ee0f023bb144c1dc003b08e4cf9495377928979148330102b211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dd55fb676-xxbsj" Sep 9 21:59:16.329431 kubelet[2840]: E0909 21:59:16.329089 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6dd55fb676-xxbsj_calico-system(d774d884-64e5-4f7e-84a6-5dcae502322a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6dd55fb676-xxbsj_calico-system(d774d884-64e5-4f7e-84a6-5dcae502322a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c42636c7b68ee0f023bb144c1dc003b08e4cf9495377928979148330102b211\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dd55fb676-xxbsj" podUID="d774d884-64e5-4f7e-84a6-5dcae502322a" Sep 9 21:59:16.374144 containerd[1576]: time="2025-09-09T21:59:16.372438057Z" level=error msg="Failed to destroy network for sandbox \"0e7182f09cbb48a4e77e1b38ddad7a3548e0a9a818201b3486f9deb52c22f9f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.382204 containerd[1576]: time="2025-09-09T21:59:16.382122984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bjgk4,Uid:e8a89593-6c20-439c-928d-0e86b0f1dbe1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e7182f09cbb48a4e77e1b38ddad7a3548e0a9a818201b3486f9deb52c22f9f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.382872 kubelet[2840]: E0909 21:59:16.382824 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e7182f09cbb48a4e77e1b38ddad7a3548e0a9a818201b3486f9deb52c22f9f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.383488 kubelet[2840]: E0909 21:59:16.383016 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e7182f09cbb48a4e77e1b38ddad7a3548e0a9a818201b3486f9deb52c22f9f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bjgk4" Sep 9 21:59:16.383488 kubelet[2840]: E0909 21:59:16.383056 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e7182f09cbb48a4e77e1b38ddad7a3548e0a9a818201b3486f9deb52c22f9f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bjgk4" Sep 9 21:59:16.383488 kubelet[2840]: E0909 21:59:16.383136 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bjgk4_kube-system(e8a89593-6c20-439c-928d-0e86b0f1dbe1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bjgk4_kube-system(e8a89593-6c20-439c-928d-0e86b0f1dbe1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"0e7182f09cbb48a4e77e1b38ddad7a3548e0a9a818201b3486f9deb52c22f9f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bjgk4" podUID="e8a89593-6c20-439c-928d-0e86b0f1dbe1" Sep 9 21:59:16.398663 containerd[1576]: time="2025-09-09T21:59:16.398584192Z" level=error msg="Failed to destroy network for sandbox \"eb9892b407295e62a742883a850af04970dc87177e6227b8f747d5282b60c291\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.412210 containerd[1576]: time="2025-09-09T21:59:16.412034542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb8bd9d8-2rqmt,Uid:57318330-371c-4f00-bb0e-4ef73267101a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb9892b407295e62a742883a850af04970dc87177e6227b8f747d5282b60c291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.413499 kubelet[2840]: E0909 21:59:16.413455 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb9892b407295e62a742883a850af04970dc87177e6227b8f747d5282b60c291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.413643 kubelet[2840]: E0909 21:59:16.413626 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"eb9892b407295e62a742883a850af04970dc87177e6227b8f747d5282b60c291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85fb8bd9d8-2rqmt" Sep 9 21:59:16.413740 kubelet[2840]: E0909 21:59:16.413712 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb9892b407295e62a742883a850af04970dc87177e6227b8f747d5282b60c291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85fb8bd9d8-2rqmt" Sep 9 21:59:16.413938 kubelet[2840]: E0909 21:59:16.413900 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85fb8bd9d8-2rqmt_calico-apiserver(57318330-371c-4f00-bb0e-4ef73267101a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85fb8bd9d8-2rqmt_calico-apiserver(57318330-371c-4f00-bb0e-4ef73267101a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb9892b407295e62a742883a850af04970dc87177e6227b8f747d5282b60c291\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85fb8bd9d8-2rqmt" podUID="57318330-371c-4f00-bb0e-4ef73267101a" Sep 9 21:59:16.463501 containerd[1576]: time="2025-09-09T21:59:16.463430672Z" level=error msg="Failed to destroy network for sandbox \"fb286e0c753d7f72ea251797277992a59e6ecb17942c358e9d841fd37f57492b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 9 21:59:16.474165 containerd[1576]: time="2025-09-09T21:59:16.474095959Z" level=error msg="Failed to destroy network for sandbox \"6b4963729e45df1098e2fdc1a183ec556852a239f271899c741f15abfaf1ec9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.498292 containerd[1576]: time="2025-09-09T21:59:16.495432152Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-cbmvm,Uid:c9730904-d479-4f03-8945-1eae9818173f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb286e0c753d7f72ea251797277992a59e6ecb17942c358e9d841fd37f57492b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.498641 kubelet[2840]: E0909 21:59:16.496110 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb286e0c753d7f72ea251797277992a59e6ecb17942c358e9d841fd37f57492b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.498641 kubelet[2840]: E0909 21:59:16.496242 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb286e0c753d7f72ea251797277992a59e6ecb17942c358e9d841fd37f57492b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-cbmvm" Sep 9 21:59:16.498641 kubelet[2840]: E0909 
21:59:16.496271 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb286e0c753d7f72ea251797277992a59e6ecb17942c358e9d841fd37f57492b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-cbmvm" Sep 9 21:59:16.498802 kubelet[2840]: E0909 21:59:16.496337 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-cbmvm_calico-system(c9730904-d479-4f03-8945-1eae9818173f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-cbmvm_calico-system(c9730904-d479-4f03-8945-1eae9818173f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb286e0c753d7f72ea251797277992a59e6ecb17942c358e9d841fd37f57492b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-cbmvm" podUID="c9730904-d479-4f03-8945-1eae9818173f" Sep 9 21:59:16.504260 containerd[1576]: time="2025-09-09T21:59:16.504025079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb8bd9d8-gxhlf,Uid:1edad75d-6a90-4301-b98f-fe1e20ca7dba,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b4963729e45df1098e2fdc1a183ec556852a239f271899c741f15abfaf1ec9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.505128 kubelet[2840]: E0909 21:59:16.504818 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"6b4963729e45df1098e2fdc1a183ec556852a239f271899c741f15abfaf1ec9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:16.505128 kubelet[2840]: E0909 21:59:16.504907 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b4963729e45df1098e2fdc1a183ec556852a239f271899c741f15abfaf1ec9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85fb8bd9d8-gxhlf" Sep 9 21:59:16.505128 kubelet[2840]: E0909 21:59:16.504958 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b4963729e45df1098e2fdc1a183ec556852a239f271899c741f15abfaf1ec9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85fb8bd9d8-gxhlf" Sep 9 21:59:16.505310 kubelet[2840]: E0909 21:59:16.505036 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85fb8bd9d8-gxhlf_calico-apiserver(1edad75d-6a90-4301-b98f-fe1e20ca7dba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85fb8bd9d8-gxhlf_calico-apiserver(1edad75d-6a90-4301-b98f-fe1e20ca7dba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b4963729e45df1098e2fdc1a183ec556852a239f271899c741f15abfaf1ec9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85fb8bd9d8-gxhlf" podUID="1edad75d-6a90-4301-b98f-fe1e20ca7dba" Sep 9 21:59:16.718280 systemd[1]: run-netns-cni\x2d171d7b81\x2dd615\x2d2f51\x2d8fa1\x2d7e18dd6d9800.mount: Deactivated successfully. Sep 9 21:59:16.718453 systemd[1]: run-netns-cni\x2dcacf1c88\x2d1ca6\x2dfdd2\x2dd08c\x2df26e00cf9266.mount: Deactivated successfully. Sep 9 21:59:16.718547 systemd[1]: run-netns-cni\x2d051a4aa4\x2d74c4\x2dc6a2\x2d15ac\x2d496696c46807.mount: Deactivated successfully. Sep 9 21:59:16.718642 systemd[1]: run-netns-cni\x2d06e3f1f6\x2d4f9a\x2d28f1\x2d9b1d\x2dfd56c773fc07.mount: Deactivated successfully. Sep 9 21:59:24.879709 update_engine[1559]: I20250909 21:59:24.879043 1559 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 9 21:59:24.879709 update_engine[1559]: I20250909 21:59:24.879128 1559 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 9 21:59:24.879709 update_engine[1559]: I20250909 21:59:24.879436 1559 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 9 21:59:24.901652 update_engine[1559]: I20250909 21:59:24.898260 1559 omaha_request_params.cc:62] Current group set to developer Sep 9 21:59:24.905614 update_engine[1559]: I20250909 21:59:24.903944 1559 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 9 21:59:24.905614 update_engine[1559]: I20250909 21:59:24.905601 1559 update_attempter.cc:643] Scheduling an action processor start. 
Sep 9 21:59:24.906132 update_engine[1559]: I20250909 21:59:24.905649 1559 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 9 21:59:24.906132 update_engine[1559]: I20250909 21:59:24.905723 1559 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 9 21:59:24.907045 update_engine[1559]: I20250909 21:59:24.906893 1559 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 9 21:59:24.907045 update_engine[1559]: I20250909 21:59:24.906917 1559 omaha_request_action.cc:272] Request: Sep 9 21:59:24.907045 update_engine[1559]: [multi-line XML request body not captured in this log extract] Sep 9 21:59:24.907045 update_engine[1559]: I20250909 21:59:24.906926 1559 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 9 21:59:24.925627 locksmithd[1601]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 9 21:59:24.930756 update_engine[1559]: I20250909 21:59:24.928359 1559 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 9 21:59:24.934136 update_engine[1559]: I20250909 21:59:24.932026 1559 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 9 21:59:24.957531 update_engine[1559]: E20250909 21:59:24.957329 1559 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 9 21:59:24.957531 update_engine[1559]: I20250909 21:59:24.957480 1559 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 9 21:59:25.736142 kubelet[2840]: E0909 21:59:25.736071 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:26.731852 kubelet[2840]: E0909 21:59:26.726123 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:26.734691 containerd[1576]: time="2025-09-09T21:59:26.733409830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vg22c,Uid:a09151ad-a9b3-49e6-974f-bb1c62675974,Namespace:kube-system,Attempt:0,}" Sep 9 21:59:27.049265 containerd[1576]: time="2025-09-09T21:59:27.049045502Z" level=error msg="Failed to destroy network for sandbox \"be5256d70a4430635389623df9eec6ac26c929e8eb5e76f0f14a2b038d29239c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:27.055387 containerd[1576]: time="2025-09-09T21:59:27.054914264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vg22c,Uid:a09151ad-a9b3-49e6-974f-bb1c62675974,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be5256d70a4430635389623df9eec6ac26c929e8eb5e76f0f14a2b038d29239c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:27.055586 
kubelet[2840]: E0909 21:59:27.055308 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be5256d70a4430635389623df9eec6ac26c929e8eb5e76f0f14a2b038d29239c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:27.055586 kubelet[2840]: E0909 21:59:27.055422 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be5256d70a4430635389623df9eec6ac26c929e8eb5e76f0f14a2b038d29239c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vg22c" Sep 9 21:59:27.055586 kubelet[2840]: E0909 21:59:27.055463 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be5256d70a4430635389623df9eec6ac26c929e8eb5e76f0f14a2b038d29239c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vg22c" Sep 9 21:59:27.057315 kubelet[2840]: E0909 21:59:27.055580 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vg22c_kube-system(a09151ad-a9b3-49e6-974f-bb1c62675974)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vg22c_kube-system(a09151ad-a9b3-49e6-974f-bb1c62675974)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be5256d70a4430635389623df9eec6ac26c929e8eb5e76f0f14a2b038d29239c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vg22c" podUID="a09151ad-a9b3-49e6-974f-bb1c62675974" Sep 9 21:59:27.056758 systemd[1]: run-netns-cni\x2dab8eee38\x2da6a3\x2d05ba\x2d31e2\x2dce3223f8ccc9.mount: Deactivated successfully. Sep 9 21:59:27.728760 containerd[1576]: time="2025-09-09T21:59:27.728686346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c66bfd5cf-zxqkv,Uid:8a9661e9-5b18-43f4-b60f-9cb3c54276be,Namespace:calico-system,Attempt:0,}" Sep 9 21:59:28.048205 containerd[1576]: time="2025-09-09T21:59:28.048027842Z" level=error msg="Failed to destroy network for sandbox \"0054ffb45787f77badfe1ea113b8e82437b6a3ec4bfb660e97db44956dc0eafe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:28.057843 containerd[1576]: time="2025-09-09T21:59:28.056936073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c66bfd5cf-zxqkv,Uid:8a9661e9-5b18-43f4-b60f-9cb3c54276be,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0054ffb45787f77badfe1ea113b8e82437b6a3ec4bfb660e97db44956dc0eafe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:28.059812 kubelet[2840]: E0909 21:59:28.058539 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0054ffb45787f77badfe1ea113b8e82437b6a3ec4bfb660e97db44956dc0eafe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 9 21:59:28.059812 kubelet[2840]: E0909 21:59:28.058630 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0054ffb45787f77badfe1ea113b8e82437b6a3ec4bfb660e97db44956dc0eafe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c66bfd5cf-zxqkv" Sep 9 21:59:28.059812 kubelet[2840]: E0909 21:59:28.058662 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0054ffb45787f77badfe1ea113b8e82437b6a3ec4bfb660e97db44956dc0eafe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c66bfd5cf-zxqkv" Sep 9 21:59:28.060339 kubelet[2840]: E0909 21:59:28.058732 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c66bfd5cf-zxqkv_calico-system(8a9661e9-5b18-43f4-b60f-9cb3c54276be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c66bfd5cf-zxqkv_calico-system(8a9661e9-5b18-43f4-b60f-9cb3c54276be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0054ffb45787f77badfe1ea113b8e82437b6a3ec4bfb660e97db44956dc0eafe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c66bfd5cf-zxqkv" podUID="8a9661e9-5b18-43f4-b60f-9cb3c54276be" Sep 9 21:59:28.060893 systemd[1]: run-netns-cni\x2d72a9c5e9\x2d39bd\x2da4fa\x2d2dc9\x2d823feebd3479.mount: 
Deactivated successfully. Sep 9 21:59:28.727799 containerd[1576]: time="2025-09-09T21:59:28.727134460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nzhdk,Uid:a3597f1e-72d0-40eb-b831-78b87602d9ab,Namespace:calico-system,Attempt:0,}" Sep 9 21:59:28.735680 kubelet[2840]: E0909 21:59:28.732174 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:28.756653 containerd[1576]: time="2025-09-09T21:59:28.734964417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bjgk4,Uid:e8a89593-6c20-439c-928d-0e86b0f1dbe1,Namespace:kube-system,Attempt:0,}" Sep 9 21:59:29.240382 containerd[1576]: time="2025-09-09T21:59:29.240036574Z" level=error msg="Failed to destroy network for sandbox \"9f77781a1b6de46da81f4918e3c3ff8f74e965cc83303997f281a6ff3353e87c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:29.271273 systemd[1]: run-netns-cni\x2d1017c5e3\x2d160a\x2d0776\x2d4ebb\x2db42d038ab842.mount: Deactivated successfully. Sep 9 21:59:29.287524 containerd[1576]: time="2025-09-09T21:59:29.284261618Z" level=error msg="Failed to destroy network for sandbox \"ebe56036750ad2dc60af78457ecb5e09ed9c846158c95c8c9fc68039ac78c717\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:29.293807 systemd[1]: run-netns-cni\x2dbcfc903f\x2d2404\x2d7e74\x2de0d9\x2d13a744429a1e.mount: Deactivated successfully. 
Sep 9 21:59:29.321872 containerd[1576]: time="2025-09-09T21:59:29.320349094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nzhdk,Uid:a3597f1e-72d0-40eb-b831-78b87602d9ab,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f77781a1b6de46da81f4918e3c3ff8f74e965cc83303997f281a6ff3353e87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:29.322186 kubelet[2840]: E0909 21:59:29.320877 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f77781a1b6de46da81f4918e3c3ff8f74e965cc83303997f281a6ff3353e87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:29.322186 kubelet[2840]: E0909 21:59:29.320990 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f77781a1b6de46da81f4918e3c3ff8f74e965cc83303997f281a6ff3353e87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nzhdk" Sep 9 21:59:29.322186 kubelet[2840]: E0909 21:59:29.321046 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f77781a1b6de46da81f4918e3c3ff8f74e965cc83303997f281a6ff3353e87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nzhdk" Sep 9 
21:59:29.322869 kubelet[2840]: E0909 21:59:29.321139 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nzhdk_calico-system(a3597f1e-72d0-40eb-b831-78b87602d9ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nzhdk_calico-system(a3597f1e-72d0-40eb-b831-78b87602d9ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f77781a1b6de46da81f4918e3c3ff8f74e965cc83303997f281a6ff3353e87c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nzhdk" podUID="a3597f1e-72d0-40eb-b831-78b87602d9ab" Sep 9 21:59:29.327745 containerd[1576]: time="2025-09-09T21:59:29.327477620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bjgk4,Uid:e8a89593-6c20-439c-928d-0e86b0f1dbe1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe56036750ad2dc60af78457ecb5e09ed9c846158c95c8c9fc68039ac78c717\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:29.328157 kubelet[2840]: E0909 21:59:29.328105 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe56036750ad2dc60af78457ecb5e09ed9c846158c95c8c9fc68039ac78c717\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:29.328366 kubelet[2840]: E0909 21:59:29.328238 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"ebe56036750ad2dc60af78457ecb5e09ed9c846158c95c8c9fc68039ac78c717\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bjgk4" Sep 9 21:59:29.330315 kubelet[2840]: E0909 21:59:29.328317 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe56036750ad2dc60af78457ecb5e09ed9c846158c95c8c9fc68039ac78c717\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bjgk4" Sep 9 21:59:29.330684 kubelet[2840]: E0909 21:59:29.330618 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bjgk4_kube-system(e8a89593-6c20-439c-928d-0e86b0f1dbe1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bjgk4_kube-system(e8a89593-6c20-439c-928d-0e86b0f1dbe1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebe56036750ad2dc60af78457ecb5e09ed9c846158c95c8c9fc68039ac78c717\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bjgk4" podUID="e8a89593-6c20-439c-928d-0e86b0f1dbe1" Sep 9 21:59:29.742409 containerd[1576]: time="2025-09-09T21:59:29.739443409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd55fb676-xxbsj,Uid:d774d884-64e5-4f7e-84a6-5dcae502322a,Namespace:calico-system,Attempt:0,}" Sep 9 21:59:30.126419 containerd[1576]: time="2025-09-09T21:59:30.123509535Z" level=error msg="Failed to destroy network for sandbox 
\"185d47f92ce888a680427e12be98a88b91bb702d1e20b9e3c13aaf160fd2b7d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:30.128720 systemd[1]: run-netns-cni\x2d8abe1b69\x2dadbe\x2dd23b\x2da7d5\x2d642ffd197645.mount: Deactivated successfully. Sep 9 21:59:30.446235 containerd[1576]: time="2025-09-09T21:59:30.443838783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd55fb676-xxbsj,Uid:d774d884-64e5-4f7e-84a6-5dcae502322a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"185d47f92ce888a680427e12be98a88b91bb702d1e20b9e3c13aaf160fd2b7d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:30.453838 kubelet[2840]: E0909 21:59:30.447568 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"185d47f92ce888a680427e12be98a88b91bb702d1e20b9e3c13aaf160fd2b7d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:30.453838 kubelet[2840]: E0909 21:59:30.447676 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"185d47f92ce888a680427e12be98a88b91bb702d1e20b9e3c13aaf160fd2b7d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dd55fb676-xxbsj" Sep 9 21:59:30.453838 kubelet[2840]: E0909 21:59:30.451018 2840 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"185d47f92ce888a680427e12be98a88b91bb702d1e20b9e3c13aaf160fd2b7d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dd55fb676-xxbsj" Sep 9 21:59:30.454523 kubelet[2840]: E0909 21:59:30.451193 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6dd55fb676-xxbsj_calico-system(d774d884-64e5-4f7e-84a6-5dcae502322a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6dd55fb676-xxbsj_calico-system(d774d884-64e5-4f7e-84a6-5dcae502322a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"185d47f92ce888a680427e12be98a88b91bb702d1e20b9e3c13aaf160fd2b7d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dd55fb676-xxbsj" podUID="d774d884-64e5-4f7e-84a6-5dcae502322a" Sep 9 21:59:30.733810 containerd[1576]: time="2025-09-09T21:59:30.733537374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb8bd9d8-gxhlf,Uid:1edad75d-6a90-4301-b98f-fe1e20ca7dba,Namespace:calico-apiserver,Attempt:0,}" Sep 9 21:59:31.735424 kubelet[2840]: E0909 21:59:31.734370 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:31.736730 containerd[1576]: time="2025-09-09T21:59:31.736677463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb8bd9d8-2rqmt,Uid:57318330-371c-4f00-bb0e-4ef73267101a,Namespace:calico-apiserver,Attempt:0,}" Sep 9 21:59:31.744813 containerd[1576]: 
time="2025-09-09T21:59:31.737332868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-cbmvm,Uid:c9730904-d479-4f03-8945-1eae9818173f,Namespace:calico-system,Attempt:0,}" Sep 9 21:59:31.884799 containerd[1576]: time="2025-09-09T21:59:31.882815265Z" level=error msg="Failed to destroy network for sandbox \"f1532619b4de5646cfb6bc786e3e6b11fba92b588135fa48f097cf369cceacc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:31.898635 systemd[1]: run-netns-cni\x2d85b1d443\x2d2843\x2d484c\x2dfcdc\x2d3a485de67eb5.mount: Deactivated successfully. Sep 9 21:59:31.961786 containerd[1576]: time="2025-09-09T21:59:31.961451593Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb8bd9d8-gxhlf,Uid:1edad75d-6a90-4301-b98f-fe1e20ca7dba,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1532619b4de5646cfb6bc786e3e6b11fba92b588135fa48f097cf369cceacc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:31.964014 kubelet[2840]: E0909 21:59:31.963278 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1532619b4de5646cfb6bc786e3e6b11fba92b588135fa48f097cf369cceacc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:31.964014 kubelet[2840]: E0909 21:59:31.963396 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f1532619b4de5646cfb6bc786e3e6b11fba92b588135fa48f097cf369cceacc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85fb8bd9d8-gxhlf" Sep 9 21:59:31.964014 kubelet[2840]: E0909 21:59:31.963427 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1532619b4de5646cfb6bc786e3e6b11fba92b588135fa48f097cf369cceacc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85fb8bd9d8-gxhlf" Sep 9 21:59:31.964219 kubelet[2840]: E0909 21:59:31.963505 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85fb8bd9d8-gxhlf_calico-apiserver(1edad75d-6a90-4301-b98f-fe1e20ca7dba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85fb8bd9d8-gxhlf_calico-apiserver(1edad75d-6a90-4301-b98f-fe1e20ca7dba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1532619b4de5646cfb6bc786e3e6b11fba92b588135fa48f097cf369cceacc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85fb8bd9d8-gxhlf" podUID="1edad75d-6a90-4301-b98f-fe1e20ca7dba" Sep 9 21:59:32.093282 containerd[1576]: time="2025-09-09T21:59:32.092320652Z" level=error msg="Failed to destroy network for sandbox \"82f3968702a2528c976d2a4d44cf4fdc03b0b8ba4d1d7052039c08477cb92ba5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 9 21:59:32.099688 systemd[1]: run-netns-cni\x2d9c79942b\x2d1b67\x2d372c\x2d1145\x2d68c4953972fa.mount: Deactivated successfully. Sep 9 21:59:32.122053 containerd[1576]: time="2025-09-09T21:59:32.119448415Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-cbmvm,Uid:c9730904-d479-4f03-8945-1eae9818173f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82f3968702a2528c976d2a4d44cf4fdc03b0b8ba4d1d7052039c08477cb92ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:32.122310 kubelet[2840]: E0909 21:59:32.119973 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82f3968702a2528c976d2a4d44cf4fdc03b0b8ba4d1d7052039c08477cb92ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:32.122310 kubelet[2840]: E0909 21:59:32.120067 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82f3968702a2528c976d2a4d44cf4fdc03b0b8ba4d1d7052039c08477cb92ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-cbmvm" Sep 9 21:59:32.122310 kubelet[2840]: E0909 21:59:32.120098 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82f3968702a2528c976d2a4d44cf4fdc03b0b8ba4d1d7052039c08477cb92ba5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-cbmvm" Sep 9 21:59:32.122457 kubelet[2840]: E0909 21:59:32.120315 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-cbmvm_calico-system(c9730904-d479-4f03-8945-1eae9818173f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-cbmvm_calico-system(c9730904-d479-4f03-8945-1eae9818173f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82f3968702a2528c976d2a4d44cf4fdc03b0b8ba4d1d7052039c08477cb92ba5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-cbmvm" podUID="c9730904-d479-4f03-8945-1eae9818173f" Sep 9 21:59:32.151470 containerd[1576]: time="2025-09-09T21:59:32.149270325Z" level=error msg="Failed to destroy network for sandbox \"01d7b08bd405d77bb761eeaedb45075a4326584cc35366c81b2f11b32fd57005\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:32.187506 containerd[1576]: time="2025-09-09T21:59:32.187385414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb8bd9d8-2rqmt,Uid:57318330-371c-4f00-bb0e-4ef73267101a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d7b08bd405d77bb761eeaedb45075a4326584cc35366c81b2f11b32fd57005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:32.187948 
kubelet[2840]: E0909 21:59:32.187858 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d7b08bd405d77bb761eeaedb45075a4326584cc35366c81b2f11b32fd57005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:32.188062 kubelet[2840]: E0909 21:59:32.187980 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d7b08bd405d77bb761eeaedb45075a4326584cc35366c81b2f11b32fd57005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85fb8bd9d8-2rqmt" Sep 9 21:59:32.188062 kubelet[2840]: E0909 21:59:32.188022 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d7b08bd405d77bb761eeaedb45075a4326584cc35366c81b2f11b32fd57005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85fb8bd9d8-2rqmt" Sep 9 21:59:32.188209 kubelet[2840]: E0909 21:59:32.188105 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85fb8bd9d8-2rqmt_calico-apiserver(57318330-371c-4f00-bb0e-4ef73267101a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85fb8bd9d8-2rqmt_calico-apiserver(57318330-371c-4f00-bb0e-4ef73267101a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01d7b08bd405d77bb761eeaedb45075a4326584cc35366c81b2f11b32fd57005\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85fb8bd9d8-2rqmt" podUID="57318330-371c-4f00-bb0e-4ef73267101a" Sep 9 21:59:32.438847 systemd[1]: run-netns-cni\x2dcd34f754\x2dc007\x2df9bc\x2dab88\x2d91188d6ae314.mount: Deactivated successfully. Sep 9 21:59:34.877912 update_engine[1559]: I20250909 21:59:34.877167 1559 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 9 21:59:34.877912 update_engine[1559]: I20250909 21:59:34.877292 1559 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 9 21:59:34.877912 update_engine[1559]: I20250909 21:59:34.877703 1559 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 9 21:59:34.905006 update_engine[1559]: E20250909 21:59:34.904728 1559 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 9 21:59:34.905006 update_engine[1559]: I20250909 21:59:34.904949 1559 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 9 21:59:38.656338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2312886840.mount: Deactivated successfully. 
Sep 9 21:59:38.729981 kubelet[2840]: E0909 21:59:38.725998 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:39.148531 containerd[1576]: time="2025-09-09T21:59:39.147871435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:59:39.280545 containerd[1576]: time="2025-09-09T21:59:39.204691865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 21:59:39.280545 containerd[1576]: time="2025-09-09T21:59:39.274579376Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:59:39.317734 containerd[1576]: time="2025-09-09T21:59:39.315087273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:59:39.326653 containerd[1576]: time="2025-09-09T21:59:39.325899153Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 23.929884087s" Sep 9 21:59:39.326653 containerd[1576]: time="2025-09-09T21:59:39.325976770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 21:59:39.523656 containerd[1576]: time="2025-09-09T21:59:39.517069555Z" level=info msg="CreateContainer within sandbox 
\"7befd3648a8aae4a7ad25dc823bfc8c28a400ab01fff3876636a55b2be3cba63\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 21:59:39.719957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount996815763.mount: Deactivated successfully. Sep 9 21:59:39.751510 containerd[1576]: time="2025-09-09T21:59:39.749743128Z" level=info msg="Container 411d97edd1dbe41be78c267c65930cb6352ef988764faa76cce1f84f6dc9ce68: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:59:39.901861 containerd[1576]: time="2025-09-09T21:59:39.893953796Z" level=info msg="CreateContainer within sandbox \"7befd3648a8aae4a7ad25dc823bfc8c28a400ab01fff3876636a55b2be3cba63\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"411d97edd1dbe41be78c267c65930cb6352ef988764faa76cce1f84f6dc9ce68\"" Sep 9 21:59:39.901861 containerd[1576]: time="2025-09-09T21:59:39.897841690Z" level=info msg="StartContainer for \"411d97edd1dbe41be78c267c65930cb6352ef988764faa76cce1f84f6dc9ce68\"" Sep 9 21:59:39.911264 containerd[1576]: time="2025-09-09T21:59:39.911145906Z" level=info msg="connecting to shim 411d97edd1dbe41be78c267c65930cb6352ef988764faa76cce1f84f6dc9ce68" address="unix:///run/containerd/s/276e3ac4e13059c32a222920c318e3b159ba13e787242ad40db8b68e2e69c7e4" protocol=ttrpc version=3 Sep 9 21:59:40.057253 systemd[1]: Started cri-containerd-411d97edd1dbe41be78c267c65930cb6352ef988764faa76cce1f84f6dc9ce68.scope - libcontainer container 411d97edd1dbe41be78c267c65930cb6352ef988764faa76cce1f84f6dc9ce68. 
Sep 9 21:59:40.732807 kubelet[2840]: E0909 21:59:40.724581 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:40.732807 kubelet[2840]: E0909 21:59:40.725143 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:40.733815 containerd[1576]: time="2025-09-09T21:59:40.728891568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bjgk4,Uid:e8a89593-6c20-439c-928d-0e86b0f1dbe1,Namespace:kube-system,Attempt:0,}" Sep 9 21:59:40.748960 containerd[1576]: time="2025-09-09T21:59:40.740913267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vg22c,Uid:a09151ad-a9b3-49e6-974f-bb1c62675974,Namespace:kube-system,Attempt:0,}" Sep 9 21:59:40.748960 containerd[1576]: time="2025-09-09T21:59:40.741404773Z" level=info msg="StartContainer for \"411d97edd1dbe41be78c267c65930cb6352ef988764faa76cce1f84f6dc9ce68\" returns successfully" Sep 9 21:59:41.229062 containerd[1576]: time="2025-09-09T21:59:41.228921915Z" level=error msg="Failed to destroy network for sandbox \"3dd9f738dd233dd192996f5e20d1be07ea768b24bd7fab7fdbef95530f692633\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:41.242034 systemd[1]: run-netns-cni\x2d4741af6b\x2d476c\x2d0c5c\x2ddc65\x2d1b09b27b2597.mount: Deactivated successfully. 
Sep 9 21:59:41.248076 containerd[1576]: time="2025-09-09T21:59:41.247031081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vg22c,Uid:a09151ad-a9b3-49e6-974f-bb1c62675974,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dd9f738dd233dd192996f5e20d1be07ea768b24bd7fab7fdbef95530f692633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:41.248076 containerd[1576]: time="2025-09-09T21:59:41.247946506Z" level=error msg="Failed to destroy network for sandbox \"dc1d7aa61296a91039c2dedcf03640ace604491084bb3e3d5dd14c2fe6f2aed9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:41.249049 kubelet[2840]: E0909 21:59:41.247648 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dd9f738dd233dd192996f5e20d1be07ea768b24bd7fab7fdbef95530f692633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:41.249049 kubelet[2840]: E0909 21:59:41.248858 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dd9f738dd233dd192996f5e20d1be07ea768b24bd7fab7fdbef95530f692633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vg22c" Sep 9 21:59:41.252294 systemd[1]: 
run-netns-cni\x2dfbc1376d\x2d8853\x2db12d\x2d20af\x2da29ff98a667b.mount: Deactivated successfully. Sep 9 21:59:41.254366 kubelet[2840]: E0909 21:59:41.253456 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dd9f738dd233dd192996f5e20d1be07ea768b24bd7fab7fdbef95530f692633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vg22c" Sep 9 21:59:41.254366 kubelet[2840]: E0909 21:59:41.253630 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vg22c_kube-system(a09151ad-a9b3-49e6-974f-bb1c62675974)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vg22c_kube-system(a09151ad-a9b3-49e6-974f-bb1c62675974)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3dd9f738dd233dd192996f5e20d1be07ea768b24bd7fab7fdbef95530f692633\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vg22c" podUID="a09151ad-a9b3-49e6-974f-bb1c62675974" Sep 9 21:59:41.266086 containerd[1576]: time="2025-09-09T21:59:41.265915989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bjgk4,Uid:e8a89593-6c20-439c-928d-0e86b0f1dbe1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc1d7aa61296a91039c2dedcf03640ace604491084bb3e3d5dd14c2fe6f2aed9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:41.266300 kubelet[2840]: 
E0909 21:59:41.266227 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc1d7aa61296a91039c2dedcf03640ace604491084bb3e3d5dd14c2fe6f2aed9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:59:41.266368 kubelet[2840]: E0909 21:59:41.266300 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc1d7aa61296a91039c2dedcf03640ace604491084bb3e3d5dd14c2fe6f2aed9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bjgk4" Sep 9 21:59:41.266368 kubelet[2840]: E0909 21:59:41.266327 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc1d7aa61296a91039c2dedcf03640ace604491084bb3e3d5dd14c2fe6f2aed9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bjgk4" Sep 9 21:59:41.266556 kubelet[2840]: E0909 21:59:41.266391 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bjgk4_kube-system(e8a89593-6c20-439c-928d-0e86b0f1dbe1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bjgk4_kube-system(e8a89593-6c20-439c-928d-0e86b0f1dbe1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc1d7aa61296a91039c2dedcf03640ace604491084bb3e3d5dd14c2fe6f2aed9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bjgk4" podUID="e8a89593-6c20-439c-928d-0e86b0f1dbe1" Sep 9 21:59:41.313106 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 21:59:41.314222 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 9 21:59:41.727948 containerd[1576]: time="2025-09-09T21:59:41.727361379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nzhdk,Uid:a3597f1e-72d0-40eb-b831-78b87602d9ab,Namespace:calico-system,Attempt:0,}" Sep 9 21:59:41.803086 kubelet[2840]: I0909 21:59:41.796865 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wvsbv" podStartSLOduration=4.22544498 podStartE2EDuration="56.796823199s" podCreationTimestamp="2025-09-09 21:58:45 +0000 UTC" firstStartedPulling="2025-09-09 21:58:46.756508399 +0000 UTC m=+37.384361493" lastFinishedPulling="2025-09-09 21:59:39.327886618 +0000 UTC m=+89.955739712" observedRunningTime="2025-09-09 21:59:41.795758533 +0000 UTC m=+92.423611637" watchObservedRunningTime="2025-09-09 21:59:41.796823199 +0000 UTC m=+92.424676293" Sep 9 21:59:41.992080 kubelet[2840]: I0909 21:59:41.984911 2840 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt7fw\" (UniqueName: \"kubernetes.io/projected/d774d884-64e5-4f7e-84a6-5dcae502322a-kube-api-access-nt7fw\") pod \"d774d884-64e5-4f7e-84a6-5dcae502322a\" (UID: \"d774d884-64e5-4f7e-84a6-5dcae502322a\") " Sep 9 21:59:41.992080 kubelet[2840]: I0909 21:59:41.984996 2840 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d774d884-64e5-4f7e-84a6-5dcae502322a-whisker-backend-key-pair\") pod \"d774d884-64e5-4f7e-84a6-5dcae502322a\" (UID: \"d774d884-64e5-4f7e-84a6-5dcae502322a\") " Sep 9 21:59:41.992080 
kubelet[2840]: I0909 21:59:41.985029 2840 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d774d884-64e5-4f7e-84a6-5dcae502322a-whisker-ca-bundle\") pod \"d774d884-64e5-4f7e-84a6-5dcae502322a\" (UID: \"d774d884-64e5-4f7e-84a6-5dcae502322a\") " Sep 9 21:59:41.992080 kubelet[2840]: I0909 21:59:41.985656 2840 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d774d884-64e5-4f7e-84a6-5dcae502322a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d774d884-64e5-4f7e-84a6-5dcae502322a" (UID: "d774d884-64e5-4f7e-84a6-5dcae502322a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 21:59:42.012824 systemd[1]: var-lib-kubelet-pods-d774d884\x2d64e5\x2d4f7e\x2d84a6\x2d5dcae502322a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 21:59:42.027086 systemd[1]: var-lib-kubelet-pods-d774d884\x2d64e5\x2d4f7e\x2d84a6\x2d5dcae502322a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnt7fw.mount: Deactivated successfully. Sep 9 21:59:42.027936 kubelet[2840]: I0909 21:59:42.027631 2840 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d774d884-64e5-4f7e-84a6-5dcae502322a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d774d884-64e5-4f7e-84a6-5dcae502322a" (UID: "d774d884-64e5-4f7e-84a6-5dcae502322a"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 21:59:42.041094 kubelet[2840]: I0909 21:59:42.036269 2840 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d774d884-64e5-4f7e-84a6-5dcae502322a-kube-api-access-nt7fw" (OuterVolumeSpecName: "kube-api-access-nt7fw") pod "d774d884-64e5-4f7e-84a6-5dcae502322a" (UID: "d774d884-64e5-4f7e-84a6-5dcae502322a"). InnerVolumeSpecName "kube-api-access-nt7fw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 21:59:42.087658 kubelet[2840]: I0909 21:59:42.087229 2840 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nt7fw\" (UniqueName: \"kubernetes.io/projected/d774d884-64e5-4f7e-84a6-5dcae502322a-kube-api-access-nt7fw\") on node \"localhost\" DevicePath \"\"" Sep 9 21:59:42.087658 kubelet[2840]: I0909 21:59:42.087320 2840 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d774d884-64e5-4f7e-84a6-5dcae502322a-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 21:59:42.087658 kubelet[2840]: I0909 21:59:42.087379 2840 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d774d884-64e5-4f7e-84a6-5dcae502322a-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 21:59:42.384057 containerd[1576]: time="2025-09-09T21:59:42.383416726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"411d97edd1dbe41be78c267c65930cb6352ef988764faa76cce1f84f6dc9ce68\" id:\"227a466afd7878c5728e974410084d7945023e97c1dd99d4ccc6a572c673f3d7\" pid:4346 exit_status:1 exited_at:{seconds:1757455182 nanos:375338127}" Sep 9 21:59:42.729500 containerd[1576]: time="2025-09-09T21:59:42.728420602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb8bd9d8-2rqmt,Uid:57318330-371c-4f00-bb0e-4ef73267101a,Namespace:calico-apiserver,Attempt:0,}" Sep 9 21:59:42.729500 
containerd[1576]: time="2025-09-09T21:59:42.729426185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c66bfd5cf-zxqkv,Uid:8a9661e9-5b18-43f4-b60f-9cb3c54276be,Namespace:calico-system,Attempt:0,}" Sep 9 21:59:42.778393 systemd[1]: Removed slice kubepods-besteffort-podd774d884_64e5_4f7e_84a6_5dcae502322a.slice - libcontainer container kubepods-besteffort-podd774d884_64e5_4f7e_84a6_5dcae502322a.slice. Sep 9 21:59:43.113503 containerd[1576]: time="2025-09-09T21:59:43.112231727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"411d97edd1dbe41be78c267c65930cb6352ef988764faa76cce1f84f6dc9ce68\" id:\"ac9973071bfaa5634ea55be7d6aec8ed44bb2ec05b4a741ea70cddcdf4dfd00e\" pid:4393 exit_status:1 exited_at:{seconds:1757455183 nanos:96818793}" Sep 9 21:59:43.406026 systemd[1]: Created slice kubepods-besteffort-pod1aea1796_6a61_4c7c_ab8e_cb07f5bfff91.slice - libcontainer container kubepods-besteffort-pod1aea1796_6a61_4c7c_ab8e_cb07f5bfff91.slice. Sep 9 21:59:43.434370 kubelet[2840]: I0909 21:59:43.434293 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1aea1796-6a61-4c7c-ab8e-cb07f5bfff91-whisker-ca-bundle\") pod \"whisker-764bbbbb79-jz7jk\" (UID: \"1aea1796-6a61-4c7c-ab8e-cb07f5bfff91\") " pod="calico-system/whisker-764bbbbb79-jz7jk" Sep 9 21:59:43.435131 kubelet[2840]: I0909 21:59:43.434406 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1aea1796-6a61-4c7c-ab8e-cb07f5bfff91-whisker-backend-key-pair\") pod \"whisker-764bbbbb79-jz7jk\" (UID: \"1aea1796-6a61-4c7c-ab8e-cb07f5bfff91\") " pod="calico-system/whisker-764bbbbb79-jz7jk" Sep 9 21:59:43.435131 kubelet[2840]: I0909 21:59:43.434488 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mmpvh\" (UniqueName: \"kubernetes.io/projected/1aea1796-6a61-4c7c-ab8e-cb07f5bfff91-kube-api-access-mmpvh\") pod \"whisker-764bbbbb79-jz7jk\" (UID: \"1aea1796-6a61-4c7c-ab8e-cb07f5bfff91\") " pod="calico-system/whisker-764bbbbb79-jz7jk" Sep 9 21:59:43.758335 containerd[1576]: time="2025-09-09T21:59:43.758262717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-764bbbbb79-jz7jk,Uid:1aea1796-6a61-4c7c-ab8e-cb07f5bfff91,Namespace:calico-system,Attempt:0,}" Sep 9 21:59:43.766800 kubelet[2840]: I0909 21:59:43.766178 2840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d774d884-64e5-4f7e-84a6-5dcae502322a" path="/var/lib/kubelet/pods/d774d884-64e5-4f7e-84a6-5dcae502322a/volumes" Sep 9 21:59:44.100601 systemd-networkd[1478]: caliacb11718bdd: Link UP Sep 9 21:59:44.108409 systemd-networkd[1478]: caliacb11718bdd: Gained carrier Sep 9 21:59:44.246422 containerd[1576]: 2025-09-09 21:59:41.870 [INFO][4317] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 21:59:44.246422 containerd[1576]: 2025-09-09 21:59:42.237 [INFO][4317] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nzhdk-eth0 csi-node-driver- calico-system a3597f1e-72d0-40eb-b831-78b87602d9ab 831 0 2025-09-09 21:58:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-nzhdk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliacb11718bdd [] [] }} ContainerID="8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" Namespace="calico-system" Pod="csi-node-driver-nzhdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--nzhdk-" Sep 9 
21:59:44.246422 containerd[1576]: 2025-09-09 21:59:42.237 [INFO][4317] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" Namespace="calico-system" Pod="csi-node-driver-nzhdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--nzhdk-eth0" Sep 9 21:59:44.246422 containerd[1576]: 2025-09-09 21:59:43.523 [INFO][4362] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" HandleID="k8s-pod-network.8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" Workload="localhost-k8s-csi--node--driver--nzhdk-eth0" Sep 9 21:59:44.253285 containerd[1576]: 2025-09-09 21:59:43.529 [INFO][4362] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" HandleID="k8s-pod-network.8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" Workload="localhost-k8s-csi--node--driver--nzhdk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00051d4c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nzhdk", "timestamp":"2025-09-09 21:59:43.523442854 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 21:59:44.253285 containerd[1576]: 2025-09-09 21:59:43.529 [INFO][4362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 21:59:44.253285 containerd[1576]: 2025-09-09 21:59:43.530 [INFO][4362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 21:59:44.253285 containerd[1576]: 2025-09-09 21:59:43.531 [INFO][4362] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 21:59:44.253285 containerd[1576]: 2025-09-09 21:59:43.625 [INFO][4362] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" host="localhost" Sep 9 21:59:44.253285 containerd[1576]: 2025-09-09 21:59:43.745 [INFO][4362] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 21:59:44.253285 containerd[1576]: 2025-09-09 21:59:43.785 [INFO][4362] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 21:59:44.253285 containerd[1576]: 2025-09-09 21:59:43.807 [INFO][4362] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 21:59:44.253285 containerd[1576]: 2025-09-09 21:59:43.828 [INFO][4362] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 21:59:44.253285 containerd[1576]: 2025-09-09 21:59:43.829 [INFO][4362] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" host="localhost" Sep 9 21:59:44.253827 containerd[1576]: 2025-09-09 21:59:43.851 [INFO][4362] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e Sep 9 21:59:44.253827 containerd[1576]: 2025-09-09 21:59:43.900 [INFO][4362] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" host="localhost" Sep 9 21:59:44.253827 containerd[1576]: 2025-09-09 21:59:43.955 [INFO][4362] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" host="localhost" Sep 9 21:59:44.253827 containerd[1576]: 2025-09-09 21:59:43.956 [INFO][4362] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" host="localhost" Sep 9 21:59:44.253827 containerd[1576]: 2025-09-09 21:59:43.957 [INFO][4362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 21:59:44.253827 containerd[1576]: 2025-09-09 21:59:43.957 [INFO][4362] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" HandleID="k8s-pod-network.8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" Workload="localhost-k8s-csi--node--driver--nzhdk-eth0" Sep 9 21:59:44.254148 containerd[1576]: 2025-09-09 21:59:44.003 [INFO][4317] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" Namespace="calico-system" Pod="csi-node-driver-nzhdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--nzhdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nzhdk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a3597f1e-72d0-40eb-b831-78b87602d9ab", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nzhdk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliacb11718bdd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:59:44.254284 containerd[1576]: 2025-09-09 21:59:44.004 [INFO][4317] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" Namespace="calico-system" Pod="csi-node-driver-nzhdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--nzhdk-eth0" Sep 9 21:59:44.254284 containerd[1576]: 2025-09-09 21:59:44.004 [INFO][4317] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliacb11718bdd ContainerID="8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" Namespace="calico-system" Pod="csi-node-driver-nzhdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--nzhdk-eth0" Sep 9 21:59:44.254284 containerd[1576]: 2025-09-09 21:59:44.108 [INFO][4317] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" Namespace="calico-system" Pod="csi-node-driver-nzhdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--nzhdk-eth0" Sep 9 21:59:44.254460 containerd[1576]: 2025-09-09 21:59:44.126 [INFO][4317] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" 
Namespace="calico-system" Pod="csi-node-driver-nzhdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--nzhdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nzhdk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a3597f1e-72d0-40eb-b831-78b87602d9ab", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e", Pod:"csi-node-driver-nzhdk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliacb11718bdd", MAC:"7e:31:da:9a:87:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:59:44.254560 containerd[1576]: 2025-09-09 21:59:44.203 [INFO][4317] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" Namespace="calico-system" Pod="csi-node-driver-nzhdk" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--nzhdk-eth0" Sep 9 21:59:44.349835 systemd-networkd[1478]: calie3565b33384: Link UP Sep 9 21:59:44.361113 systemd-networkd[1478]: calie3565b33384: Gained carrier Sep 9 21:59:44.487700 containerd[1576]: 2025-09-09 21:59:43.244 [INFO][4419] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 21:59:44.487700 containerd[1576]: 2025-09-09 21:59:43.308 [INFO][4419] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7c66bfd5cf--zxqkv-eth0 calico-kube-controllers-7c66bfd5cf- calico-system 8a9661e9-5b18-43f4-b60f-9cb3c54276be 978 0 2025-09-09 21:58:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c66bfd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7c66bfd5cf-zxqkv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie3565b33384 [] [] }} ContainerID="9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" Namespace="calico-system" Pod="calico-kube-controllers-7c66bfd5cf-zxqkv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c66bfd5cf--zxqkv-" Sep 9 21:59:44.487700 containerd[1576]: 2025-09-09 21:59:43.309 [INFO][4419] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" Namespace="calico-system" Pod="calico-kube-controllers-7c66bfd5cf-zxqkv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c66bfd5cf--zxqkv-eth0" Sep 9 21:59:44.487700 containerd[1576]: 2025-09-09 21:59:43.524 [INFO][4438] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" 
HandleID="k8s-pod-network.9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" Workload="localhost-k8s-calico--kube--controllers--7c66bfd5cf--zxqkv-eth0" Sep 9 21:59:44.488166 containerd[1576]: 2025-09-09 21:59:43.530 [INFO][4438] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" HandleID="k8s-pod-network.9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" Workload="localhost-k8s-calico--kube--controllers--7c66bfd5cf--zxqkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f770), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7c66bfd5cf-zxqkv", "timestamp":"2025-09-09 21:59:43.523942967 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 21:59:44.488166 containerd[1576]: 2025-09-09 21:59:43.530 [INFO][4438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 21:59:44.488166 containerd[1576]: 2025-09-09 21:59:43.959 [INFO][4438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 21:59:44.488166 containerd[1576]: 2025-09-09 21:59:43.960 [INFO][4438] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 21:59:44.488166 containerd[1576]: 2025-09-09 21:59:44.041 [INFO][4438] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" host="localhost" Sep 9 21:59:44.488166 containerd[1576]: 2025-09-09 21:59:44.096 [INFO][4438] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 21:59:44.488166 containerd[1576]: 2025-09-09 21:59:44.164 [INFO][4438] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 21:59:44.488166 containerd[1576]: 2025-09-09 21:59:44.201 [INFO][4438] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 21:59:44.488166 containerd[1576]: 2025-09-09 21:59:44.218 [INFO][4438] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 21:59:44.488166 containerd[1576]: 2025-09-09 21:59:44.218 [INFO][4438] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" host="localhost" Sep 9 21:59:44.493102 containerd[1576]: 2025-09-09 21:59:44.255 [INFO][4438] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6 Sep 9 21:59:44.493102 containerd[1576]: 2025-09-09 21:59:44.270 [INFO][4438] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" host="localhost" Sep 9 21:59:44.493102 containerd[1576]: 2025-09-09 21:59:44.306 [INFO][4438] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" host="localhost" Sep 9 21:59:44.493102 containerd[1576]: 2025-09-09 21:59:44.306 [INFO][4438] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" host="localhost" Sep 9 21:59:44.493102 containerd[1576]: 2025-09-09 21:59:44.306 [INFO][4438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 21:59:44.493102 containerd[1576]: 2025-09-09 21:59:44.306 [INFO][4438] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" HandleID="k8s-pod-network.9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" Workload="localhost-k8s-calico--kube--controllers--7c66bfd5cf--zxqkv-eth0" Sep 9 21:59:44.493362 containerd[1576]: 2025-09-09 21:59:44.320 [INFO][4419] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" Namespace="calico-system" Pod="calico-kube-controllers-7c66bfd5cf-zxqkv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c66bfd5cf--zxqkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c66bfd5cf--zxqkv-eth0", GenerateName:"calico-kube-controllers-7c66bfd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8a9661e9-5b18-43f4-b60f-9cb3c54276be", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c66bfd5cf", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7c66bfd5cf-zxqkv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie3565b33384", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:59:44.493567 containerd[1576]: 2025-09-09 21:59:44.320 [INFO][4419] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" Namespace="calico-system" Pod="calico-kube-controllers-7c66bfd5cf-zxqkv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c66bfd5cf--zxqkv-eth0" Sep 9 21:59:44.493567 containerd[1576]: 2025-09-09 21:59:44.321 [INFO][4419] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3565b33384 ContainerID="9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" Namespace="calico-system" Pod="calico-kube-controllers-7c66bfd5cf-zxqkv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c66bfd5cf--zxqkv-eth0" Sep 9 21:59:44.493567 containerd[1576]: 2025-09-09 21:59:44.356 [INFO][4419] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" Namespace="calico-system" Pod="calico-kube-controllers-7c66bfd5cf-zxqkv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c66bfd5cf--zxqkv-eth0" Sep 9 21:59:44.493725 containerd[1576]: 2025-09-09 
21:59:44.369 [INFO][4419] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" Namespace="calico-system" Pod="calico-kube-controllers-7c66bfd5cf-zxqkv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c66bfd5cf--zxqkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c66bfd5cf--zxqkv-eth0", GenerateName:"calico-kube-controllers-7c66bfd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8a9661e9-5b18-43f4-b60f-9cb3c54276be", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c66bfd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6", Pod:"calico-kube-controllers-7c66bfd5cf-zxqkv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie3565b33384", MAC:"a2:35:8f:f0:36:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:59:44.494889 containerd[1576]: 2025-09-09 
21:59:44.457 [INFO][4419] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" Namespace="calico-system" Pod="calico-kube-controllers-7c66bfd5cf-zxqkv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c66bfd5cf--zxqkv-eth0" Sep 9 21:59:44.654284 systemd-networkd[1478]: calib9f65ba19bb: Link UP Sep 9 21:59:44.654716 systemd-networkd[1478]: calib9f65ba19bb: Gained carrier Sep 9 21:59:44.782278 containerd[1576]: 2025-09-09 21:59:43.320 [INFO][4408] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 21:59:44.782278 containerd[1576]: 2025-09-09 21:59:43.441 [INFO][4408] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--85fb8bd9d8--2rqmt-eth0 calico-apiserver-85fb8bd9d8- calico-apiserver 57318330-371c-4f00-bb0e-4ef73267101a 984 0 2025-09-09 21:58:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85fb8bd9d8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-85fb8bd9d8-2rqmt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib9f65ba19bb [] [] }} ContainerID="33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" Namespace="calico-apiserver" Pod="calico-apiserver-85fb8bd9d8-2rqmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--85fb8bd9d8--2rqmt-" Sep 9 21:59:44.782278 containerd[1576]: 2025-09-09 21:59:43.442 [INFO][4408] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" Namespace="calico-apiserver" Pod="calico-apiserver-85fb8bd9d8-2rqmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--85fb8bd9d8--2rqmt-eth0" Sep 9 21:59:44.782278 
containerd[1576]: 2025-09-09 21:59:43.652 [INFO][4447] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" HandleID="k8s-pod-network.33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" Workload="localhost-k8s-calico--apiserver--85fb8bd9d8--2rqmt-eth0" Sep 9 21:59:44.783424 containerd[1576]: 2025-09-09 21:59:43.653 [INFO][4447] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" HandleID="k8s-pod-network.33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" Workload="localhost-k8s-calico--apiserver--85fb8bd9d8--2rqmt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000119ce0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-85fb8bd9d8-2rqmt", "timestamp":"2025-09-09 21:59:43.652933649 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 21:59:44.783424 containerd[1576]: 2025-09-09 21:59:43.653 [INFO][4447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 21:59:44.783424 containerd[1576]: 2025-09-09 21:59:44.319 [INFO][4447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 21:59:44.783424 containerd[1576]: 2025-09-09 21:59:44.319 [INFO][4447] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 21:59:44.783424 containerd[1576]: 2025-09-09 21:59:44.356 [INFO][4447] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" host="localhost" Sep 9 21:59:44.783424 containerd[1576]: 2025-09-09 21:59:44.428 [INFO][4447] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 21:59:44.783424 containerd[1576]: 2025-09-09 21:59:44.473 [INFO][4447] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 21:59:44.783424 containerd[1576]: 2025-09-09 21:59:44.503 [INFO][4447] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 21:59:44.783424 containerd[1576]: 2025-09-09 21:59:44.530 [INFO][4447] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 21:59:44.783424 containerd[1576]: 2025-09-09 21:59:44.534 [INFO][4447] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" host="localhost" Sep 9 21:59:44.788817 containerd[1576]: 2025-09-09 21:59:44.553 [INFO][4447] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e Sep 9 21:59:44.788817 containerd[1576]: 2025-09-09 21:59:44.578 [INFO][4447] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" host="localhost" Sep 9 21:59:44.788817 containerd[1576]: 2025-09-09 21:59:44.594 [INFO][4447] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" host="localhost" Sep 9 21:59:44.788817 containerd[1576]: 2025-09-09 21:59:44.595 [INFO][4447] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" host="localhost" Sep 9 21:59:44.788817 containerd[1576]: 2025-09-09 21:59:44.596 [INFO][4447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 21:59:44.788817 containerd[1576]: 2025-09-09 21:59:44.596 [INFO][4447] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" HandleID="k8s-pod-network.33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" Workload="localhost-k8s-calico--apiserver--85fb8bd9d8--2rqmt-eth0" Sep 9 21:59:44.789047 containerd[1576]: 2025-09-09 21:59:44.627 [INFO][4408] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" Namespace="calico-apiserver" Pod="calico-apiserver-85fb8bd9d8-2rqmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--85fb8bd9d8--2rqmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85fb8bd9d8--2rqmt-eth0", GenerateName:"calico-apiserver-85fb8bd9d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"57318330-371c-4f00-bb0e-4ef73267101a", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85fb8bd9d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-85fb8bd9d8-2rqmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9f65ba19bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:59:44.789167 containerd[1576]: 2025-09-09 21:59:44.627 [INFO][4408] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" Namespace="calico-apiserver" Pod="calico-apiserver-85fb8bd9d8-2rqmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--85fb8bd9d8--2rqmt-eth0" Sep 9 21:59:44.789167 containerd[1576]: 2025-09-09 21:59:44.627 [INFO][4408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9f65ba19bb ContainerID="33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" Namespace="calico-apiserver" Pod="calico-apiserver-85fb8bd9d8-2rqmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--85fb8bd9d8--2rqmt-eth0" Sep 9 21:59:44.789167 containerd[1576]: 2025-09-09 21:59:44.647 [INFO][4408] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" Namespace="calico-apiserver" Pod="calico-apiserver-85fb8bd9d8-2rqmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--85fb8bd9d8--2rqmt-eth0" Sep 9 21:59:44.789323 containerd[1576]: 2025-09-09 21:59:44.664 [INFO][4408] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" Namespace="calico-apiserver" Pod="calico-apiserver-85fb8bd9d8-2rqmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--85fb8bd9d8--2rqmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85fb8bd9d8--2rqmt-eth0", GenerateName:"calico-apiserver-85fb8bd9d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"57318330-371c-4f00-bb0e-4ef73267101a", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85fb8bd9d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e", Pod:"calico-apiserver-85fb8bd9d8-2rqmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9f65ba19bb", MAC:"2a:36:da:0b:16:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:59:44.789423 containerd[1576]: 2025-09-09 21:59:44.741 [INFO][4408] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" Namespace="calico-apiserver" Pod="calico-apiserver-85fb8bd9d8-2rqmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--85fb8bd9d8--2rqmt-eth0" Sep 9 21:59:44.875618 update_engine[1559]: I20250909 21:59:44.874643 1559 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 9 21:59:44.875618 update_engine[1559]: I20250909 21:59:44.874884 1559 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 9 21:59:44.875618 update_engine[1559]: I20250909 21:59:44.875531 1559 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 9 21:59:44.895021 update_engine[1559]: E20250909 21:59:44.894918 1559 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 9 21:59:44.895655 update_engine[1559]: I20250909 21:59:44.895323 1559 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 9 21:59:45.008292 containerd[1576]: time="2025-09-09T21:59:45.008216819Z" level=info msg="connecting to shim 33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e" address="unix:///run/containerd/s/2eb2d62bef9815b59de2dbbd6e267fd71a6c9c1204ac296a0821a27fe8d69be3" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:59:45.013636 containerd[1576]: time="2025-09-09T21:59:45.011467102Z" level=info msg="connecting to shim 9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6" address="unix:///run/containerd/s/22f3dddd7990f94af7dae8bef016a9221204033e1b7fcd49b63dde587d1b2dc5" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:59:45.013636 containerd[1576]: time="2025-09-09T21:59:45.012190715Z" level=info msg="connecting to shim 8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e" address="unix:///run/containerd/s/52666e45309e0e1bac47cb75b52cfabbdbbdb19c31b7f578853e5b8cce37d6c4" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:59:45.034602 systemd-networkd[1478]: cali09d9b1281d7: Link UP Sep 9 21:59:45.054216 
systemd-networkd[1478]: cali09d9b1281d7: Gained carrier Sep 9 21:59:45.232091 containerd[1576]: 2025-09-09 21:59:44.114 [INFO][4534] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 21:59:45.232091 containerd[1576]: 2025-09-09 21:59:44.209 [INFO][4534] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--764bbbbb79--jz7jk-eth0 whisker-764bbbbb79- calico-system 1aea1796-6a61-4c7c-ab8e-cb07f5bfff91 1114 0 2025-09-09 21:59:43 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:764bbbbb79 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-764bbbbb79-jz7jk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali09d9b1281d7 [] [] }} ContainerID="2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" Namespace="calico-system" Pod="whisker-764bbbbb79-jz7jk" WorkloadEndpoint="localhost-k8s-whisker--764bbbbb79--jz7jk-" Sep 9 21:59:45.232091 containerd[1576]: 2025-09-09 21:59:44.209 [INFO][4534] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" Namespace="calico-system" Pod="whisker-764bbbbb79-jz7jk" WorkloadEndpoint="localhost-k8s-whisker--764bbbbb79--jz7jk-eth0" Sep 9 21:59:45.232091 containerd[1576]: 2025-09-09 21:59:44.497 [INFO][4579] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" HandleID="k8s-pod-network.2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" Workload="localhost-k8s-whisker--764bbbbb79--jz7jk-eth0" Sep 9 21:59:45.244272 containerd[1576]: 2025-09-09 21:59:44.498 [INFO][4579] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" 
HandleID="k8s-pod-network.2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" Workload="localhost-k8s-whisker--764bbbbb79--jz7jk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e330), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-764bbbbb79-jz7jk", "timestamp":"2025-09-09 21:59:44.497916396 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 21:59:45.244272 containerd[1576]: 2025-09-09 21:59:44.498 [INFO][4579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 21:59:45.244272 containerd[1576]: 2025-09-09 21:59:44.596 [INFO][4579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 21:59:45.244272 containerd[1576]: 2025-09-09 21:59:44.597 [INFO][4579] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 21:59:45.244272 containerd[1576]: 2025-09-09 21:59:44.632 [INFO][4579] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" host="localhost" Sep 9 21:59:45.244272 containerd[1576]: 2025-09-09 21:59:44.682 [INFO][4579] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 21:59:45.244272 containerd[1576]: 2025-09-09 21:59:44.765 [INFO][4579] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 21:59:45.244272 containerd[1576]: 2025-09-09 21:59:44.814 [INFO][4579] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 21:59:45.244272 containerd[1576]: 2025-09-09 21:59:44.849 [INFO][4579] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 21:59:45.244272 containerd[1576]: 2025-09-09 21:59:44.849 [INFO][4579] 
ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" host="localhost" Sep 9 21:59:45.260667 containerd[1576]: 2025-09-09 21:59:44.859 [INFO][4579] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7 Sep 9 21:59:45.260667 containerd[1576]: 2025-09-09 21:59:44.896 [INFO][4579] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" host="localhost" Sep 9 21:59:45.260667 containerd[1576]: 2025-09-09 21:59:44.964 [INFO][4579] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" host="localhost" Sep 9 21:59:45.260667 containerd[1576]: 2025-09-09 21:59:44.976 [INFO][4579] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" host="localhost" Sep 9 21:59:45.260667 containerd[1576]: 2025-09-09 21:59:44.976 [INFO][4579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 21:59:45.260667 containerd[1576]: 2025-09-09 21:59:44.976 [INFO][4579] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" HandleID="k8s-pod-network.2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" Workload="localhost-k8s-whisker--764bbbbb79--jz7jk-eth0" Sep 9 21:59:45.260938 containerd[1576]: 2025-09-09 21:59:45.017 [INFO][4534] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" Namespace="calico-system" Pod="whisker-764bbbbb79-jz7jk" WorkloadEndpoint="localhost-k8s-whisker--764bbbbb79--jz7jk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--764bbbbb79--jz7jk-eth0", GenerateName:"whisker-764bbbbb79-", Namespace:"calico-system", SelfLink:"", UID:"1aea1796-6a61-4c7c-ab8e-cb07f5bfff91", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 59, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"764bbbbb79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-764bbbbb79-jz7jk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali09d9b1281d7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:59:45.260938 containerd[1576]: 2025-09-09 21:59:45.018 [INFO][4534] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" Namespace="calico-system" Pod="whisker-764bbbbb79-jz7jk" WorkloadEndpoint="localhost-k8s-whisker--764bbbbb79--jz7jk-eth0" Sep 9 21:59:45.261097 containerd[1576]: 2025-09-09 21:59:45.018 [INFO][4534] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09d9b1281d7 ContainerID="2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" Namespace="calico-system" Pod="whisker-764bbbbb79-jz7jk" WorkloadEndpoint="localhost-k8s-whisker--764bbbbb79--jz7jk-eth0" Sep 9 21:59:45.261097 containerd[1576]: 2025-09-09 21:59:45.057 [INFO][4534] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" Namespace="calico-system" Pod="whisker-764bbbbb79-jz7jk" WorkloadEndpoint="localhost-k8s-whisker--764bbbbb79--jz7jk-eth0" Sep 9 21:59:45.261174 containerd[1576]: 2025-09-09 21:59:45.104 [INFO][4534] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" Namespace="calico-system" Pod="whisker-764bbbbb79-jz7jk" WorkloadEndpoint="localhost-k8s-whisker--764bbbbb79--jz7jk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--764bbbbb79--jz7jk-eth0", GenerateName:"whisker-764bbbbb79-", Namespace:"calico-system", SelfLink:"", UID:"1aea1796-6a61-4c7c-ab8e-cb07f5bfff91", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 59, 43, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"764bbbbb79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7", Pod:"whisker-764bbbbb79-jz7jk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali09d9b1281d7", MAC:"56:94:2f:a9:69:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:59:45.261267 containerd[1576]: 2025-09-09 21:59:45.197 [INFO][4534] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" Namespace="calico-system" Pod="whisker-764bbbbb79-jz7jk" WorkloadEndpoint="localhost-k8s-whisker--764bbbbb79--jz7jk-eth0" Sep 9 21:59:45.346291 systemd[1]: Started cri-containerd-33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e.scope - libcontainer container 33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e. Sep 9 21:59:45.355920 systemd-networkd[1478]: caliacb11718bdd: Gained IPv6LL Sep 9 21:59:45.364675 systemd[1]: Started cri-containerd-9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6.scope - libcontainer container 9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6. 
Sep 9 21:59:45.386814 systemd[1]: Started cri-containerd-8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e.scope - libcontainer container 8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e. Sep 9 21:59:45.430019 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:59:45.451981 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:59:45.464093 containerd[1576]: time="2025-09-09T21:59:45.464019880Z" level=info msg="connecting to shim 2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7" address="unix:///run/containerd/s/9592dca3360f9cc40f4441e26672a617755bea74d6afc89ccfc6831e461e4ec3" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:59:45.476923 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:59:45.581047 systemd[1]: Started cri-containerd-2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7.scope - libcontainer container 2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7. 
Sep 9 21:59:45.611261 containerd[1576]: time="2025-09-09T21:59:45.611157190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nzhdk,Uid:a3597f1e-72d0-40eb-b831-78b87602d9ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e\"" Sep 9 21:59:45.619341 containerd[1576]: time="2025-09-09T21:59:45.619174092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 21:59:45.674394 containerd[1576]: time="2025-09-09T21:59:45.673301591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c66bfd5cf-zxqkv,Uid:8a9661e9-5b18-43f4-b60f-9cb3c54276be,Namespace:calico-system,Attempt:0,} returns sandbox id \"9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6\"" Sep 9 21:59:45.683436 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:59:45.695356 containerd[1576]: time="2025-09-09T21:59:45.695130129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb8bd9d8-2rqmt,Uid:57318330-371c-4f00-bb0e-4ef73267101a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e\"" Sep 9 21:59:45.753998 containerd[1576]: time="2025-09-09T21:59:45.753414408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-cbmvm,Uid:c9730904-d479-4f03-8945-1eae9818173f,Namespace:calico-system,Attempt:0,}" Sep 9 21:59:45.755018 containerd[1576]: time="2025-09-09T21:59:45.754959258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb8bd9d8-gxhlf,Uid:1edad75d-6a90-4301-b98f-fe1e20ca7dba,Namespace:calico-apiserver,Attempt:0,}" Sep 9 21:59:45.926089 containerd[1576]: time="2025-09-09T21:59:45.918229809Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-764bbbbb79-jz7jk,Uid:1aea1796-6a61-4c7c-ab8e-cb07f5bfff91,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7\"" Sep 9 21:59:46.156528 systemd-networkd[1478]: vxlan.calico: Link UP Sep 9 21:59:46.156929 systemd-networkd[1478]: vxlan.calico: Gained carrier Sep 9 21:59:46.312762 systemd-networkd[1478]: calie3565b33384: Gained IPv6LL Sep 9 21:59:46.315447 systemd-networkd[1478]: calib9f65ba19bb: Gained IPv6LL Sep 9 21:59:46.643984 systemd-networkd[1478]: cali09a13c3c146: Link UP Sep 9 21:59:46.644394 systemd-networkd[1478]: cali09a13c3c146: Gained carrier Sep 9 21:59:46.695692 systemd-networkd[1478]: cali09d9b1281d7: Gained IPv6LL Sep 9 21:59:46.708317 containerd[1576]: 2025-09-09 21:59:46.147 [INFO][4848] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--85fb8bd9d8--gxhlf-eth0 calico-apiserver-85fb8bd9d8- calico-apiserver 1edad75d-6a90-4301-b98f-fe1e20ca7dba 982 0 2025-09-09 21:58:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85fb8bd9d8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-85fb8bd9d8-gxhlf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali09a13c3c146 [] [] }} ContainerID="1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" Namespace="calico-apiserver" Pod="calico-apiserver-85fb8bd9d8-gxhlf" WorkloadEndpoint="localhost-k8s-calico--apiserver--85fb8bd9d8--gxhlf-" Sep 9 21:59:46.708317 containerd[1576]: 2025-09-09 21:59:46.151 [INFO][4848] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" Namespace="calico-apiserver" 
Pod="calico-apiserver-85fb8bd9d8-gxhlf" WorkloadEndpoint="localhost-k8s-calico--apiserver--85fb8bd9d8--gxhlf-eth0" Sep 9 21:59:46.708317 containerd[1576]: 2025-09-09 21:59:46.349 [INFO][4873] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" HandleID="k8s-pod-network.1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" Workload="localhost-k8s-calico--apiserver--85fb8bd9d8--gxhlf-eth0" Sep 9 21:59:46.708627 containerd[1576]: 2025-09-09 21:59:46.360 [INFO][4873] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" HandleID="k8s-pod-network.1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" Workload="localhost-k8s-calico--apiserver--85fb8bd9d8--gxhlf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-85fb8bd9d8-gxhlf", "timestamp":"2025-09-09 21:59:46.348807484 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 21:59:46.708627 containerd[1576]: 2025-09-09 21:59:46.370 [INFO][4873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 21:59:46.708627 containerd[1576]: 2025-09-09 21:59:46.378 [INFO][4873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 21:59:46.708627 containerd[1576]: 2025-09-09 21:59:46.379 [INFO][4873] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 21:59:46.708627 containerd[1576]: 2025-09-09 21:59:46.441 [INFO][4873] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" host="localhost" Sep 9 21:59:46.708627 containerd[1576]: 2025-09-09 21:59:46.482 [INFO][4873] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 21:59:46.708627 containerd[1576]: 2025-09-09 21:59:46.524 [INFO][4873] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 21:59:46.708627 containerd[1576]: 2025-09-09 21:59:46.538 [INFO][4873] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 21:59:46.708627 containerd[1576]: 2025-09-09 21:59:46.544 [INFO][4873] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 21:59:46.708627 containerd[1576]: 2025-09-09 21:59:46.544 [INFO][4873] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" host="localhost" Sep 9 21:59:46.708937 containerd[1576]: 2025-09-09 21:59:46.550 [INFO][4873] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380 Sep 9 21:59:46.708937 containerd[1576]: 2025-09-09 21:59:46.585 [INFO][4873] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" host="localhost" Sep 9 21:59:46.708937 containerd[1576]: 2025-09-09 21:59:46.610 [INFO][4873] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" host="localhost" Sep 9 21:59:46.708937 containerd[1576]: 2025-09-09 21:59:46.610 [INFO][4873] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" host="localhost" Sep 9 21:59:46.708937 containerd[1576]: 2025-09-09 21:59:46.610 [INFO][4873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 21:59:46.708937 containerd[1576]: 2025-09-09 21:59:46.611 [INFO][4873] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" HandleID="k8s-pod-network.1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" Workload="localhost-k8s-calico--apiserver--85fb8bd9d8--gxhlf-eth0" Sep 9 21:59:46.709100 containerd[1576]: 2025-09-09 21:59:46.628 [INFO][4848] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" Namespace="calico-apiserver" Pod="calico-apiserver-85fb8bd9d8-gxhlf" WorkloadEndpoint="localhost-k8s-calico--apiserver--85fb8bd9d8--gxhlf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85fb8bd9d8--gxhlf-eth0", GenerateName:"calico-apiserver-85fb8bd9d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1edad75d-6a90-4301-b98f-fe1e20ca7dba", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85fb8bd9d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-85fb8bd9d8-gxhlf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali09a13c3c146", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:59:46.709165 containerd[1576]: 2025-09-09 21:59:46.628 [INFO][4848] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" Namespace="calico-apiserver" Pod="calico-apiserver-85fb8bd9d8-gxhlf" WorkloadEndpoint="localhost-k8s-calico--apiserver--85fb8bd9d8--gxhlf-eth0" Sep 9 21:59:46.709165 containerd[1576]: 2025-09-09 21:59:46.628 [INFO][4848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09a13c3c146 ContainerID="1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" Namespace="calico-apiserver" Pod="calico-apiserver-85fb8bd9d8-gxhlf" WorkloadEndpoint="localhost-k8s-calico--apiserver--85fb8bd9d8--gxhlf-eth0" Sep 9 21:59:46.709165 containerd[1576]: 2025-09-09 21:59:46.648 [INFO][4848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" Namespace="calico-apiserver" Pod="calico-apiserver-85fb8bd9d8-gxhlf" WorkloadEndpoint="localhost-k8s-calico--apiserver--85fb8bd9d8--gxhlf-eth0" Sep 9 21:59:46.709246 containerd[1576]: 2025-09-09 21:59:46.652 [INFO][4848] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" Namespace="calico-apiserver" Pod="calico-apiserver-85fb8bd9d8-gxhlf" WorkloadEndpoint="localhost-k8s-calico--apiserver--85fb8bd9d8--gxhlf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85fb8bd9d8--gxhlf-eth0", GenerateName:"calico-apiserver-85fb8bd9d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1edad75d-6a90-4301-b98f-fe1e20ca7dba", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85fb8bd9d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380", Pod:"calico-apiserver-85fb8bd9d8-gxhlf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali09a13c3c146", MAC:"36:10:6c:45:8c:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:59:46.709310 containerd[1576]: 2025-09-09 21:59:46.692 [INFO][4848] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" Namespace="calico-apiserver" Pod="calico-apiserver-85fb8bd9d8-gxhlf" WorkloadEndpoint="localhost-k8s-calico--apiserver--85fb8bd9d8--gxhlf-eth0" Sep 9 21:59:46.968221 systemd-networkd[1478]: cali0e51881b3c2: Link UP Sep 9 21:59:46.973020 systemd-networkd[1478]: cali0e51881b3c2: Gained carrier Sep 9 21:59:47.018263 containerd[1576]: 2025-09-09 21:59:46.227 [INFO][4838] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--cbmvm-eth0 goldmane-54d579b49d- calico-system c9730904-d479-4f03-8945-1eae9818173f 990 0 2025-09-09 21:58:44 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-cbmvm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0e51881b3c2 [] [] }} ContainerID="2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" Namespace="calico-system" Pod="goldmane-54d579b49d-cbmvm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--cbmvm-" Sep 9 21:59:47.018263 containerd[1576]: 2025-09-09 21:59:46.228 [INFO][4838] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" Namespace="calico-system" Pod="goldmane-54d579b49d-cbmvm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--cbmvm-eth0" Sep 9 21:59:47.018263 containerd[1576]: 2025-09-09 21:59:46.433 [INFO][4892] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" HandleID="k8s-pod-network.2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" Workload="localhost-k8s-goldmane--54d579b49d--cbmvm-eth0" Sep 9 21:59:47.019495 
containerd[1576]: 2025-09-09 21:59:46.434 [INFO][4892] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" HandleID="k8s-pod-network.2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" Workload="localhost-k8s-goldmane--54d579b49d--cbmvm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000b7640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-cbmvm", "timestamp":"2025-09-09 21:59:46.433940024 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 21:59:47.019495 containerd[1576]: 2025-09-09 21:59:46.434 [INFO][4892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 21:59:47.019495 containerd[1576]: 2025-09-09 21:59:46.611 [INFO][4892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 21:59:47.019495 containerd[1576]: 2025-09-09 21:59:46.611 [INFO][4892] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 21:59:47.019495 containerd[1576]: 2025-09-09 21:59:46.654 [INFO][4892] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" host="localhost" Sep 9 21:59:47.019495 containerd[1576]: 2025-09-09 21:59:46.701 [INFO][4892] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 21:59:47.019495 containerd[1576]: 2025-09-09 21:59:46.730 [INFO][4892] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 21:59:47.019495 containerd[1576]: 2025-09-09 21:59:46.765 [INFO][4892] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 21:59:47.019495 containerd[1576]: 2025-09-09 21:59:46.791 [INFO][4892] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 21:59:47.019495 containerd[1576]: 2025-09-09 21:59:46.791 [INFO][4892] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" host="localhost" Sep 9 21:59:47.019816 containerd[1576]: 2025-09-09 21:59:46.836 [INFO][4892] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf Sep 9 21:59:47.019816 containerd[1576]: 2025-09-09 21:59:46.875 [INFO][4892] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" host="localhost" Sep 9 21:59:47.019816 containerd[1576]: 2025-09-09 21:59:46.918 [INFO][4892] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" host="localhost" Sep 9 21:59:47.019816 containerd[1576]: 2025-09-09 21:59:46.919 [INFO][4892] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" host="localhost" Sep 9 21:59:47.019816 containerd[1576]: 2025-09-09 21:59:46.919 [INFO][4892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 21:59:47.019816 containerd[1576]: 2025-09-09 21:59:46.919 [INFO][4892] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" HandleID="k8s-pod-network.2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" Workload="localhost-k8s-goldmane--54d579b49d--cbmvm-eth0" Sep 9 21:59:47.019992 containerd[1576]: 2025-09-09 21:59:46.927 [INFO][4838] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" Namespace="calico-system" Pod="goldmane-54d579b49d-cbmvm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--cbmvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--cbmvm-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"c9730904-d479-4f03-8945-1eae9818173f", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 58, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-cbmvm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0e51881b3c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:59:47.019992 containerd[1576]: 2025-09-09 21:59:46.927 [INFO][4838] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" Namespace="calico-system" Pod="goldmane-54d579b49d-cbmvm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--cbmvm-eth0" Sep 9 21:59:47.020099 containerd[1576]: 2025-09-09 21:59:46.928 [INFO][4838] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e51881b3c2 ContainerID="2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" Namespace="calico-system" Pod="goldmane-54d579b49d-cbmvm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--cbmvm-eth0" Sep 9 21:59:47.020099 containerd[1576]: 2025-09-09 21:59:46.975 [INFO][4838] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" Namespace="calico-system" Pod="goldmane-54d579b49d-cbmvm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--cbmvm-eth0" Sep 9 21:59:47.020159 containerd[1576]: 2025-09-09 21:59:46.978 [INFO][4838] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" Namespace="calico-system" Pod="goldmane-54d579b49d-cbmvm" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--cbmvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--cbmvm-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"c9730904-d479-4f03-8945-1eae9818173f", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 58, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf", Pod:"goldmane-54d579b49d-cbmvm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0e51881b3c2", MAC:"5a:c9:82:da:e2:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:59:47.020234 containerd[1576]: 2025-09-09 21:59:47.009 [INFO][4838] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" Namespace="calico-system" Pod="goldmane-54d579b49d-cbmvm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--cbmvm-eth0" Sep 9 21:59:47.025827 containerd[1576]: time="2025-09-09T21:59:47.025110342Z" level=info msg="connecting to shim 
1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380" address="unix:///run/containerd/s/020c3bf15c612d1f1d9ac46a11e2a0dde612fc6124b26bbb252db78250bd0136" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:59:47.159729 systemd[1]: Started cri-containerd-1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380.scope - libcontainer container 1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380. Sep 9 21:59:47.246136 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:59:47.247998 containerd[1576]: time="2025-09-09T21:59:47.246512843Z" level=info msg="connecting to shim 2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf" address="unix:///run/containerd/s/ed55d293ab658320c88111b05d0edfe9d112531a72fe98a034f978922addf7cc" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:59:47.378186 systemd[1]: Started cri-containerd-2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf.scope - libcontainer container 2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf. 
Sep 9 21:59:47.481645 containerd[1576]: time="2025-09-09T21:59:47.472654110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb8bd9d8-gxhlf,Uid:1edad75d-6a90-4301-b98f-fe1e20ca7dba,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380\"" Sep 9 21:59:47.508667 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:59:47.656126 systemd-networkd[1478]: vxlan.calico: Gained IPv6LL Sep 9 21:59:47.913782 systemd-networkd[1478]: cali09a13c3c146: Gained IPv6LL Sep 9 21:59:47.926566 containerd[1576]: time="2025-09-09T21:59:47.926423424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-cbmvm,Uid:c9730904-d479-4f03-8945-1eae9818173f,Namespace:calico-system,Attempt:0,} returns sandbox id \"2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf\"" Sep 9 21:59:48.168931 systemd-networkd[1478]: cali0e51881b3c2: Gained IPv6LL Sep 9 21:59:49.191995 containerd[1576]: time="2025-09-09T21:59:49.191024714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:59:49.198394 containerd[1576]: time="2025-09-09T21:59:49.198323240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 21:59:49.200603 containerd[1576]: time="2025-09-09T21:59:49.200507164Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:59:49.219259 containerd[1576]: time="2025-09-09T21:59:49.219142764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:59:49.222320 
containerd[1576]: time="2025-09-09T21:59:49.222148055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 3.602804433s" Sep 9 21:59:49.222320 containerd[1576]: time="2025-09-09T21:59:49.222201185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 21:59:49.237790 containerd[1576]: time="2025-09-09T21:59:49.237687363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 21:59:49.258592 containerd[1576]: time="2025-09-09T21:59:49.258524820Z" level=info msg="CreateContainer within sandbox \"8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 21:59:49.315427 containerd[1576]: time="2025-09-09T21:59:49.314414077Z" level=info msg="Container ccf59fd38f712a2b30fedccfd79dd89d0d1bdbf52ce31b8596b3ffa85983621b: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:59:49.410563 containerd[1576]: time="2025-09-09T21:59:49.409535568Z" level=info msg="CreateContainer within sandbox \"8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ccf59fd38f712a2b30fedccfd79dd89d0d1bdbf52ce31b8596b3ffa85983621b\"" Sep 9 21:59:49.418543 containerd[1576]: time="2025-09-09T21:59:49.415421774Z" level=info msg="StartContainer for \"ccf59fd38f712a2b30fedccfd79dd89d0d1bdbf52ce31b8596b3ffa85983621b\"" Sep 9 21:59:49.418543 containerd[1576]: time="2025-09-09T21:59:49.418258437Z" level=info msg="connecting to shim ccf59fd38f712a2b30fedccfd79dd89d0d1bdbf52ce31b8596b3ffa85983621b" 
address="unix:///run/containerd/s/52666e45309e0e1bac47cb75b52cfabbdbbdb19c31b7f578853e5b8cce37d6c4" protocol=ttrpc version=3 Sep 9 21:59:49.527155 systemd[1]: Started cri-containerd-ccf59fd38f712a2b30fedccfd79dd89d0d1bdbf52ce31b8596b3ffa85983621b.scope - libcontainer container ccf59fd38f712a2b30fedccfd79dd89d0d1bdbf52ce31b8596b3ffa85983621b. Sep 9 21:59:49.722023 containerd[1576]: time="2025-09-09T21:59:49.721549534Z" level=info msg="StartContainer for \"ccf59fd38f712a2b30fedccfd79dd89d0d1bdbf52ce31b8596b3ffa85983621b\" returns successfully" Sep 9 21:59:52.727809 kubelet[2840]: E0909 21:59:52.726021 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:52.734421 containerd[1576]: time="2025-09-09T21:59:52.734338230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vg22c,Uid:a09151ad-a9b3-49e6-974f-bb1c62675974,Namespace:kube-system,Attempt:0,}" Sep 9 21:59:53.193076 systemd[1]: Started sshd@9-10.0.0.35:22-10.0.0.1:41326.service - OpenSSH per-connection server daemon (10.0.0.1:41326). Sep 9 21:59:53.562517 sshd[5125]: Accepted publickey for core from 10.0.0.1 port 41326 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:59:53.616672 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:59:53.656749 systemd-logind[1557]: New session 10 of user core. Sep 9 21:59:53.683831 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 9 21:59:53.728373 kubelet[2840]: E0909 21:59:53.728109 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:53.729394 containerd[1576]: time="2025-09-09T21:59:53.729345148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bjgk4,Uid:e8a89593-6c20-439c-928d-0e86b0f1dbe1,Namespace:kube-system,Attempt:0,}" Sep 9 21:59:54.162283 systemd-networkd[1478]: calif816e2d571d: Link UP Sep 9 21:59:54.183070 systemd-networkd[1478]: calif816e2d571d: Gained carrier Sep 9 21:59:54.296251 containerd[1576]: 2025-09-09 21:59:52.982 [INFO][5099] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--vg22c-eth0 coredns-674b8bbfcf- kube-system a09151ad-a9b3-49e6-974f-bb1c62675974 979 0 2025-09-09 21:58:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-vg22c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif816e2d571d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-vg22c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vg22c-" Sep 9 21:59:54.296251 containerd[1576]: 2025-09-09 21:59:52.983 [INFO][5099] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-vg22c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vg22c-eth0" Sep 9 21:59:54.296251 containerd[1576]: 2025-09-09 21:59:53.112 [INFO][5116] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" HandleID="k8s-pod-network.b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" Workload="localhost-k8s-coredns--674b8bbfcf--vg22c-eth0" Sep 9 21:59:54.297138 containerd[1576]: 2025-09-09 21:59:53.112 [INFO][5116] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" HandleID="k8s-pod-network.b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" Workload="localhost-k8s-coredns--674b8bbfcf--vg22c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a7620), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-vg22c", "timestamp":"2025-09-09 21:59:53.112056856 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 21:59:54.297138 containerd[1576]: 2025-09-09 21:59:53.112 [INFO][5116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 21:59:54.297138 containerd[1576]: 2025-09-09 21:59:53.112 [INFO][5116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 21:59:54.297138 containerd[1576]: 2025-09-09 21:59:53.112 [INFO][5116] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 21:59:54.297138 containerd[1576]: 2025-09-09 21:59:53.316 [INFO][5116] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" host="localhost" Sep 9 21:59:54.297138 containerd[1576]: 2025-09-09 21:59:53.520 [INFO][5116] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 21:59:54.297138 containerd[1576]: 2025-09-09 21:59:53.846 [INFO][5116] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 21:59:54.297138 containerd[1576]: 2025-09-09 21:59:53.870 [INFO][5116] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 21:59:54.297138 containerd[1576]: 2025-09-09 21:59:53.898 [INFO][5116] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 21:59:54.297138 containerd[1576]: 2025-09-09 21:59:53.898 [INFO][5116] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" host="localhost" Sep 9 21:59:54.297478 containerd[1576]: 2025-09-09 21:59:53.909 [INFO][5116] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6 Sep 9 21:59:54.297478 containerd[1576]: 2025-09-09 21:59:53.950 [INFO][5116] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" host="localhost" Sep 9 21:59:54.297478 containerd[1576]: 2025-09-09 21:59:54.067 [INFO][5116] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" host="localhost" Sep 9 21:59:54.297478 containerd[1576]: 2025-09-09 21:59:54.075 [INFO][5116] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" host="localhost" Sep 9 21:59:54.297478 containerd[1576]: 2025-09-09 21:59:54.076 [INFO][5116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 21:59:54.297478 containerd[1576]: 2025-09-09 21:59:54.076 [INFO][5116] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" HandleID="k8s-pod-network.b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" Workload="localhost-k8s-coredns--674b8bbfcf--vg22c-eth0" Sep 9 21:59:54.297653 containerd[1576]: 2025-09-09 21:59:54.115 [INFO][5099] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-vg22c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vg22c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--vg22c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a09151ad-a9b3-49e6-974f-bb1c62675974", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 58, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-vg22c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif816e2d571d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:59:54.297757 containerd[1576]: 2025-09-09 21:59:54.116 [INFO][5099] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-vg22c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vg22c-eth0" Sep 9 21:59:54.297757 containerd[1576]: 2025-09-09 21:59:54.116 [INFO][5099] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif816e2d571d ContainerID="b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-vg22c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vg22c-eth0" Sep 9 21:59:54.297757 containerd[1576]: 2025-09-09 21:59:54.175 [INFO][5099] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-vg22c" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vg22c-eth0" Sep 9 21:59:54.297900 containerd[1576]: 2025-09-09 21:59:54.183 [INFO][5099] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-vg22c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vg22c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--vg22c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a09151ad-a9b3-49e6-974f-bb1c62675974", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 58, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6", Pod:"coredns-674b8bbfcf-vg22c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif816e2d571d", MAC:"ea:7f:16:32:21:e6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:59:54.297900 containerd[1576]: 2025-09-09 21:59:54.273 [INFO][5099] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-vg22c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vg22c-eth0" Sep 9 21:59:54.348840 sshd[5128]: Connection closed by 10.0.0.1 port 41326 Sep 9 21:59:54.351047 sshd-session[5125]: pam_unix(sshd:session): session closed for user core Sep 9 21:59:54.360935 systemd[1]: sshd@9-10.0.0.35:22-10.0.0.1:41326.service: Deactivated successfully. Sep 9 21:59:54.363071 systemd-logind[1557]: Session 10 logged out. Waiting for processes to exit. Sep 9 21:59:54.370257 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 21:59:54.379828 systemd-logind[1557]: Removed session 10. Sep 9 21:59:54.456589 containerd[1576]: time="2025-09-09T21:59:54.455543906Z" level=info msg="connecting to shim b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6" address="unix:///run/containerd/s/cdd23a0f990f6841eca98318ecba4d537dbe94e36aa8583a44f8874beac86e10" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:59:54.546737 systemd-networkd[1478]: cali01926316471: Link UP Sep 9 21:59:54.547183 systemd[1]: Started cri-containerd-b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6.scope - libcontainer container b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6. 
Sep 9 21:59:54.548927 systemd-networkd[1478]: cali01926316471: Gained carrier Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.093 [INFO][5137] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--bjgk4-eth0 coredns-674b8bbfcf- kube-system e8a89593-6c20-439c-928d-0e86b0f1dbe1 981 0 2025-09-09 21:58:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-bjgk4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali01926316471 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" Namespace="kube-system" Pod="coredns-674b8bbfcf-bjgk4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bjgk4-" Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.099 [INFO][5137] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" Namespace="kube-system" Pod="coredns-674b8bbfcf-bjgk4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bjgk4-eth0" Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.260 [INFO][5154] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" HandleID="k8s-pod-network.572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" Workload="localhost-k8s-coredns--674b8bbfcf--bjgk4-eth0" Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.261 [INFO][5154] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" HandleID="k8s-pod-network.572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" 
Workload="localhost-k8s-coredns--674b8bbfcf--bjgk4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d61e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-bjgk4", "timestamp":"2025-09-09 21:59:54.259989914 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.262 [INFO][5154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.269 [INFO][5154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.269 [INFO][5154] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.311 [INFO][5154] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" host="localhost" Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.351 [INFO][5154] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.369 [INFO][5154] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.387 [INFO][5154] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.413 [INFO][5154] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.413 [INFO][5154] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" host="localhost" Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.444 [INFO][5154] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53 Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.469 [INFO][5154] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" host="localhost" Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.498 [INFO][5154] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" host="localhost" Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.498 [INFO][5154] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" host="localhost" Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.498 [INFO][5154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 21:59:54.720557 containerd[1576]: 2025-09-09 21:59:54.498 [INFO][5154] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" HandleID="k8s-pod-network.572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" Workload="localhost-k8s-coredns--674b8bbfcf--bjgk4-eth0" Sep 9 21:59:54.724879 containerd[1576]: 2025-09-09 21:59:54.518 [INFO][5137] cni-plugin/k8s.go 418: Populated endpoint ContainerID="572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" Namespace="kube-system" Pod="coredns-674b8bbfcf-bjgk4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bjgk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bjgk4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e8a89593-6c20-439c-928d-0e86b0f1dbe1", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 58, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-bjgk4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01926316471", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:59:54.724879 containerd[1576]: 2025-09-09 21:59:54.519 [INFO][5137] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" Namespace="kube-system" Pod="coredns-674b8bbfcf-bjgk4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bjgk4-eth0" Sep 9 21:59:54.724879 containerd[1576]: 2025-09-09 21:59:54.519 [INFO][5137] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01926316471 ContainerID="572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" Namespace="kube-system" Pod="coredns-674b8bbfcf-bjgk4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bjgk4-eth0" Sep 9 21:59:54.724879 containerd[1576]: 2025-09-09 21:59:54.537 [INFO][5137] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" Namespace="kube-system" Pod="coredns-674b8bbfcf-bjgk4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bjgk4-eth0" Sep 9 21:59:54.724879 containerd[1576]: 2025-09-09 21:59:54.544 [INFO][5137] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" Namespace="kube-system" Pod="coredns-674b8bbfcf-bjgk4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bjgk4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bjgk4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e8a89593-6c20-439c-928d-0e86b0f1dbe1", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 58, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53", Pod:"coredns-674b8bbfcf-bjgk4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01926316471", MAC:"42:f9:13:82:d9:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:59:54.724879 containerd[1576]: 2025-09-09 21:59:54.670 [INFO][5137] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" Namespace="kube-system" Pod="coredns-674b8bbfcf-bjgk4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bjgk4-eth0" Sep 9 21:59:54.748136 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:59:54.880639 update_engine[1559]: I20250909 21:59:54.869356 1559 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 9 21:59:54.880639 update_engine[1559]: I20250909 21:59:54.875480 1559 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 9 21:59:54.880639 update_engine[1559]: I20250909 21:59:54.876062 1559 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 9 21:59:54.887929 containerd[1576]: time="2025-09-09T21:59:54.880811456Z" level=info msg="connecting to shim 572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53" address="unix:///run/containerd/s/afbb8b63b57d57467b24e21bb6fabd30e6a69025f8730fa0e4c3ed6060c446e7" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:59:54.892845 update_engine[1559]: E20250909 21:59:54.892635 1559 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 9 21:59:54.896611 update_engine[1559]: I20250909 21:59:54.895941 1559 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 9 21:59:54.896611 update_engine[1559]: I20250909 21:59:54.895987 1559 omaha_request_action.cc:617] Omaha request response: Sep 9 21:59:54.896611 update_engine[1559]: E20250909 21:59:54.896106 1559 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 9 21:59:54.917893 update_engine[1559]: I20250909 21:59:54.917665 1559 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Sep 9 21:59:54.917893 update_engine[1559]: I20250909 21:59:54.917797 1559 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 9 21:59:54.917893 update_engine[1559]: I20250909 21:59:54.917815 1559 update_attempter.cc:306] Processing Done. Sep 9 21:59:54.917893 update_engine[1559]: E20250909 21:59:54.917842 1559 update_attempter.cc:619] Update failed. Sep 9 21:59:54.918910 update_engine[1559]: I20250909 21:59:54.917852 1559 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 9 21:59:54.918910 update_engine[1559]: I20250909 21:59:54.917986 1559 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 9 21:59:54.918910 update_engine[1559]: I20250909 21:59:54.917995 1559 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 9 21:59:54.918910 update_engine[1559]: I20250909 21:59:54.918351 1559 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 9 21:59:54.918910 update_engine[1559]: I20250909 21:59:54.918511 1559 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 9 21:59:54.918910 update_engine[1559]: I20250909 21:59:54.918525 1559 omaha_request_action.cc:272] Request: Sep 9 21:59:54.918910 update_engine[1559]: Sep 9 21:59:54.918910 update_engine[1559]: Sep 9 21:59:54.918910 update_engine[1559]: Sep 9 21:59:54.918910 update_engine[1559]: Sep 9 21:59:54.918910 update_engine[1559]: Sep 9 21:59:54.918910 update_engine[1559]: Sep 9 21:59:54.918910 update_engine[1559]: I20250909 21:59:54.918656 1559 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 9 21:59:54.918910 update_engine[1559]: I20250909 21:59:54.918797 1559 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 9 21:59:54.921659 update_engine[1559]: I20250909 21:59:54.921622 1559 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 9 21:59:54.930264 locksmithd[1601]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 9 21:59:54.936602 update_engine[1559]: E20250909 21:59:54.936534 1559 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 9 21:59:54.937363 update_engine[1559]: I20250909 21:59:54.936918 1559 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 9 21:59:54.937363 update_engine[1559]: I20250909 21:59:54.936941 1559 omaha_request_action.cc:617] Omaha request response: Sep 9 21:59:54.937363 update_engine[1559]: I20250909 21:59:54.936955 1559 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 9 21:59:54.937363 update_engine[1559]: I20250909 21:59:54.936962 1559 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 9 21:59:54.937363 update_engine[1559]: I20250909 21:59:54.936968 1559 update_attempter.cc:306] Processing Done. Sep 9 21:59:54.937363 update_engine[1559]: I20250909 21:59:54.936977 1559 update_attempter.cc:310] Error event sent. 
Sep 9 21:59:54.937363 update_engine[1559]: I20250909 21:59:54.936991 1559 update_check_scheduler.cc:74] Next update check in 42m7s Sep 9 21:59:54.940431 locksmithd[1601]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 9 21:59:55.017527 containerd[1576]: time="2025-09-09T21:59:55.016422138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vg22c,Uid:a09151ad-a9b3-49e6-974f-bb1c62675974,Namespace:kube-system,Attempt:0,} returns sandbox id \"b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6\"" Sep 9 21:59:55.026522 kubelet[2840]: E0909 21:59:55.023848 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:55.043557 systemd[1]: Started cri-containerd-572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53.scope - libcontainer container 572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53. 
Sep 9 21:59:55.126050 containerd[1576]: time="2025-09-09T21:59:55.125986294Z" level=info msg="CreateContainer within sandbox \"b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 21:59:55.138598 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:59:55.257490 containerd[1576]: time="2025-09-09T21:59:55.257404156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bjgk4,Uid:e8a89593-6c20-439c-928d-0e86b0f1dbe1,Namespace:kube-system,Attempt:0,} returns sandbox id \"572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53\"" Sep 9 21:59:55.259554 kubelet[2840]: E0909 21:59:55.259515 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:55.422757 containerd[1576]: time="2025-09-09T21:59:55.421127928Z" level=info msg="CreateContainer within sandbox \"572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 21:59:55.668451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount263484889.mount: Deactivated successfully. Sep 9 21:59:55.668606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380576386.mount: Deactivated successfully. 
Sep 9 21:59:55.669787 containerd[1576]: time="2025-09-09T21:59:55.669359558Z" level=info msg="Container 418c1d0b986b6f54ab49484093ea6f8409e6e0cab7aa0ae367b63aadc5cc13f7: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:59:55.671805 containerd[1576]: time="2025-09-09T21:59:55.671151082Z" level=info msg="Container 4ba20cc753ea27c05f96145fb24cc70328a7e2135a0e611374c2895b417f3821: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:59:55.804446 containerd[1576]: time="2025-09-09T21:59:55.804202982Z" level=info msg="CreateContainer within sandbox \"572516219139724f6025caa24a2dcb97b720fad0c6caa003ada2b197c7140c53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"418c1d0b986b6f54ab49484093ea6f8409e6e0cab7aa0ae367b63aadc5cc13f7\"" Sep 9 21:59:55.806809 containerd[1576]: time="2025-09-09T21:59:55.805933661Z" level=info msg="StartContainer for \"418c1d0b986b6f54ab49484093ea6f8409e6e0cab7aa0ae367b63aadc5cc13f7\"" Sep 9 21:59:55.809237 containerd[1576]: time="2025-09-09T21:59:55.809178551Z" level=info msg="connecting to shim 418c1d0b986b6f54ab49484093ea6f8409e6e0cab7aa0ae367b63aadc5cc13f7" address="unix:///run/containerd/s/afbb8b63b57d57467b24e21bb6fabd30e6a69025f8730fa0e4c3ed6060c446e7" protocol=ttrpc version=3 Sep 9 21:59:55.831152 containerd[1576]: time="2025-09-09T21:59:55.830801162Z" level=info msg="CreateContainer within sandbox \"b76c2b168d5b97d5e8ae47b4eff889097c0b69bc2901ee979055964edb9392d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ba20cc753ea27c05f96145fb24cc70328a7e2135a0e611374c2895b417f3821\"" Sep 9 21:59:55.832965 containerd[1576]: time="2025-09-09T21:59:55.832918630Z" level=info msg="StartContainer for \"4ba20cc753ea27c05f96145fb24cc70328a7e2135a0e611374c2895b417f3821\"" Sep 9 21:59:55.835221 containerd[1576]: time="2025-09-09T21:59:55.834720453Z" level=info msg="connecting to shim 4ba20cc753ea27c05f96145fb24cc70328a7e2135a0e611374c2895b417f3821" 
address="unix:///run/containerd/s/cdd23a0f990f6841eca98318ecba4d537dbe94e36aa8583a44f8874beac86e10" protocol=ttrpc version=3 Sep 9 21:59:55.863099 systemd[1]: Started cri-containerd-418c1d0b986b6f54ab49484093ea6f8409e6e0cab7aa0ae367b63aadc5cc13f7.scope - libcontainer container 418c1d0b986b6f54ab49484093ea6f8409e6e0cab7aa0ae367b63aadc5cc13f7. Sep 9 21:59:55.907083 systemd[1]: Started cri-containerd-4ba20cc753ea27c05f96145fb24cc70328a7e2135a0e611374c2895b417f3821.scope - libcontainer container 4ba20cc753ea27c05f96145fb24cc70328a7e2135a0e611374c2895b417f3821. Sep 9 21:59:56.040925 systemd-networkd[1478]: cali01926316471: Gained IPv6LL Sep 9 21:59:56.087588 containerd[1576]: time="2025-09-09T21:59:56.078075227Z" level=info msg="StartContainer for \"418c1d0b986b6f54ab49484093ea6f8409e6e0cab7aa0ae367b63aadc5cc13f7\" returns successfully" Sep 9 21:59:56.166876 systemd-networkd[1478]: calif816e2d571d: Gained IPv6LL Sep 9 21:59:56.175454 containerd[1576]: time="2025-09-09T21:59:56.175331531Z" level=info msg="StartContainer for \"4ba20cc753ea27c05f96145fb24cc70328a7e2135a0e611374c2895b417f3821\" returns successfully" Sep 9 21:59:56.882605 containerd[1576]: time="2025-09-09T21:59:56.881452452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:59:56.887615 containerd[1576]: time="2025-09-09T21:59:56.887541349Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 9 21:59:56.902823 containerd[1576]: time="2025-09-09T21:59:56.900883575Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:59:56.946455 containerd[1576]: time="2025-09-09T21:59:56.943653249Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:59:56.947783 containerd[1576]: time="2025-09-09T21:59:56.947683890Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 7.709914031s" Sep 9 21:59:56.947783 containerd[1576]: time="2025-09-09T21:59:56.947756335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 9 21:59:56.951834 containerd[1576]: time="2025-09-09T21:59:56.951793739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 21:59:57.005671 containerd[1576]: time="2025-09-09T21:59:57.005569450Z" level=info msg="CreateContainer within sandbox \"9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 21:59:57.086489 containerd[1576]: time="2025-09-09T21:59:57.085058932Z" level=info msg="Container fb4ab06418191389da070349ab4342a3975ccaf7a5dcf95ae1b7b5f0f5eaff1f: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:59:57.090057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2433647030.mount: Deactivated successfully. 
Sep 9 21:59:57.094687 kubelet[2840]: E0909 21:59:57.093535 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:57.111224 kubelet[2840]: E0909 21:59:57.110884 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:57.145561 containerd[1576]: time="2025-09-09T21:59:57.144780899Z" level=info msg="CreateContainer within sandbox \"9cc1f61c3786ce97fc4f3440c8fbe8e5ca8a8f666ffbd8fcc8c24cdb37268fd6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fb4ab06418191389da070349ab4342a3975ccaf7a5dcf95ae1b7b5f0f5eaff1f\"" Sep 9 21:59:57.150256 containerd[1576]: time="2025-09-09T21:59:57.150105806Z" level=info msg="StartContainer for \"fb4ab06418191389da070349ab4342a3975ccaf7a5dcf95ae1b7b5f0f5eaff1f\"" Sep 9 21:59:57.178312 kubelet[2840]: I0909 21:59:57.171156 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bjgk4" podStartSLOduration=105.171092188 podStartE2EDuration="1m45.171092188s" podCreationTimestamp="2025-09-09 21:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:59:57.170251264 +0000 UTC m=+107.798104358" watchObservedRunningTime="2025-09-09 21:59:57.171092188 +0000 UTC m=+107.798945282" Sep 9 21:59:57.178602 containerd[1576]: time="2025-09-09T21:59:57.153127256Z" level=info msg="connecting to shim fb4ab06418191389da070349ab4342a3975ccaf7a5dcf95ae1b7b5f0f5eaff1f" address="unix:///run/containerd/s/22f3dddd7990f94af7dae8bef016a9221204033e1b7fcd49b63dde587d1b2dc5" protocol=ttrpc version=3 Sep 9 21:59:57.270438 systemd[1]: Started 
cri-containerd-fb4ab06418191389da070349ab4342a3975ccaf7a5dcf95ae1b7b5f0f5eaff1f.scope - libcontainer container fb4ab06418191389da070349ab4342a3975ccaf7a5dcf95ae1b7b5f0f5eaff1f. Sep 9 21:59:57.370890 kubelet[2840]: I0909 21:59:57.370602 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vg22c" podStartSLOduration=105.37054964 podStartE2EDuration="1m45.37054964s" podCreationTimestamp="2025-09-09 21:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:59:57.368937244 +0000 UTC m=+107.996790338" watchObservedRunningTime="2025-09-09 21:59:57.37054964 +0000 UTC m=+107.998402734" Sep 9 21:59:57.558587 containerd[1576]: time="2025-09-09T21:59:57.558509411Z" level=info msg="StartContainer for \"fb4ab06418191389da070349ab4342a3975ccaf7a5dcf95ae1b7b5f0f5eaff1f\" returns successfully" Sep 9 21:59:58.132322 kubelet[2840]: E0909 21:59:58.128168 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:58.132322 kubelet[2840]: E0909 21:59:58.128303 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:58.185946 kubelet[2840]: I0909 21:59:58.185835 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c66bfd5cf-zxqkv" podStartSLOduration=60.915199715 podStartE2EDuration="1m12.185798156s" podCreationTimestamp="2025-09-09 21:58:46 +0000 UTC" firstStartedPulling="2025-09-09 21:59:45.678527344 +0000 UTC m=+96.306380438" lastFinishedPulling="2025-09-09 21:59:56.949125785 +0000 UTC m=+107.576978879" observedRunningTime="2025-09-09 21:59:58.177651003 +0000 UTC m=+108.805504097" 
watchObservedRunningTime="2025-09-09 21:59:58.185798156 +0000 UTC m=+108.813651250" Sep 9 21:59:58.301540 containerd[1576]: time="2025-09-09T21:59:58.301334907Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb4ab06418191389da070349ab4342a3975ccaf7a5dcf95ae1b7b5f0f5eaff1f\" id:\"128fb0eda4b67d893d56ed3cc66f0d45eb6ae3abaef0fb91d993a2aa8e7bacb2\" pid:5421 exited_at:{seconds:1757455198 nanos:300029549}" Sep 9 21:59:59.132508 kubelet[2840]: E0909 21:59:59.128632 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:59.132508 kubelet[2840]: E0909 21:59:59.129405 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:59:59.407673 systemd[1]: Started sshd@10-10.0.0.35:22-10.0.0.1:41340.service - OpenSSH per-connection server daemon (10.0.0.1:41340). Sep 9 21:59:59.735493 sshd[5437]: Accepted publickey for core from 10.0.0.1 port 41340 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 22:00:00.168232 kubelet[2840]: E0909 22:00:00.156148 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:00:00.226920 sshd-session[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:00:00.259127 systemd-logind[1557]: New session 11 of user core. Sep 9 22:00:00.275246 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 22:00:00.761701 sshd[5446]: Connection closed by 10.0.0.1 port 41340 Sep 9 22:00:00.762747 sshd-session[5437]: pam_unix(sshd:session): session closed for user core Sep 9 22:00:00.777551 systemd[1]: sshd@10-10.0.0.35:22-10.0.0.1:41340.service: Deactivated successfully. 
Sep 9 22:00:00.790642 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 22:00:00.792738 systemd-logind[1557]: Session 11 logged out. Waiting for processes to exit. Sep 9 22:00:00.796136 systemd-logind[1557]: Removed session 11. Sep 9 22:00:04.420201 containerd[1576]: time="2025-09-09T22:00:04.419028173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:00:04.448854 containerd[1576]: time="2025-09-09T22:00:04.448148524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 22:00:04.457222 containerd[1576]: time="2025-09-09T22:00:04.457098581Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:00:04.481314 containerd[1576]: time="2025-09-09T22:00:04.473424208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:00:04.481314 containerd[1576]: time="2025-09-09T22:00:04.474413355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 7.519589198s" Sep 9 22:00:04.481314 containerd[1576]: time="2025-09-09T22:00:04.474444647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 22:00:04.481314 containerd[1576]: time="2025-09-09T22:00:04.478393699Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 22:00:04.500453 containerd[1576]: time="2025-09-09T22:00:04.499316773Z" level=info msg="CreateContainer within sandbox \"33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 22:00:04.553035 containerd[1576]: time="2025-09-09T22:00:04.552960159Z" level=info msg="Container 63db31edaa241d9bb9bc43b45e84f588f017571c9e5f9dba90da145632f9f11f: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:00:04.588973 containerd[1576]: time="2025-09-09T22:00:04.588251267Z" level=info msg="CreateContainer within sandbox \"33002135697469f599327aaadbea61a9ae296ed7e68f82082e0e5efb9786a35e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"63db31edaa241d9bb9bc43b45e84f588f017571c9e5f9dba90da145632f9f11f\"" Sep 9 22:00:04.591295 containerd[1576]: time="2025-09-09T22:00:04.589940619Z" level=info msg="StartContainer for \"63db31edaa241d9bb9bc43b45e84f588f017571c9e5f9dba90da145632f9f11f\"" Sep 9 22:00:04.593940 containerd[1576]: time="2025-09-09T22:00:04.593899592Z" level=info msg="connecting to shim 63db31edaa241d9bb9bc43b45e84f588f017571c9e5f9dba90da145632f9f11f" address="unix:///run/containerd/s/2eb2d62bef9815b59de2dbbd6e267fd71a6c9c1204ac296a0821a27fe8d69be3" protocol=ttrpc version=3 Sep 9 22:00:04.652061 systemd[1]: Started cri-containerd-63db31edaa241d9bb9bc43b45e84f588f017571c9e5f9dba90da145632f9f11f.scope - libcontainer container 63db31edaa241d9bb9bc43b45e84f588f017571c9e5f9dba90da145632f9f11f. 
Sep 9 22:00:04.786196 containerd[1576]: time="2025-09-09T22:00:04.786108109Z" level=info msg="StartContainer for \"63db31edaa241d9bb9bc43b45e84f588f017571c9e5f9dba90da145632f9f11f\" returns successfully" Sep 9 22:00:05.420188 kubelet[2840]: I0909 22:00:05.420081 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85fb8bd9d8-2rqmt" podStartSLOduration=73.659478456 podStartE2EDuration="1m32.420049385s" podCreationTimestamp="2025-09-09 21:58:33 +0000 UTC" firstStartedPulling="2025-09-09 21:59:45.716388864 +0000 UTC m=+96.344241958" lastFinishedPulling="2025-09-09 22:00:04.476959793 +0000 UTC m=+115.104812887" observedRunningTime="2025-09-09 22:00:05.418073038 +0000 UTC m=+116.045926132" watchObservedRunningTime="2025-09-09 22:00:05.420049385 +0000 UTC m=+116.047902479" Sep 9 22:00:05.814436 systemd[1]: Started sshd@11-10.0.0.35:22-10.0.0.1:58702.service - OpenSSH per-connection server daemon (10.0.0.1:58702). Sep 9 22:00:06.027352 sshd[5505]: Accepted publickey for core from 10.0.0.1 port 58702 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 22:00:06.045078 sshd-session[5505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:00:06.074864 systemd-logind[1557]: New session 12 of user core. Sep 9 22:00:06.086757 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 22:00:06.609806 sshd[5508]: Connection closed by 10.0.0.1 port 58702 Sep 9 22:00:06.612603 sshd-session[5505]: pam_unix(sshd:session): session closed for user core Sep 9 22:00:06.632021 systemd[1]: sshd@11-10.0.0.35:22-10.0.0.1:58702.service: Deactivated successfully. Sep 9 22:00:06.639369 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 22:00:06.651937 systemd-logind[1557]: Session 12 logged out. Waiting for processes to exit. Sep 9 22:00:06.659525 systemd-logind[1557]: Removed session 12. 
Sep 9 22:00:11.669108 systemd[1]: Started sshd@12-10.0.0.35:22-10.0.0.1:43444.service - OpenSSH per-connection server daemon (10.0.0.1:43444). Sep 9 22:00:11.848257 sshd[5534]: Accepted publickey for core from 10.0.0.1 port 43444 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 22:00:11.854579 sshd-session[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:00:11.876839 systemd-logind[1557]: New session 13 of user core. Sep 9 22:00:11.900993 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 22:00:12.280090 sshd[5537]: Connection closed by 10.0.0.1 port 43444 Sep 9 22:00:12.280605 sshd-session[5534]: pam_unix(sshd:session): session closed for user core Sep 9 22:00:12.307206 systemd[1]: sshd@12-10.0.0.35:22-10.0.0.1:43444.service: Deactivated successfully. Sep 9 22:00:12.319144 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 22:00:12.324809 systemd-logind[1557]: Session 13 logged out. Waiting for processes to exit. Sep 9 22:00:12.335527 systemd-logind[1557]: Removed session 13. 
Sep 9 22:00:13.055282 containerd[1576]: time="2025-09-09T22:00:13.055040474Z" level=info msg="TaskExit event in podsandbox handler container_id:\"411d97edd1dbe41be78c267c65930cb6352ef988764faa76cce1f84f6dc9ce68\" id:\"380d8a6730afdbbca1c2aa1d89062bd213081de4656c3e3d40d7dbb3b258df5a\" pid:5562 exited_at:{seconds:1757455213 nanos:53680971}" Sep 9 22:00:13.731955 kubelet[2840]: E0909 22:00:13.730100 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:00:14.182072 containerd[1576]: time="2025-09-09T22:00:14.181507254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:00:14.188476 containerd[1576]: time="2025-09-09T22:00:14.188274537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 22:00:14.195806 containerd[1576]: time="2025-09-09T22:00:14.195249058Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:00:14.204705 containerd[1576]: time="2025-09-09T22:00:14.204612115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:00:14.205679 containerd[1576]: time="2025-09-09T22:00:14.205625007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 9.727176409s" Sep 9 22:00:14.205679 
containerd[1576]: time="2025-09-09T22:00:14.205666147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 22:00:14.241848 containerd[1576]: time="2025-09-09T22:00:14.241549321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 22:00:14.256101 containerd[1576]: time="2025-09-09T22:00:14.255252019Z" level=info msg="CreateContainer within sandbox \"2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 22:00:14.318045 containerd[1576]: time="2025-09-09T22:00:14.316553693Z" level=info msg="Container a618194948b75296589afdb1799afcd76621db167102f6c109dba41494977667: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:00:14.371286 containerd[1576]: time="2025-09-09T22:00:14.368576454Z" level=info msg="CreateContainer within sandbox \"2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a618194948b75296589afdb1799afcd76621db167102f6c109dba41494977667\"" Sep 9 22:00:14.371286 containerd[1576]: time="2025-09-09T22:00:14.369493407Z" level=info msg="StartContainer for \"a618194948b75296589afdb1799afcd76621db167102f6c109dba41494977667\"" Sep 9 22:00:14.371924 containerd[1576]: time="2025-09-09T22:00:14.371784441Z" level=info msg="connecting to shim a618194948b75296589afdb1799afcd76621db167102f6c109dba41494977667" address="unix:///run/containerd/s/9592dca3360f9cc40f4441e26672a617755bea74d6afc89ccfc6831e461e4ec3" protocol=ttrpc version=3 Sep 9 22:00:14.443157 systemd[1]: Started cri-containerd-a618194948b75296589afdb1799afcd76621db167102f6c109dba41494977667.scope - libcontainer container a618194948b75296589afdb1799afcd76621db167102f6c109dba41494977667. 
Sep 9 22:00:14.574670 containerd[1576]: time="2025-09-09T22:00:14.574593906Z" level=info msg="StartContainer for \"a618194948b75296589afdb1799afcd76621db167102f6c109dba41494977667\" returns successfully" Sep 9 22:00:15.344528 containerd[1576]: time="2025-09-09T22:00:15.344461865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb4ab06418191389da070349ab4342a3975ccaf7a5dcf95ae1b7b5f0f5eaff1f\" id:\"1ea16f393cbba8edfcc6eccdfeba483616f1b739179eadd7d5a3d316dcfa7e15\" pid:5631 exited_at:{seconds:1757455215 nanos:334603817}" Sep 9 22:00:17.321328 systemd[1]: Started sshd@13-10.0.0.35:22-10.0.0.1:43454.service - OpenSSH per-connection server daemon (10.0.0.1:43454). Sep 9 22:00:17.496093 sshd[5642]: Accepted publickey for core from 10.0.0.1 port 43454 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 22:00:17.502628 sshd-session[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:00:17.534690 systemd-logind[1557]: New session 14 of user core. Sep 9 22:00:17.553274 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 22:00:18.294728 sshd[5645]: Connection closed by 10.0.0.1 port 43454 Sep 9 22:00:18.296039 sshd-session[5642]: pam_unix(sshd:session): session closed for user core Sep 9 22:00:18.310688 systemd[1]: sshd@13-10.0.0.35:22-10.0.0.1:43454.service: Deactivated successfully. Sep 9 22:00:18.321342 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 22:00:18.325832 systemd-logind[1557]: Session 14 logged out. Waiting for processes to exit. Sep 9 22:00:18.333831 systemd-logind[1557]: Removed session 14. 
Sep 9 22:00:18.337745 containerd[1576]: time="2025-09-09T22:00:18.337640092Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:00:18.345323 containerd[1576]: time="2025-09-09T22:00:18.345125174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 22:00:18.348622 containerd[1576]: time="2025-09-09T22:00:18.347970304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.106364472s" Sep 9 22:00:18.348622 containerd[1576]: time="2025-09-09T22:00:18.348013409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 22:00:18.362246 containerd[1576]: time="2025-09-09T22:00:18.362181408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 22:00:18.403518 containerd[1576]: time="2025-09-09T22:00:18.403463191Z" level=info msg="CreateContainer within sandbox \"1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 22:00:18.487802 containerd[1576]: time="2025-09-09T22:00:18.485717178Z" level=info msg="Container 6aea6605fe532a4aea2e098d6f90b10235bc1b27477a6f9ff640ed9714e4ff35: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:00:18.528323 containerd[1576]: time="2025-09-09T22:00:18.527642613Z" level=info msg="CreateContainer within sandbox \"1fb7ca34b29f8709faf42a7f42c3100f8d19574a2a10b8972fcbd37a6b6f3380\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"6aea6605fe532a4aea2e098d6f90b10235bc1b27477a6f9ff640ed9714e4ff35\"" Sep 9 22:00:18.535325 containerd[1576]: time="2025-09-09T22:00:18.532430905Z" level=info msg="StartContainer for \"6aea6605fe532a4aea2e098d6f90b10235bc1b27477a6f9ff640ed9714e4ff35\"" Sep 9 22:00:18.535325 containerd[1576]: time="2025-09-09T22:00:18.534190396Z" level=info msg="connecting to shim 6aea6605fe532a4aea2e098d6f90b10235bc1b27477a6f9ff640ed9714e4ff35" address="unix:///run/containerd/s/020c3bf15c612d1f1d9ac46a11e2a0dde612fc6124b26bbb252db78250bd0136" protocol=ttrpc version=3 Sep 9 22:00:18.655383 systemd[1]: Started cri-containerd-6aea6605fe532a4aea2e098d6f90b10235bc1b27477a6f9ff640ed9714e4ff35.scope - libcontainer container 6aea6605fe532a4aea2e098d6f90b10235bc1b27477a6f9ff640ed9714e4ff35. Sep 9 22:00:18.990936 containerd[1576]: time="2025-09-09T22:00:18.990611735Z" level=info msg="StartContainer for \"6aea6605fe532a4aea2e098d6f90b10235bc1b27477a6f9ff640ed9714e4ff35\" returns successfully" Sep 9 22:00:21.503318 kubelet[2840]: I0909 22:00:21.503273 2840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 22:00:23.346318 systemd[1]: Started sshd@14-10.0.0.35:22-10.0.0.1:60136.service - OpenSSH per-connection server daemon (10.0.0.1:60136). Sep 9 22:00:24.098756 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 60136 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 22:00:24.105609 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:00:24.135145 systemd-logind[1557]: New session 15 of user core. Sep 9 22:00:24.160037 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 9 22:00:24.762227 sshd[5705]: Connection closed by 10.0.0.1 port 60136 Sep 9 22:00:24.767107 sshd-session[5698]: pam_unix(sshd:session): session closed for user core Sep 9 22:00:24.773311 kubelet[2840]: I0909 22:00:24.772260 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85fb8bd9d8-gxhlf" podStartSLOduration=80.887249037 podStartE2EDuration="1m51.772236772s" podCreationTimestamp="2025-09-09 21:58:33 +0000 UTC" firstStartedPulling="2025-09-09 21:59:47.475873203 +0000 UTC m=+98.103726297" lastFinishedPulling="2025-09-09 22:00:18.360860938 +0000 UTC m=+128.988714032" observedRunningTime="2025-09-09 22:00:19.553544319 +0000 UTC m=+130.181397504" watchObservedRunningTime="2025-09-09 22:00:24.772236772 +0000 UTC m=+135.400089886" Sep 9 22:00:24.808317 systemd[1]: sshd@14-10.0.0.35:22-10.0.0.1:60136.service: Deactivated successfully. Sep 9 22:00:24.823811 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 22:00:24.834676 systemd-logind[1557]: Session 15 logged out. Waiting for processes to exit. Sep 9 22:00:24.839816 systemd-logind[1557]: Removed session 15. Sep 9 22:00:26.718437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3711801705.mount: Deactivated successfully. 
Sep 9 22:00:28.337967 containerd[1576]: time="2025-09-09T22:00:28.337733475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb4ab06418191389da070349ab4342a3975ccaf7a5dcf95ae1b7b5f0f5eaff1f\" id:\"27ee9f59cab819b6f75f61787196887d0e0aade85c516a2b54173f17e0feb0e0\" pid:5748 exited_at:{seconds:1757455228 nanos:334689214}" Sep 9 22:00:29.707892 containerd[1576]: time="2025-09-09T22:00:29.707716354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:00:29.710108 containerd[1576]: time="2025-09-09T22:00:29.709902679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 9 22:00:29.713819 containerd[1576]: time="2025-09-09T22:00:29.711892871Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:00:29.723412 containerd[1576]: time="2025-09-09T22:00:29.723287917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:00:29.724445 containerd[1576]: time="2025-09-09T22:00:29.724251457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 11.362004371s" Sep 9 22:00:29.724445 containerd[1576]: time="2025-09-09T22:00:29.724312165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" 
Sep 9 22:00:29.736229 containerd[1576]: time="2025-09-09T22:00:29.736038458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 22:00:29.760824 containerd[1576]: time="2025-09-09T22:00:29.760617982Z" level=info msg="CreateContainer within sandbox \"2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 22:00:29.801805 containerd[1576]: time="2025-09-09T22:00:29.801028364Z" level=info msg="Container f914cf59d9f7675b3f7717108cc49f8eafd1251cc891949cb04565e3ebbdfd51: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:00:29.815967 systemd[1]: Started sshd@15-10.0.0.35:22-10.0.0.1:60146.service - OpenSSH per-connection server daemon (10.0.0.1:60146). Sep 9 22:00:29.850799 containerd[1576]: time="2025-09-09T22:00:29.850641738Z" level=info msg="CreateContainer within sandbox \"2501e19d84641a9e1dea13a4192f4e4d48ad963033afbc1fa208fe23d8e965bf\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f914cf59d9f7675b3f7717108cc49f8eafd1251cc891949cb04565e3ebbdfd51\"" Sep 9 22:00:29.854357 containerd[1576]: time="2025-09-09T22:00:29.854314130Z" level=info msg="StartContainer for \"f914cf59d9f7675b3f7717108cc49f8eafd1251cc891949cb04565e3ebbdfd51\"" Sep 9 22:00:29.859026 containerd[1576]: time="2025-09-09T22:00:29.858901689Z" level=info msg="connecting to shim f914cf59d9f7675b3f7717108cc49f8eafd1251cc891949cb04565e3ebbdfd51" address="unix:///run/containerd/s/ed55d293ab658320c88111b05d0edfe9d112531a72fe98a034f978922addf7cc" protocol=ttrpc version=3 Sep 9 22:00:29.954115 systemd[1]: Started cri-containerd-f914cf59d9f7675b3f7717108cc49f8eafd1251cc891949cb04565e3ebbdfd51.scope - libcontainer container f914cf59d9f7675b3f7717108cc49f8eafd1251cc891949cb04565e3ebbdfd51. 
Sep 9 22:00:29.971085 sshd[5764]: Accepted publickey for core from 10.0.0.1 port 60146 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 22:00:29.974149 sshd-session[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:00:29.999371 systemd-logind[1557]: New session 16 of user core. Sep 9 22:00:30.014071 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 22:00:30.119796 containerd[1576]: time="2025-09-09T22:00:30.119279534Z" level=info msg="StartContainer for \"f914cf59d9f7675b3f7717108cc49f8eafd1251cc891949cb04565e3ebbdfd51\" returns successfully" Sep 9 22:00:30.368511 sshd[5786]: Connection closed by 10.0.0.1 port 60146 Sep 9 22:00:30.370415 sshd-session[5764]: pam_unix(sshd:session): session closed for user core Sep 9 22:00:30.394055 systemd[1]: sshd@15-10.0.0.35:22-10.0.0.1:60146.service: Deactivated successfully. Sep 9 22:00:30.401601 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 22:00:30.403544 systemd-logind[1557]: Session 16 logged out. Waiting for processes to exit. Sep 9 22:00:30.406332 systemd-logind[1557]: Removed session 16. 
Sep 9 22:00:31.010099 containerd[1576]: time="2025-09-09T22:00:31.009917529Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f914cf59d9f7675b3f7717108cc49f8eafd1251cc891949cb04565e3ebbdfd51\" id:\"4077002ae8eb50f4b0dd9266c728686a4de6a4294159a369fd3de10ea2727dbe\" pid:5827 exited_at:{seconds:1757455231 nanos:7495880}" Sep 9 22:00:31.114716 kubelet[2840]: I0909 22:00:31.107697 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-cbmvm" podStartSLOduration=65.317664628 podStartE2EDuration="1m47.107669985s" podCreationTimestamp="2025-09-09 21:58:44 +0000 UTC" firstStartedPulling="2025-09-09 21:59:47.941565847 +0000 UTC m=+98.569418951" lastFinishedPulling="2025-09-09 22:00:29.731571214 +0000 UTC m=+140.359424308" observedRunningTime="2025-09-09 22:00:30.776992636 +0000 UTC m=+141.404845740" watchObservedRunningTime="2025-09-09 22:00:31.107669985 +0000 UTC m=+141.735523079" Sep 9 22:00:32.728896 kubelet[2840]: E0909 22:00:32.725144 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:00:35.412917 systemd[1]: Started sshd@16-10.0.0.35:22-10.0.0.1:59636.service - OpenSSH per-connection server daemon (10.0.0.1:59636). Sep 9 22:00:35.629266 sshd[5846]: Accepted publickey for core from 10.0.0.1 port 59636 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 22:00:35.640689 sshd-session[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:00:35.679723 systemd-logind[1557]: New session 17 of user core. Sep 9 22:00:35.690152 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 9 22:00:36.320166 sshd[5849]: Connection closed by 10.0.0.1 port 59636 Sep 9 22:00:36.305195 sshd-session[5846]: pam_unix(sshd:session): session closed for user core Sep 9 22:00:36.366304 systemd[1]: sshd@16-10.0.0.35:22-10.0.0.1:59636.service: Deactivated successfully. Sep 9 22:00:36.389934 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 22:00:36.395994 systemd-logind[1557]: Session 17 logged out. Waiting for processes to exit. Sep 9 22:00:36.414179 systemd-logind[1557]: Removed session 17. Sep 9 22:00:36.417861 systemd[1]: Started sshd@17-10.0.0.35:22-10.0.0.1:59642.service - OpenSSH per-connection server daemon (10.0.0.1:59642). Sep 9 22:00:36.692660 sshd[5863]: Accepted publickey for core from 10.0.0.1 port 59642 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 22:00:36.700665 sshd-session[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:00:36.742425 systemd-logind[1557]: New session 18 of user core. Sep 9 22:00:36.758216 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 22:00:37.413069 sshd[5866]: Connection closed by 10.0.0.1 port 59642 Sep 9 22:00:37.412086 sshd-session[5863]: pam_unix(sshd:session): session closed for user core Sep 9 22:00:37.427759 systemd[1]: sshd@17-10.0.0.35:22-10.0.0.1:59642.service: Deactivated successfully. Sep 9 22:00:37.432661 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 22:00:37.435830 systemd-logind[1557]: Session 18 logged out. Waiting for processes to exit. Sep 9 22:00:37.441170 systemd[1]: Started sshd@18-10.0.0.35:22-10.0.0.1:59658.service - OpenSSH per-connection server daemon (10.0.0.1:59658). Sep 9 22:00:37.446672 systemd-logind[1557]: Removed session 18. 
Sep 9 22:00:37.671402 sshd[5881]: Accepted publickey for core from 10.0.0.1 port 59658 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:00:37.684059 sshd-session[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:00:37.705367 systemd-logind[1557]: New session 19 of user core.
Sep 9 22:00:37.746329 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 22:00:38.189303 containerd[1576]: time="2025-09-09T22:00:38.187912237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 22:00:38.197507 containerd[1576]: time="2025-09-09T22:00:38.197421272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 9 22:00:38.201427 containerd[1576]: time="2025-09-09T22:00:38.201347611Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 22:00:38.213124 containerd[1576]: time="2025-09-09T22:00:38.213034057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 22:00:38.217832 containerd[1576]: time="2025-09-09T22:00:38.215169616Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 8.479064508s"
Sep 9 22:00:38.217832 containerd[1576]: time="2025-09-09T22:00:38.215235013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 9 22:00:38.226096 containerd[1576]: time="2025-09-09T22:00:38.225161269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 9 22:00:38.261379 sshd[5884]: Connection closed by 10.0.0.1 port 59658
Sep 9 22:00:38.261490 sshd-session[5881]: pam_unix(sshd:session): session closed for user core
Sep 9 22:00:38.274374 systemd[1]: sshd@18-10.0.0.35:22-10.0.0.1:59658.service: Deactivated successfully.
Sep 9 22:00:38.279659 containerd[1576]: time="2025-09-09T22:00:38.279585983Z" level=info msg="CreateContainer within sandbox \"8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 9 22:00:38.288506 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 22:00:38.297189 systemd-logind[1557]: Session 19 logged out. Waiting for processes to exit.
Sep 9 22:00:38.312823 systemd-logind[1557]: Removed session 19.
Sep 9 22:00:38.691831 containerd[1576]: time="2025-09-09T22:00:38.691168321Z" level=info msg="Container 6893b4bdbc6c2392e20e054726a223d5ca7bb8a13d5241f6677dfc8529804cae: CDI devices from CRI Config.CDIDevices: []"
Sep 9 22:00:38.794071 containerd[1576]: time="2025-09-09T22:00:38.790980048Z" level=info msg="CreateContainer within sandbox \"8ab313140b8a1a794eb8a0bc77b2bccd2307958e39e431201ed90df048ab3d2e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6893b4bdbc6c2392e20e054726a223d5ca7bb8a13d5241f6677dfc8529804cae\""
Sep 9 22:00:38.794071 containerd[1576]: time="2025-09-09T22:00:38.791895527Z" level=info msg="StartContainer for \"6893b4bdbc6c2392e20e054726a223d5ca7bb8a13d5241f6677dfc8529804cae\""
Sep 9 22:00:38.804010 containerd[1576]: time="2025-09-09T22:00:38.802412250Z" level=info msg="connecting to shim 6893b4bdbc6c2392e20e054726a223d5ca7bb8a13d5241f6677dfc8529804cae" address="unix:///run/containerd/s/52666e45309e0e1bac47cb75b52cfabbdbbdb19c31b7f578853e5b8cce37d6c4" protocol=ttrpc version=3
Sep 9 22:00:38.912288 systemd[1]: Started cri-containerd-6893b4bdbc6c2392e20e054726a223d5ca7bb8a13d5241f6677dfc8529804cae.scope - libcontainer container 6893b4bdbc6c2392e20e054726a223d5ca7bb8a13d5241f6677dfc8529804cae.
Sep 9 22:00:39.141345 containerd[1576]: time="2025-09-09T22:00:39.141130679Z" level=info msg="StartContainer for \"6893b4bdbc6c2392e20e054726a223d5ca7bb8a13d5241f6677dfc8529804cae\" returns successfully"
Sep 9 22:00:40.130170 kubelet[2840]: I0909 22:00:40.128857 2840 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 9 22:00:40.190607 kubelet[2840]: I0909 22:00:40.183461 2840 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 9 22:00:43.054705 containerd[1576]: time="2025-09-09T22:00:43.053971357Z" level=info msg="TaskExit event in podsandbox handler container_id:\"411d97edd1dbe41be78c267c65930cb6352ef988764faa76cce1f84f6dc9ce68\" id:\"ca85d4a21fe796001965398dddf68567256cff62634844b701d8e25e1f95cb2c\" pid:5942 exited_at:{seconds:1757455243 nanos:51386315}"
Sep 9 22:00:43.327917 systemd[1]: Started sshd@19-10.0.0.35:22-10.0.0.1:53082.service - OpenSSH per-connection server daemon (10.0.0.1:53082).
Sep 9 22:00:43.744370 sshd[5955]: Accepted publickey for core from 10.0.0.1 port 53082 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:00:43.750396 sshd-session[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:00:43.783950 systemd-logind[1557]: New session 20 of user core.
Sep 9 22:00:43.807239 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 22:00:44.737796 sshd[5958]: Connection closed by 10.0.0.1 port 53082
Sep 9 22:00:44.738033 sshd-session[5955]: pam_unix(sshd:session): session closed for user core
Sep 9 22:00:44.773123 systemd[1]: sshd@19-10.0.0.35:22-10.0.0.1:53082.service: Deactivated successfully.
Sep 9 22:00:44.791087 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 22:00:44.818945 systemd-logind[1557]: Session 20 logged out. Waiting for processes to exit.
Sep 9 22:00:44.855555 systemd-logind[1557]: Removed session 20.
Sep 9 22:00:48.295617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3837045376.mount: Deactivated successfully.
Sep 9 22:00:48.459642 containerd[1576]: time="2025-09-09T22:00:48.459365606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 9 22:00:48.459642 containerd[1576]: time="2025-09-09T22:00:48.459473996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 22:00:48.478621 containerd[1576]: time="2025-09-09T22:00:48.475051218Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 22:00:48.481805 containerd[1576]: time="2025-09-09T22:00:48.480919116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 22:00:48.482147 containerd[1576]: time="2025-09-09T22:00:48.482096064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 10.256874277s"
Sep 9 22:00:48.482303 containerd[1576]: time="2025-09-09T22:00:48.482282345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 9 22:00:48.506938 containerd[1576]: time="2025-09-09T22:00:48.506877452Z" level=info msg="CreateContainer within sandbox \"2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 9 22:00:48.552051 containerd[1576]: time="2025-09-09T22:00:48.547987827Z" level=info msg="Container 80de7c5f993f1821899fbf1c826ca2039080b0ce8ce48bcb06687ae61e18fe89: CDI devices from CRI Config.CDIDevices: []"
Sep 9 22:00:48.621438 containerd[1576]: time="2025-09-09T22:00:48.621097823Z" level=info msg="CreateContainer within sandbox \"2d60c9d676fd7f5f1bb904d67b53c364c331bb81ca611b27e70cfa466f6447b7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"80de7c5f993f1821899fbf1c826ca2039080b0ce8ce48bcb06687ae61e18fe89\""
Sep 9 22:00:48.633516 containerd[1576]: time="2025-09-09T22:00:48.628677965Z" level=info msg="StartContainer for \"80de7c5f993f1821899fbf1c826ca2039080b0ce8ce48bcb06687ae61e18fe89\""
Sep 9 22:00:48.641332 containerd[1576]: time="2025-09-09T22:00:48.637954141Z" level=info msg="connecting to shim 80de7c5f993f1821899fbf1c826ca2039080b0ce8ce48bcb06687ae61e18fe89" address="unix:///run/containerd/s/9592dca3360f9cc40f4441e26672a617755bea74d6afc89ccfc6831e461e4ec3" protocol=ttrpc version=3
Sep 9 22:00:48.729411 kubelet[2840]: E0909 22:00:48.726618 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 22:00:48.732625 systemd[1]: Started cri-containerd-80de7c5f993f1821899fbf1c826ca2039080b0ce8ce48bcb06687ae61e18fe89.scope - libcontainer container 80de7c5f993f1821899fbf1c826ca2039080b0ce8ce48bcb06687ae61e18fe89.
Sep 9 22:00:49.034489 containerd[1576]: time="2025-09-09T22:00:49.033925358Z" level=info msg="StartContainer for \"80de7c5f993f1821899fbf1c826ca2039080b0ce8ce48bcb06687ae61e18fe89\" returns successfully"
Sep 9 22:00:49.730799 kubelet[2840]: E0909 22:00:49.730217 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 22:00:49.815525 systemd[1]: Started sshd@20-10.0.0.35:22-10.0.0.1:53090.service - OpenSSH per-connection server daemon (10.0.0.1:53090).
Sep 9 22:00:49.917491 kubelet[2840]: I0909 22:00:49.914206 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nzhdk" podStartSLOduration=71.310529278 podStartE2EDuration="2m3.914184891s" podCreationTimestamp="2025-09-09 21:58:46 +0000 UTC" firstStartedPulling="2025-09-09 21:59:45.618081584 +0000 UTC m=+96.245934678" lastFinishedPulling="2025-09-09 22:00:38.221737197 +0000 UTC m=+148.849590291" observedRunningTime="2025-09-09 22:00:39.756921218 +0000 UTC m=+150.384774343" watchObservedRunningTime="2025-09-09 22:00:49.914184891 +0000 UTC m=+160.542037985"
Sep 9 22:00:50.246564 sshd[6017]: Accepted publickey for core from 10.0.0.1 port 53090 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:00:50.252984 sshd-session[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:00:50.281280 systemd-logind[1557]: New session 21 of user core.
Sep 9 22:00:50.293461 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 22:00:51.001635 sshd[6022]: Connection closed by 10.0.0.1 port 53090
Sep 9 22:00:51.001746 sshd-session[6017]: pam_unix(sshd:session): session closed for user core
Sep 9 22:00:51.009610 systemd-logind[1557]: Session 21 logged out. Waiting for processes to exit.
Sep 9 22:00:51.010663 systemd[1]: sshd@20-10.0.0.35:22-10.0.0.1:53090.service: Deactivated successfully.
Sep 9 22:00:51.023166 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 22:00:51.035327 systemd-logind[1557]: Removed session 21.
Sep 9 22:00:52.729801 kubelet[2840]: E0909 22:00:52.729619 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 22:00:56.038729 systemd[1]: Started sshd@21-10.0.0.35:22-10.0.0.1:42720.service - OpenSSH per-connection server daemon (10.0.0.1:42720).
Sep 9 22:00:56.161169 sshd[6042]: Accepted publickey for core from 10.0.0.1 port 42720 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:00:56.175213 sshd-session[6042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:00:56.208904 systemd-logind[1557]: New session 22 of user core.
Sep 9 22:00:56.232999 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 22:00:56.796086 sshd[6045]: Connection closed by 10.0.0.1 port 42720
Sep 9 22:00:56.796851 sshd-session[6042]: pam_unix(sshd:session): session closed for user core
Sep 9 22:00:56.804996 systemd[1]: sshd@21-10.0.0.35:22-10.0.0.1:42720.service: Deactivated successfully.
Sep 9 22:00:56.811033 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 22:00:56.833529 systemd-logind[1557]: Session 22 logged out. Waiting for processes to exit.
Sep 9 22:00:56.842057 systemd-logind[1557]: Removed session 22.
Sep 9 22:00:58.373737 containerd[1576]: time="2025-09-09T22:00:58.373663105Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb4ab06418191389da070349ab4342a3975ccaf7a5dcf95ae1b7b5f0f5eaff1f\" id:\"10517d869eb45f989c0dcb268d04ca6bfaedb619c84082b5cf4822ced5d0db66\" pid:6070 exited_at:{seconds:1757455258 nanos:359024210}"
Sep 9 22:01:01.014675 containerd[1576]: time="2025-09-09T22:01:01.014608730Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f914cf59d9f7675b3f7717108cc49f8eafd1251cc891949cb04565e3ebbdfd51\" id:\"769bdf97e70d5bde6e480645a9dd396850ebbad320a6a3135ffb9eb6c18921a5\" pid:6092 exited_at:{seconds:1757455261 nanos:13916786}"
Sep 9 22:01:01.853924 systemd[1]: Started sshd@22-10.0.0.35:22-10.0.0.1:58446.service - OpenSSH per-connection server daemon (10.0.0.1:58446).
Sep 9 22:01:02.149511 sshd[6104]: Accepted publickey for core from 10.0.0.1 port 58446 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:01:02.153695 sshd-session[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:01:02.182943 systemd-logind[1557]: New session 23 of user core.
Sep 9 22:01:02.216146 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 22:01:02.890708 sshd[6107]: Connection closed by 10.0.0.1 port 58446
Sep 9 22:01:02.895551 sshd-session[6104]: pam_unix(sshd:session): session closed for user core
Sep 9 22:01:02.919127 systemd[1]: sshd@22-10.0.0.35:22-10.0.0.1:58446.service: Deactivated successfully.
Sep 9 22:01:02.923177 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 22:01:02.926157 systemd-logind[1557]: Session 23 logged out. Waiting for processes to exit.
Sep 9 22:01:02.932834 systemd-logind[1557]: Removed session 23.
Sep 9 22:01:05.726547 kubelet[2840]: E0909 22:01:05.726008 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 22:01:07.945126 systemd[1]: Started sshd@23-10.0.0.35:22-10.0.0.1:58458.service - OpenSSH per-connection server daemon (10.0.0.1:58458).
Sep 9 22:01:08.100008 sshd[6130]: Accepted publickey for core from 10.0.0.1 port 58458 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:01:08.111739 sshd-session[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:01:08.149228 systemd-logind[1557]: New session 24 of user core.
Sep 9 22:01:08.169136 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 22:01:08.829390 sshd[6133]: Connection closed by 10.0.0.1 port 58458
Sep 9 22:01:08.829240 sshd-session[6130]: pam_unix(sshd:session): session closed for user core
Sep 9 22:01:08.842677 systemd[1]: sshd@23-10.0.0.35:22-10.0.0.1:58458.service: Deactivated successfully.
Sep 9 22:01:08.844087 systemd-logind[1557]: Session 24 logged out. Waiting for processes to exit.
Sep 9 22:01:08.848133 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 22:01:08.864364 systemd-logind[1557]: Removed session 24.
Sep 9 22:01:13.261929 containerd[1576]: time="2025-09-09T22:01:13.261735622Z" level=info msg="TaskExit event in podsandbox handler container_id:\"411d97edd1dbe41be78c267c65930cb6352ef988764faa76cce1f84f6dc9ce68\" id:\"87202806b5ee76dfb366436f58870a4ccdcadbe2c632444f7e30190c1bdcd5ef\" pid:6160 exited_at:{seconds:1757455273 nanos:168082465}"
Sep 9 22:01:13.891663 systemd[1]: Started sshd@24-10.0.0.35:22-10.0.0.1:42300.service - OpenSSH per-connection server daemon (10.0.0.1:42300).
Sep 9 22:01:14.116735 sshd[6174]: Accepted publickey for core from 10.0.0.1 port 42300 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:01:14.122892 sshd-session[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:01:14.157677 systemd-logind[1557]: New session 25 of user core.
Sep 9 22:01:14.173298 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 22:01:14.782869 sshd[6179]: Connection closed by 10.0.0.1 port 42300
Sep 9 22:01:14.783846 sshd-session[6174]: pam_unix(sshd:session): session closed for user core
Sep 9 22:01:14.792287 systemd[1]: sshd@24-10.0.0.35:22-10.0.0.1:42300.service: Deactivated successfully.
Sep 9 22:01:14.793317 systemd-logind[1557]: Session 25 logged out. Waiting for processes to exit.
Sep 9 22:01:14.796660 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 22:01:14.805480 systemd-logind[1557]: Removed session 25.
Sep 9 22:01:15.399555 containerd[1576]: time="2025-09-09T22:01:15.398209918Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb4ab06418191389da070349ab4342a3975ccaf7a5dcf95ae1b7b5f0f5eaff1f\" id:\"09c56b730591c522232c8b3e9a7f418bc814210595863b607b8c219f5dd4bd47\" pid:6204 exited_at:{seconds:1757455275 nanos:397038157}"
Sep 9 22:01:15.696707 containerd[1576]: time="2025-09-09T22:01:15.696348786Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f914cf59d9f7675b3f7717108cc49f8eafd1251cc891949cb04565e3ebbdfd51\" id:\"6e7535bd2cc5bb021ca001030460c3d97b35faf11384a42573de860a541f90a1\" pid:6227 exited_at:{seconds:1757455275 nanos:695851671}"
Sep 9 22:01:19.835894 systemd[1]: Started sshd@25-10.0.0.35:22-10.0.0.1:42302.service - OpenSSH per-connection server daemon (10.0.0.1:42302).
Sep 9 22:01:20.015253 sshd[6260]: Accepted publickey for core from 10.0.0.1 port 42302 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:01:20.020403 sshd-session[6260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:01:20.042739 systemd-logind[1557]: New session 26 of user core.
Sep 9 22:01:20.057936 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 9 22:01:20.474581 sshd[6266]: Connection closed by 10.0.0.1 port 42302
Sep 9 22:01:20.477230 sshd-session[6260]: pam_unix(sshd:session): session closed for user core
Sep 9 22:01:20.496884 systemd[1]: sshd@25-10.0.0.35:22-10.0.0.1:42302.service: Deactivated successfully.
Sep 9 22:01:20.508189 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 22:01:20.516309 systemd-logind[1557]: Session 26 logged out. Waiting for processes to exit.
Sep 9 22:01:20.524633 systemd-logind[1557]: Removed session 26.
Sep 9 22:01:22.734186 kubelet[2840]: E0909 22:01:22.734070 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 22:01:25.532162 systemd[1]: Started sshd@26-10.0.0.35:22-10.0.0.1:55626.service - OpenSSH per-connection server daemon (10.0.0.1:55626).
Sep 9 22:01:25.727944 sshd[6279]: Accepted publickey for core from 10.0.0.1 port 55626 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:01:25.732190 sshd-session[6279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:01:25.764523 systemd-logind[1557]: New session 27 of user core.
Sep 9 22:01:25.785318 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 9 22:01:26.017785 sshd[6282]: Connection closed by 10.0.0.1 port 55626
Sep 9 22:01:26.018955 sshd-session[6279]: pam_unix(sshd:session): session closed for user core
Sep 9 22:01:26.030713 systemd[1]: sshd@26-10.0.0.35:22-10.0.0.1:55626.service: Deactivated successfully.
Sep 9 22:01:26.045746 systemd[1]: session-27.scope: Deactivated successfully.
Sep 9 22:01:26.054879 systemd-logind[1557]: Session 27 logged out. Waiting for processes to exit.
Sep 9 22:01:26.058565 systemd-logind[1557]: Removed session 27.
Sep 9 22:01:28.261253 containerd[1576]: time="2025-09-09T22:01:28.261190497Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb4ab06418191389da070349ab4342a3975ccaf7a5dcf95ae1b7b5f0f5eaff1f\" id:\"ebf4e5297f63c0be915088e9cd0937d976774c9a140ea7db17a8075dd6125a7b\" pid:6307 exited_at:{seconds:1757455288 nanos:260910561}"
Sep 9 22:01:30.963432 containerd[1576]: time="2025-09-09T22:01:30.963117174Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f914cf59d9f7675b3f7717108cc49f8eafd1251cc891949cb04565e3ebbdfd51\" id:\"ca437d43c703d17fe301fa4d6208fedb2171c4d5dd419d9dd5d1c8b7d38028c8\" pid:6329 exited_at:{seconds:1757455290 nanos:961832604}"
Sep 9 22:01:31.068406 systemd[1]: Started sshd@27-10.0.0.35:22-10.0.0.1:43538.service - OpenSSH per-connection server daemon (10.0.0.1:43538).
Sep 9 22:01:31.318350 sshd[6340]: Accepted publickey for core from 10.0.0.1 port 43538 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:01:31.328379 sshd-session[6340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:01:31.364061 systemd-logind[1557]: New session 28 of user core.
Sep 9 22:01:31.386924 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 9 22:01:31.899480 sshd[6343]: Connection closed by 10.0.0.1 port 43538
Sep 9 22:01:31.902327 sshd-session[6340]: pam_unix(sshd:session): session closed for user core
Sep 9 22:01:31.924118 systemd[1]: sshd@27-10.0.0.35:22-10.0.0.1:43538.service: Deactivated successfully.
Sep 9 22:01:31.934819 systemd[1]: session-28.scope: Deactivated successfully.
Sep 9 22:01:31.947132 systemd-logind[1557]: Session 28 logged out. Waiting for processes to exit.
Sep 9 22:01:31.970161 systemd-logind[1557]: Removed session 28.
Sep 9 22:01:33.728671 kubelet[2840]: E0909 22:01:33.726618 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 22:01:36.726380 kubelet[2840]: E0909 22:01:36.726209 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 22:01:36.913984 systemd[1]: Started sshd@28-10.0.0.35:22-10.0.0.1:43540.service - OpenSSH per-connection server daemon (10.0.0.1:43540).
Sep 9 22:01:37.032043 sshd[6357]: Accepted publickey for core from 10.0.0.1 port 43540 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:01:37.035190 sshd-session[6357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:01:37.048934 systemd-logind[1557]: New session 29 of user core.
Sep 9 22:01:37.060297 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 9 22:01:37.486643 sshd[6360]: Connection closed by 10.0.0.1 port 43540
Sep 9 22:01:37.487038 sshd-session[6357]: pam_unix(sshd:session): session closed for user core
Sep 9 22:01:37.494401 systemd[1]: sshd@28-10.0.0.35:22-10.0.0.1:43540.service: Deactivated successfully.
Sep 9 22:01:37.497901 systemd[1]: session-29.scope: Deactivated successfully.
Sep 9 22:01:37.499502 systemd-logind[1557]: Session 29 logged out. Waiting for processes to exit.
Sep 9 22:01:37.502549 systemd-logind[1557]: Removed session 29.
Sep 9 22:01:42.567568 systemd[1]: Started sshd@29-10.0.0.35:22-10.0.0.1:57848.service - OpenSSH per-connection server daemon (10.0.0.1:57848).
Sep 9 22:01:42.669944 sshd[6373]: Accepted publickey for core from 10.0.0.1 port 57848 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:01:42.677477 sshd-session[6373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:01:42.708004 systemd-logind[1557]: New session 30 of user core.
Sep 9 22:01:42.732151 systemd[1]: Started session-30.scope - Session 30 of User core.
Sep 9 22:01:43.010077 containerd[1576]: time="2025-09-09T22:01:43.009996361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"411d97edd1dbe41be78c267c65930cb6352ef988764faa76cce1f84f6dc9ce68\" id:\"bd7e73e7cd4e7ffa83dde69bca1f021eb749b04ff0c6c24232dabb4868b3a7d5\" pid:6389 exited_at:{seconds:1757455303 nanos:2042252}"
Sep 9 22:01:43.385116 sshd[6376]: Connection closed by 10.0.0.1 port 57848
Sep 9 22:01:43.385276 sshd-session[6373]: pam_unix(sshd:session): session closed for user core
Sep 9 22:01:43.418018 systemd[1]: sshd@29-10.0.0.35:22-10.0.0.1:57848.service: Deactivated successfully.
Sep 9 22:01:43.429197 systemd[1]: session-30.scope: Deactivated successfully.
Sep 9 22:01:43.442885 systemd-logind[1557]: Session 30 logged out. Waiting for processes to exit.
Sep 9 22:01:43.457081 systemd[1]: Started sshd@30-10.0.0.35:22-10.0.0.1:57862.service - OpenSSH per-connection server daemon (10.0.0.1:57862).
Sep 9 22:01:43.468239 systemd-logind[1557]: Removed session 30.
Sep 9 22:01:43.888433 sshd[6414]: Accepted publickey for core from 10.0.0.1 port 57862 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:01:43.890530 sshd-session[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:01:43.926481 systemd-logind[1557]: New session 31 of user core.
Sep 9 22:01:43.940306 systemd[1]: Started session-31.scope - Session 31 of User core.
Sep 9 22:01:46.410548 sshd[6417]: Connection closed by 10.0.0.1 port 57862
Sep 9 22:01:46.413292 sshd-session[6414]: pam_unix(sshd:session): session closed for user core
Sep 9 22:01:46.443686 systemd[1]: sshd@30-10.0.0.35:22-10.0.0.1:57862.service: Deactivated successfully.
Sep 9 22:01:46.448792 systemd[1]: session-31.scope: Deactivated successfully.
Sep 9 22:01:46.451250 systemd-logind[1557]: Session 31 logged out. Waiting for processes to exit.
Sep 9 22:01:46.460401 systemd[1]: Started sshd@31-10.0.0.35:22-10.0.0.1:57870.service - OpenSSH per-connection server daemon (10.0.0.1:57870).
Sep 9 22:01:46.466209 systemd-logind[1557]: Removed session 31.
Sep 9 22:01:46.550922 sshd[6432]: Accepted publickey for core from 10.0.0.1 port 57870 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:01:46.553312 sshd-session[6432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:01:46.561457 systemd-logind[1557]: New session 32 of user core.
Sep 9 22:01:46.576150 systemd[1]: Started session-32.scope - Session 32 of User core.
Sep 9 22:01:47.897384 sshd[6435]: Connection closed by 10.0.0.1 port 57870
Sep 9 22:01:47.897551 sshd-session[6432]: pam_unix(sshd:session): session closed for user core
Sep 9 22:01:47.923143 systemd[1]: sshd@31-10.0.0.35:22-10.0.0.1:57870.service: Deactivated successfully.
Sep 9 22:01:47.928691 systemd[1]: session-32.scope: Deactivated successfully.
Sep 9 22:01:47.934227 systemd-logind[1557]: Session 32 logged out. Waiting for processes to exit.
Sep 9 22:01:47.945032 systemd[1]: Started sshd@32-10.0.0.35:22-10.0.0.1:57882.service - OpenSSH per-connection server daemon (10.0.0.1:57882).
Sep 9 22:01:47.948232 systemd-logind[1557]: Removed session 32.
Sep 9 22:01:48.053526 sshd[6475]: Accepted publickey for core from 10.0.0.1 port 57882 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:01:48.055161 sshd-session[6475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:01:48.083852 systemd-logind[1557]: New session 33 of user core.
Sep 9 22:01:48.101531 systemd[1]: Started session-33.scope - Session 33 of User core.
Sep 9 22:01:50.007656 sshd[6478]: Connection closed by 10.0.0.1 port 57882
Sep 9 22:01:50.006573 sshd-session[6475]: pam_unix(sshd:session): session closed for user core
Sep 9 22:01:50.028444 systemd[1]: sshd@32-10.0.0.35:22-10.0.0.1:57882.service: Deactivated successfully.
Sep 9 22:01:50.037790 systemd[1]: session-33.scope: Deactivated successfully.
Sep 9 22:01:50.039444 systemd-logind[1557]: Session 33 logged out. Waiting for processes to exit.
Sep 9 22:01:50.047888 systemd[1]: Started sshd@33-10.0.0.35:22-10.0.0.1:54074.service - OpenSSH per-connection server daemon (10.0.0.1:54074).
Sep 9 22:01:50.049950 systemd-logind[1557]: Removed session 33.
Sep 9 22:01:50.232249 sshd[6489]: Accepted publickey for core from 10.0.0.1 port 54074 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:01:50.234805 sshd-session[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:01:50.247915 systemd-logind[1557]: New session 34 of user core.
Sep 9 22:01:50.281333 systemd[1]: Started session-34.scope - Session 34 of User core.
Sep 9 22:01:51.042619 sshd[6492]: Connection closed by 10.0.0.1 port 54074
Sep 9 22:01:51.051592 sshd-session[6489]: pam_unix(sshd:session): session closed for user core
Sep 9 22:01:51.062061 systemd[1]: sshd@33-10.0.0.35:22-10.0.0.1:54074.service: Deactivated successfully.
Sep 9 22:01:51.069020 systemd[1]: session-34.scope: Deactivated successfully.
Sep 9 22:01:51.072946 systemd-logind[1557]: Session 34 logged out. Waiting for processes to exit.
Sep 9 22:01:51.080935 systemd-logind[1557]: Removed session 34.
Sep 9 22:01:52.789798 kubelet[2840]: E0909 22:01:52.789311 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 22:01:56.043785 systemd[1]: Started sshd@34-10.0.0.35:22-10.0.0.1:54078.service - OpenSSH per-connection server daemon (10.0.0.1:54078).
Sep 9 22:01:56.123567 sshd[6505]: Accepted publickey for core from 10.0.0.1 port 54078 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:01:56.125853 sshd-session[6505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:01:56.131175 systemd-logind[1557]: New session 35 of user core.
Sep 9 22:01:56.140682 systemd[1]: Started session-35.scope - Session 35 of User core.
Sep 9 22:01:56.637830 sshd[6508]: Connection closed by 10.0.0.1 port 54078
Sep 9 22:01:56.639165 sshd-session[6505]: pam_unix(sshd:session): session closed for user core
Sep 9 22:01:56.647130 systemd[1]: sshd@34-10.0.0.35:22-10.0.0.1:54078.service: Deactivated successfully.
Sep 9 22:01:56.650712 systemd[1]: session-35.scope: Deactivated successfully.
Sep 9 22:01:56.654649 systemd-logind[1557]: Session 35 logged out. Waiting for processes to exit.
Sep 9 22:01:56.657004 systemd-logind[1557]: Removed session 35.
Sep 9 22:01:58.261463 containerd[1576]: time="2025-09-09T22:01:58.261401035Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb4ab06418191389da070349ab4342a3975ccaf7a5dcf95ae1b7b5f0f5eaff1f\" id:\"68f582d72ff8ba79ee7bf25aa692837577a65e7847592a48fe4c89cd4b536358\" pid:6533 exited_at:{seconds:1757455318 nanos:261003367}"
Sep 9 22:02:00.755275 containerd[1576]: time="2025-09-09T22:02:00.755215308Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f914cf59d9f7675b3f7717108cc49f8eafd1251cc891949cb04565e3ebbdfd51\" id:\"00199a0074a9ce2ce73179d1a67dea099c4c4e12b12c812577c7f35446f871e1\" pid:6554 exited_at:{seconds:1757455320 nanos:754853448}"
Sep 9 22:02:01.653349 systemd[1]: Started sshd@35-10.0.0.35:22-10.0.0.1:53626.service - OpenSSH per-connection server daemon (10.0.0.1:53626).
Sep 9 22:02:01.710159 sshd[6567]: Accepted publickey for core from 10.0.0.1 port 53626 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:02:01.711605 sshd-session[6567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:02:01.715854 systemd-logind[1557]: New session 36 of user core.
Sep 9 22:02:01.730940 systemd[1]: Started session-36.scope - Session 36 of User core.
Sep 9 22:02:02.269250 sshd[6570]: Connection closed by 10.0.0.1 port 53626
Sep 9 22:02:02.269583 sshd-session[6567]: pam_unix(sshd:session): session closed for user core
Sep 9 22:02:02.274499 systemd[1]: sshd@35-10.0.0.35:22-10.0.0.1:53626.service: Deactivated successfully.
Sep 9 22:02:02.276614 systemd[1]: session-36.scope: Deactivated successfully.
Sep 9 22:02:02.277647 systemd-logind[1557]: Session 36 logged out. Waiting for processes to exit.
Sep 9 22:02:02.279529 systemd-logind[1557]: Removed session 36.
Sep 9 22:02:07.290472 systemd[1]: Started sshd@36-10.0.0.35:22-10.0.0.1:53628.service - OpenSSH per-connection server daemon (10.0.0.1:53628).
Sep 9 22:02:07.360713 sshd[6584]: Accepted publickey for core from 10.0.0.1 port 53628 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:02:07.362476 sshd-session[6584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:02:07.367121 systemd-logind[1557]: New session 37 of user core.
Sep 9 22:02:07.378084 systemd[1]: Started session-37.scope - Session 37 of User core.
Sep 9 22:02:07.692588 sshd[6587]: Connection closed by 10.0.0.1 port 53628
Sep 9 22:02:07.693213 sshd-session[6584]: pam_unix(sshd:session): session closed for user core
Sep 9 22:02:07.697633 systemd[1]: sshd@36-10.0.0.35:22-10.0.0.1:53628.service: Deactivated successfully.
Sep 9 22:02:07.700184 systemd[1]: session-37.scope: Deactivated successfully.
Sep 9 22:02:07.701315 systemd-logind[1557]: Session 37 logged out. Waiting for processes to exit.
Sep 9 22:02:07.702699 systemd-logind[1557]: Removed session 37.
Sep 9 22:02:12.714621 systemd[1]: Started sshd@37-10.0.0.35:22-10.0.0.1:56580.service - OpenSSH per-connection server daemon (10.0.0.1:56580).
Sep 9 22:02:12.725159 kubelet[2840]: E0909 22:02:12.725116 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 22:02:12.768967 sshd[6602]: Accepted publickey for core from 10.0.0.1 port 56580 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:02:12.770502 sshd-session[6602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:02:12.775076 systemd-logind[1557]: New session 38 of user core.
Sep 9 22:02:12.784907 systemd[1]: Started session-38.scope - Session 38 of User core.
Sep 9 22:02:12.956456 sshd[6624]: Connection closed by 10.0.0.1 port 56580
Sep 9 22:02:12.957969 sshd-session[6602]: pam_unix(sshd:session): session closed for user core
Sep 9 22:02:12.962368 systemd-logind[1557]: Session 38 logged out. Waiting for processes to exit.
Sep 9 22:02:12.962733 systemd[1]: sshd@37-10.0.0.35:22-10.0.0.1:56580.service: Deactivated successfully.
Sep 9 22:02:12.965726 systemd[1]: session-38.scope: Deactivated successfully.
Sep 9 22:02:12.970617 systemd-logind[1557]: Removed session 38.
Sep 9 22:02:12.996686 containerd[1576]: time="2025-09-09T22:02:12.996624998Z" level=info msg="TaskExit event in podsandbox handler container_id:\"411d97edd1dbe41be78c267c65930cb6352ef988764faa76cce1f84f6dc9ce68\" id:\"70a4d10ce94291362f668797dbf5e2e336250ec7cc820e843f0a9b46d2580016\" pid:6618 exited_at:{seconds:1757455332 nanos:996264852}"
Sep 9 22:02:15.204268 containerd[1576]: time="2025-09-09T22:02:15.204219225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb4ab06418191389da070349ab4342a3975ccaf7a5dcf95ae1b7b5f0f5eaff1f\" id:\"0efb17ae9b26178bc48389f8d8c0cbb5a572291565ff68ec04a4b9502fe1ac53\" pid:6656 exited_at:{seconds:1757455335 nanos:203953429}"
Sep 9 22:02:15.569669 containerd[1576]: time="2025-09-09T22:02:15.569611888Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f914cf59d9f7675b3f7717108cc49f8eafd1251cc891949cb04565e3ebbdfd51\" id:\"c667089f1528bbce1290e1ecd8d627f9af8f6958c121191d42247349b5ede598\" pid:6678 exited_at:{seconds:1757455335 nanos:569029589}"
Sep 9 22:02:15.726502 kubelet[2840]: E0909 22:02:15.724868 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 22:02:15.726502 kubelet[2840]: E0909 22:02:15.725057 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 22:02:17.973753 systemd[1]: Started sshd@38-10.0.0.35:22-10.0.0.1:56586.service - OpenSSH per-connection server daemon (10.0.0.1:56586).
Sep 9 22:02:18.405513 sshd[6691]: Accepted publickey for core from 10.0.0.1 port 56586 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:02:18.407286 sshd-session[6691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:02:18.412504 systemd-logind[1557]: New session 39 of user core.
Sep 9 22:02:18.421936 systemd[1]: Started session-39.scope - Session 39 of User core.
Sep 9 22:02:19.377462 sshd[6694]: Connection closed by 10.0.0.1 port 56586
Sep 9 22:02:19.377916 sshd-session[6691]: pam_unix(sshd:session): session closed for user core
Sep 9 22:02:19.383063 systemd[1]: sshd@38-10.0.0.35:22-10.0.0.1:56586.service: Deactivated successfully.
Sep 9 22:02:19.386007 systemd[1]: session-39.scope: Deactivated successfully.
Sep 9 22:02:19.387137 systemd-logind[1557]: Session 39 logged out. Waiting for processes to exit.
Sep 9 22:02:19.389137 systemd-logind[1557]: Removed session 39.
Sep 9 22:02:24.391214 systemd[1]: Started sshd@39-10.0.0.35:22-10.0.0.1:50648.service - OpenSSH per-connection server daemon (10.0.0.1:50648).
Sep 9 22:02:24.452124 sshd[6707]: Accepted publickey for core from 10.0.0.1 port 50648 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:02:24.453956 sshd-session[6707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:02:24.458872 systemd-logind[1557]: New session 40 of user core.
Sep 9 22:02:24.470096 systemd[1]: Started session-40.scope - Session 40 of User core.
Sep 9 22:02:24.609687 sshd[6710]: Connection closed by 10.0.0.1 port 50648
Sep 9 22:02:24.610128 sshd-session[6707]: pam_unix(sshd:session): session closed for user core
Sep 9 22:02:24.614597 systemd[1]: sshd@39-10.0.0.35:22-10.0.0.1:50648.service: Deactivated successfully.
Sep 9 22:02:24.617051 systemd[1]: session-40.scope: Deactivated successfully.
Sep 9 22:02:24.620546 systemd-logind[1557]: Session 40 logged out. Waiting for processes to exit.
Sep 9 22:02:24.621824 systemd-logind[1557]: Removed session 40.
Sep 9 22:02:28.217374 containerd[1576]: time="2025-09-09T22:02:28.217303331Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb4ab06418191389da070349ab4342a3975ccaf7a5dcf95ae1b7b5f0f5eaff1f\" id:\"2ff4cdc8b92eb4c0280d4134f6d54510679e1d9240a92ef1431ca60474377df2\" pid:6741 exited_at:{seconds:1757455348 nanos:216994273}"
Sep 9 22:02:28.727184 kubelet[2840]: E0909 22:02:28.725544 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 22:02:29.643826 systemd[1]: Started sshd@40-10.0.0.35:22-10.0.0.1:50660.service - OpenSSH per-connection server daemon (10.0.0.1:50660).
Sep 9 22:02:29.811828 sshd[6750]: Accepted publickey for core from 10.0.0.1 port 50660 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:02:29.818238 sshd-session[6750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:02:29.854836 systemd-logind[1557]: New session 41 of user core.
Sep 9 22:02:29.881247 systemd[1]: Started session-41.scope - Session 41 of User core.
Sep 9 22:02:30.231065 sshd[6753]: Connection closed by 10.0.0.1 port 50660
Sep 9 22:02:30.231607 sshd-session[6750]: pam_unix(sshd:session): session closed for user core
Sep 9 22:02:30.246600 systemd[1]: sshd@40-10.0.0.35:22-10.0.0.1:50660.service: Deactivated successfully.
Sep 9 22:02:30.252170 systemd[1]: session-41.scope: Deactivated successfully.
Sep 9 22:02:30.254982 systemd-logind[1557]: Session 41 logged out. Waiting for processes to exit.
Sep 9 22:02:30.269202 systemd-logind[1557]: Removed session 41.
Sep 9 22:02:30.957582 containerd[1576]: time="2025-09-09T22:02:30.957522169Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f914cf59d9f7675b3f7717108cc49f8eafd1251cc891949cb04565e3ebbdfd51\" id:\"236b4fd2d63f21ccc080f7184ebc916dab899c8f6dfa5deedbae5689089166e0\" pid:6778 exited_at:{seconds:1757455350 nanos:957020775}"
Sep 9 22:02:34.734477 kubelet[2840]: E0909 22:02:34.729446 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 22:02:35.297895 systemd[1]: Started sshd@41-10.0.0.35:22-10.0.0.1:46486.service - OpenSSH per-connection server daemon (10.0.0.1:46486).
Sep 9 22:02:35.465829 sshd[6793]: Accepted publickey for core from 10.0.0.1 port 46486 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:02:35.471448 sshd-session[6793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:02:35.505006 systemd-logind[1557]: New session 42 of user core.
Sep 9 22:02:35.532313 systemd[1]: Started session-42.scope - Session 42 of User core.
Sep 9 22:02:35.989638 sshd[6796]: Connection closed by 10.0.0.1 port 46486
Sep 9 22:02:35.987990 sshd-session[6793]: pam_unix(sshd:session): session closed for user core
Sep 9 22:02:35.995810 systemd-logind[1557]: Session 42 logged out. Waiting for processes to exit.
Sep 9 22:02:35.997429 systemd[1]: sshd@41-10.0.0.35:22-10.0.0.1:46486.service: Deactivated successfully.
Sep 9 22:02:36.008152 systemd[1]: session-42.scope: Deactivated successfully.
Sep 9 22:02:36.011427 systemd-logind[1557]: Removed session 42.
Sep 9 22:02:41.024293 systemd[1]: Started sshd@42-10.0.0.35:22-10.0.0.1:55078.service - OpenSSH per-connection server daemon (10.0.0.1:55078).
Sep 9 22:02:41.312407 sshd[6811]: Accepted publickey for core from 10.0.0.1 port 55078 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:02:41.315516 sshd-session[6811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:02:41.344488 systemd-logind[1557]: New session 43 of user core.
Sep 9 22:02:41.356011 systemd[1]: Started session-43.scope - Session 43 of User core.
Sep 9 22:02:42.003126 sshd[6814]: Connection closed by 10.0.0.1 port 55078
Sep 9 22:02:42.005068 sshd-session[6811]: pam_unix(sshd:session): session closed for user core
Sep 9 22:02:42.021705 systemd[1]: sshd@42-10.0.0.35:22-10.0.0.1:55078.service: Deactivated successfully.
Sep 9 22:02:42.026821 systemd[1]: session-43.scope: Deactivated successfully.
Sep 9 22:02:42.029604 systemd-logind[1557]: Session 43 logged out. Waiting for processes to exit.
Sep 9 22:02:42.034015 systemd-logind[1557]: Removed session 43.
Sep 9 22:02:43.069992 containerd[1576]: time="2025-09-09T22:02:43.069922448Z" level=info msg="TaskExit event in podsandbox handler container_id:\"411d97edd1dbe41be78c267c65930cb6352ef988764faa76cce1f84f6dc9ce68\" id:\"02c4a20ea07192e0bb83d88f93b8434c3c038e97ab7085456c6e2fb822db5cc0\" pid:6839 exited_at:{seconds:1757455363 nanos:69326816}"
Sep 9 22:02:43.727434 kubelet[2840]: E0909 22:02:43.726960 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 22:02:47.031003 systemd[1]: Started sshd@43-10.0.0.35:22-10.0.0.1:55080.service - OpenSSH per-connection server daemon (10.0.0.1:55080).
Sep 9 22:02:47.299242 sshd[6855]: Accepted publickey for core from 10.0.0.1 port 55080 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8
Sep 9 22:02:47.310319 sshd-session[6855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 22:02:47.327671 systemd-logind[1557]: New session 44 of user core.
Sep 9 22:02:47.345147 systemd[1]: Started session-44.scope - Session 44 of User core.
Sep 9 22:02:47.819106 sshd[6859]: Connection closed by 10.0.0.1 port 55080
Sep 9 22:02:47.819935 sshd-session[6855]: pam_unix(sshd:session): session closed for user core
Sep 9 22:02:47.831373 systemd[1]: sshd@43-10.0.0.35:22-10.0.0.1:55080.service: Deactivated successfully.
Sep 9 22:02:47.838022 systemd[1]: session-44.scope: Deactivated successfully.
Sep 9 22:02:47.847999 systemd-logind[1557]: Session 44 logged out. Waiting for processes to exit.
Sep 9 22:02:47.849861 systemd-logind[1557]: Removed session 44.