Nov 1 09:59:56.091140 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Sat Nov 1 08:12:41 -00 2025
Nov 1 09:59:56.091174 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=91cbcb3658f876d239d31cc29b206c4e950f20e536a8e14bd58a23c6f0ecf128
Nov 1 09:59:56.091187 kernel: BIOS-provided physical RAM map:
Nov 1 09:59:56.091200 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 1 09:59:56.091209 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 1 09:59:56.091218 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 1 09:59:56.091229 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 1 09:59:56.091238 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 1 09:59:56.091253 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 1 09:59:56.091263 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 1 09:59:56.091273 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Nov 1 09:59:56.091286 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 1 09:59:56.091295 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 1 09:59:56.091305 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 1 09:59:56.091318 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 1 09:59:56.091328 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 1 09:59:56.091344 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 1 09:59:56.091355 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 1 09:59:56.091366 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 1 09:59:56.091376 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 1 09:59:56.091387 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 1 09:59:56.091398 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 1 09:59:56.091408 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 1 09:59:56.091419 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 09:59:56.091429 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 1 09:59:56.091440 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 1 09:59:56.091453 kernel: NX (Execute Disable) protection: active
Nov 1 09:59:56.091463 kernel: APIC: Static calls initialized
Nov 1 09:59:56.091474 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Nov 1 09:59:56.091484 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Nov 1 09:59:56.091495 kernel: extended physical RAM map:
Nov 1 09:59:56.091505 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 1 09:59:56.091516 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 1 09:59:56.091526 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 1 09:59:56.091537 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 1 09:59:56.091547 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 1 09:59:56.091558 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 1 09:59:56.091571 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 1 09:59:56.091581 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Nov 1 09:59:56.091592 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Nov 1 09:59:56.091607 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Nov 1 09:59:56.091620 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Nov 1 09:59:56.091631 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Nov 1 09:59:56.091642 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 1 09:59:56.091652 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 1 09:59:56.091663 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 1 09:59:56.091673 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 1 09:59:56.091683 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 1 09:59:56.091694 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 1 09:59:56.091704 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 1 09:59:56.091718 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 1 09:59:56.091728 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 1 09:59:56.091748 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 1 09:59:56.091794 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 1 09:59:56.091807 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 1 09:59:56.091815 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 09:59:56.091823 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 1 09:59:56.091830 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 1 09:59:56.091842 kernel: efi: EFI v2.7 by EDK II
Nov 1 09:59:56.091869 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Nov 1 09:59:56.091877 kernel: random: crng init done
Nov 1 09:59:56.091893 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Nov 1 09:59:56.091901 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Nov 1 09:59:56.091910 kernel: secureboot: Secure boot disabled
Nov 1 09:59:56.091918 kernel: SMBIOS 2.8 present.
Nov 1 09:59:56.091926 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Nov 1 09:59:56.091934 kernel: DMI: Memory slots populated: 1/1
Nov 1 09:59:56.091941 kernel: Hypervisor detected: KVM
Nov 1 09:59:56.091949 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 1 09:59:56.091956 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 09:59:56.091964 kernel: kvm-clock: using sched offset of 4775490765 cycles
Nov 1 09:59:56.091973 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 09:59:56.091983 kernel: tsc: Detected 2794.748 MHz processor
Nov 1 09:59:56.091992 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 09:59:56.092000 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 09:59:56.092008 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 1 09:59:56.092016 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 1 09:59:56.092024 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 09:59:56.092033 kernel: Using GB pages for direct mapping
Nov 1 09:59:56.092043 kernel: ACPI: Early table checksum verification disabled
Nov 1 09:59:56.092051 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 1 09:59:56.092059 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 1 09:59:56.092067 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 09:59:56.092075 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 09:59:56.092083 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 1 09:59:56.092091 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 09:59:56.092101 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 09:59:56.092109 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 09:59:56.092117 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 09:59:56.092125 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 1 09:59:56.092133 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 1 09:59:56.092141 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Nov 1 09:59:56.092149 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 1 09:59:56.092159 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 1 09:59:56.092167 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 1 09:59:56.092175 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 1 09:59:56.092183 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 1 09:59:56.092190 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 1 09:59:56.092198 kernel: No NUMA configuration found
Nov 1 09:59:56.092206 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Nov 1 09:59:56.092214 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Nov 1 09:59:56.092224 kernel: Zone ranges:
Nov 1 09:59:56.092232 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 09:59:56.092240 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Nov 1 09:59:56.092249 kernel: Normal empty
Nov 1 09:59:56.092256 kernel: Device empty
Nov 1 09:59:56.092264 kernel: Movable zone start for each node
Nov 1 09:59:56.092272 kernel: Early memory node ranges
Nov 1 09:59:56.092282 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 1 09:59:56.092293 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 1 09:59:56.092301 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 1 09:59:56.092308 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Nov 1 09:59:56.092316 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Nov 1 09:59:56.092324 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Nov 1 09:59:56.092332 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Nov 1 09:59:56.092340 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Nov 1 09:59:56.092353 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Nov 1 09:59:56.092361 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 09:59:56.092375 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 1 09:59:56.092386 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 1 09:59:56.092394 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 09:59:56.092402 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Nov 1 09:59:56.092410 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Nov 1 09:59:56.092419 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 1 09:59:56.092427 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Nov 1 09:59:56.092437 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Nov 1 09:59:56.092446 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 09:59:56.092454 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 09:59:56.092462 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 09:59:56.092473 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 09:59:56.092481 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 09:59:56.092489 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 09:59:56.092498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 09:59:56.092509 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 09:59:56.092533 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 09:59:56.092551 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 09:59:56.092563 kernel: TSC deadline timer available
Nov 1 09:59:56.092571 kernel: CPU topo: Max. logical packages: 1
Nov 1 09:59:56.092580 kernel: CPU topo: Max. logical dies: 1
Nov 1 09:59:56.092588 kernel: CPU topo: Max. dies per package: 1
Nov 1 09:59:56.092596 kernel: CPU topo: Max. threads per core: 1
Nov 1 09:59:56.092604 kernel: CPU topo: Num. cores per package: 4
Nov 1 09:59:56.092612 kernel: CPU topo: Num. threads per package: 4
Nov 1 09:59:56.092620 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 1 09:59:56.092631 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 09:59:56.092639 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 1 09:59:56.092647 kernel: kvm-guest: setup PV sched yield
Nov 1 09:59:56.092656 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Nov 1 09:59:56.092664 kernel: Booting paravirtualized kernel on KVM
Nov 1 09:59:56.092672 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 09:59:56.092681 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 1 09:59:56.092691 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 1 09:59:56.092699 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 1 09:59:56.092708 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 1 09:59:56.092716 kernel: kvm-guest: PV spinlocks enabled
Nov 1 09:59:56.092724 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 09:59:56.092746 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=91cbcb3658f876d239d31cc29b206c4e950f20e536a8e14bd58a23c6f0ecf128
Nov 1 09:59:56.092755 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 09:59:56.092766 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 09:59:56.092775 kernel: Fallback order for Node 0: 0
Nov 1 09:59:56.092783 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Nov 1 09:59:56.092791 kernel: Policy zone: DMA32
Nov 1 09:59:56.092800 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 09:59:56.092808 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 1 09:59:56.092817 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 1 09:59:56.092827 kernel: ftrace: allocated 157 pages with 5 groups
Nov 1 09:59:56.092835 kernel: Dynamic Preempt: voluntary
Nov 1 09:59:56.092844 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 09:59:56.092866 kernel: rcu: RCU event tracing is enabled.
Nov 1 09:59:56.092874 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 1 09:59:56.092883 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 09:59:56.092892 kernel: Rude variant of Tasks RCU enabled.
Nov 1 09:59:56.092900 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 09:59:56.092911 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 09:59:56.092919 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 1 09:59:56.092930 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 09:59:56.092939 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 09:59:56.092947 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 09:59:56.092955 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 1 09:59:56.092964 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 09:59:56.092974 kernel: Console: colour dummy device 80x25
Nov 1 09:59:56.092983 kernel: printk: legacy console [ttyS0] enabled
Nov 1 09:59:56.092991 kernel: ACPI: Core revision 20240827
Nov 1 09:59:56.092999 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 09:59:56.093008 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 09:59:56.093016 kernel: x2apic enabled
Nov 1 09:59:56.093024 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 09:59:56.093035 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 1 09:59:56.093043 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 1 09:59:56.093051 kernel: kvm-guest: setup PV IPIs
Nov 1 09:59:56.093060 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 09:59:56.093068 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 1 09:59:56.093076 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 1 09:59:56.093085 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 09:59:56.093095 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 1 09:59:56.093103 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 1 09:59:56.093112 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 09:59:56.093120 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 09:59:56.093128 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 09:59:56.093136 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 1 09:59:56.093145 kernel: active return thunk: retbleed_return_thunk
Nov 1 09:59:56.093155 kernel: RETBleed: Mitigation: untrained return thunk
Nov 1 09:59:56.093166 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 09:59:56.093175 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 09:59:56.093183 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 1 09:59:56.093193 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 1 09:59:56.093203 kernel: active return thunk: srso_return_thunk
Nov 1 09:59:56.093212 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 1 09:59:56.093225 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 09:59:56.093233 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 09:59:56.093241 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 09:59:56.093250 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 09:59:56.093258 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 1 09:59:56.093266 kernel: Freeing SMP alternatives memory: 32K
Nov 1 09:59:56.093275 kernel: pid_max: default: 32768 minimum: 301
Nov 1 09:59:56.093285 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 1 09:59:56.093293 kernel: landlock: Up and running.
Nov 1 09:59:56.093301 kernel: SELinux: Initializing.
Nov 1 09:59:56.093310 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 09:59:56.093318 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 09:59:56.093327 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 1 09:59:56.093335 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 1 09:59:56.093345 kernel: ... version: 0
Nov 1 09:59:56.093354 kernel: ... bit width: 48
Nov 1 09:59:56.093362 kernel: ... generic registers: 6
Nov 1 09:59:56.093370 kernel: ... value mask: 0000ffffffffffff
Nov 1 09:59:56.093378 kernel: ... max period: 00007fffffffffff
Nov 1 09:59:56.093386 kernel: ... fixed-purpose events: 0
Nov 1 09:59:56.093395 kernel: ... event mask: 000000000000003f
Nov 1 09:59:56.093405 kernel: signal: max sigframe size: 1776
Nov 1 09:59:56.093413 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 09:59:56.093421 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 09:59:56.093432 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 1 09:59:56.093441 kernel: smp: Bringing up secondary CPUs ...
Nov 1 09:59:56.093449 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 09:59:56.093457 kernel: .... node #0, CPUs: #1 #2 #3
Nov 1 09:59:56.093468 kernel: smp: Brought up 1 node, 4 CPUs
Nov 1 09:59:56.093477 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 1 09:59:56.093485 kernel: Memory: 2441096K/2565800K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15356K init, 2688K bss, 118764K reserved, 0K cma-reserved)
Nov 1 09:59:56.093494 kernel: devtmpfs: initialized
Nov 1 09:59:56.093502 kernel: x86/mm: Memory block size: 128MB
Nov 1 09:59:56.093510 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 1 09:59:56.093519 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 1 09:59:56.093529 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Nov 1 09:59:56.093538 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 1 09:59:56.093546 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Nov 1 09:59:56.093554 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 1 09:59:56.093563 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 09:59:56.093571 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 1 09:59:56.093580 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 09:59:56.093590 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 09:59:56.093598 kernel: audit: initializing netlink subsys (disabled)
Nov 1 09:59:56.093606 kernel: audit: type=2000 audit(1761991193.063:1): state=initialized audit_enabled=0 res=1
Nov 1 09:59:56.093615 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 09:59:56.093623 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 09:59:56.093631 kernel: cpuidle: using governor menu
Nov 1 09:59:56.093639 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 09:59:56.093650 kernel: dca service started, version 1.12.1
Nov 1 09:59:56.093658 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Nov 1 09:59:56.093666 kernel: PCI: Using configuration type 1 for base access
Nov 1 09:59:56.093675 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 09:59:56.093683 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 09:59:56.093691 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 09:59:56.093700 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 09:59:56.093710 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 09:59:56.093718 kernel: ACPI: Added _OSI(Module Device)
Nov 1 09:59:56.093726 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 09:59:56.093743 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 09:59:56.093751 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 09:59:56.093759 kernel: ACPI: Interpreter enabled
Nov 1 09:59:56.093767 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 1 09:59:56.093776 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 09:59:56.093787 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 09:59:56.093795 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 09:59:56.093803 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 09:59:56.093812 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 09:59:56.094073 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 09:59:56.094257 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 1 09:59:56.094440 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 1 09:59:56.094456 kernel: PCI host bridge to bus 0000:00
Nov 1 09:59:56.094639 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 09:59:56.094810 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 09:59:56.094986 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 09:59:56.095150 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Nov 1 09:59:56.095313 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Nov 1 09:59:56.095470 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Nov 1 09:59:56.095628 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 09:59:56.095831 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 1 09:59:56.096031 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 1 09:59:56.096209 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Nov 1 09:59:56.096389 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Nov 1 09:59:56.096560 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Nov 1 09:59:56.096740 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 09:59:56.096946 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 1 09:59:56.097142 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Nov 1 09:59:56.097323 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Nov 1 09:59:56.097495 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Nov 1 09:59:56.097677 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 1 09:59:56.097877 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Nov 1 09:59:56.098053 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Nov 1 09:59:56.098232 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Nov 1 09:59:56.098412 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 1 09:59:56.098585 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Nov 1 09:59:56.098765 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Nov 1 09:59:56.098954 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Nov 1 09:59:56.099128 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Nov 1 09:59:56.099315 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 1 09:59:56.099486 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 09:59:56.099666 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 1 09:59:56.099865 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Nov 1 09:59:56.100042 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Nov 1 09:59:56.100260 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 1 09:59:56.100433 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Nov 1 09:59:56.100445 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 09:59:56.100454 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 09:59:56.100462 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 09:59:56.100470 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 09:59:56.100479 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 09:59:56.100491 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 09:59:56.100500 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 09:59:56.100508 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 09:59:56.100517 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 09:59:56.100526 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 09:59:56.100534 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 09:59:56.100542 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 09:59:56.100553 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 09:59:56.100561 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 09:59:56.100569 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 09:59:56.100578 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 09:59:56.100586 kernel: iommu: Default domain type: Translated
Nov 1 09:59:56.100594 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 09:59:56.100603 kernel: efivars: Registered efivars operations
Nov 1 09:59:56.100613 kernel: PCI: Using ACPI for IRQ routing
Nov 1 09:59:56.100621 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 09:59:56.100630 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 1 09:59:56.100638 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Nov 1 09:59:56.100646 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Nov 1 09:59:56.100655 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Nov 1 09:59:56.100663 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Nov 1 09:59:56.100673 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Nov 1 09:59:56.100682 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Nov 1 09:59:56.100690 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Nov 1 09:59:56.100887 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 09:59:56.101060 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 09:59:56.101230 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 09:59:56.101245 kernel: vgaarb: loaded
Nov 1 09:59:56.101254 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 09:59:56.101262 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 09:59:56.101271 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 09:59:56.101279 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 09:59:56.101288 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 09:59:56.101297 kernel: pnp: PnP ACPI init
Nov 1 09:59:56.101495 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Nov 1 09:59:56.101512 kernel: pnp: PnP ACPI: found 6 devices
Nov 1 09:59:56.101521 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 09:59:56.101530 kernel: NET: Registered PF_INET protocol family
Nov 1 09:59:56.101539 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 09:59:56.101548 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 1 09:59:56.101556 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 09:59:56.101567 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 09:59:56.101576 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 1 09:59:56.101585 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 1 09:59:56.101593 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 09:59:56.101603 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 09:59:56.101611 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 09:59:56.101620 kernel: NET: Registered PF_XDP protocol family
Nov 1 09:59:56.101805 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Nov 1 09:59:56.101993 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Nov 1 09:59:56.102155 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 09:59:56.102332 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 09:59:56.102493 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 09:59:56.102651 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Nov 1 09:59:56.102825 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Nov 1 09:59:56.103003 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Nov 1 09:59:56.103015 kernel: PCI: CLS 0 bytes, default 64
Nov 1 09:59:56.103025 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 1 09:59:56.103040 kernel: Initialise system trusted keyrings
Nov 1 09:59:56.103048 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 1 09:59:56.103057 kernel: Key type asymmetric registered
Nov 1 09:59:56.103066 kernel: Asymmetric key parser 'x509' registered
Nov 1 09:59:56.103075 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 1 09:59:56.103084 kernel: io scheduler mq-deadline registered
Nov 1 09:59:56.103092 kernel: io scheduler kyber registered
Nov 1 09:59:56.103101 kernel: io scheduler bfq registered
Nov 1 09:59:56.103112 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 09:59:56.103121 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 1 09:59:56.103130 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 1 09:59:56.103139 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 1 09:59:56.103147 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 09:59:56.103156 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 09:59:56.103165 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 09:59:56.103176 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 09:59:56.103185 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 09:59:56.103369 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 1 09:59:56.103383 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 09:59:56.103546 kernel: rtc_cmos 00:04: registered as rtc0
Nov 1 09:59:56.103710 kernel: rtc_cmos 00:04: setting system clock to 2025-11-01T09:59:54 UTC
(1761991194) Nov 1 09:59:56.103941 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Nov 1 09:59:56.103971 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 1 09:59:56.103991 kernel: efifb: probing for efifb Nov 1 09:59:56.104011 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Nov 1 09:59:56.104030 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Nov 1 09:59:56.104049 kernel: efifb: scrolling: redraw Nov 1 09:59:56.104068 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 1 09:59:56.104095 kernel: Console: switching to colour frame buffer device 160x50 Nov 1 09:59:56.104119 kernel: fb0: EFI VGA frame buffer device Nov 1 09:59:56.104138 kernel: pstore: Using crash dump compression: deflate Nov 1 09:59:56.104157 kernel: pstore: Registered efi_pstore as persistent store backend Nov 1 09:59:56.104177 kernel: NET: Registered PF_INET6 protocol family Nov 1 09:59:56.104196 kernel: Segment Routing with IPv6 Nov 1 09:59:56.104216 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 09:59:56.104241 kernel: NET: Registered PF_PACKET protocol family Nov 1 09:59:56.104262 kernel: Key type dns_resolver registered Nov 1 09:59:56.104281 kernel: IPI shorthand broadcast: enabled Nov 1 09:59:56.104300 kernel: sched_clock: Marking stable (1879003146, 285370645)->(2220301618, -55927827) Nov 1 09:59:56.104319 kernel: registered taskstats version 1 Nov 1 09:59:56.104338 kernel: Loading compiled-in X.509 certificates Nov 1 09:59:56.104357 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: d8ad6d63e9d0f6e32055e659cacaf9092255a45e' Nov 1 09:59:56.104382 kernel: Demotion targets for Node 0: null Nov 1 09:59:56.104402 kernel: Key type .fscrypt registered Nov 1 09:59:56.104421 kernel: Key type fscrypt-provisioning registered Nov 1 09:59:56.104441 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 1 09:59:56.104460 kernel: ima: Allocated hash algorithm: sha1 Nov 1 09:59:56.104483 kernel: ima: No architecture policies found Nov 1 09:59:56.104503 kernel: clk: Disabling unused clocks Nov 1 09:59:56.104528 kernel: Freeing unused kernel image (initmem) memory: 15356K Nov 1 09:59:56.104549 kernel: Write protecting the kernel read-only data: 45056k Nov 1 09:59:56.104569 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Nov 1 09:59:56.104592 kernel: Run /init as init process Nov 1 09:59:56.104607 kernel: with arguments: Nov 1 09:59:56.104631 kernel: /init Nov 1 09:59:56.104650 kernel: with environment: Nov 1 09:59:56.104669 kernel: HOME=/ Nov 1 09:59:56.104696 kernel: TERM=linux Nov 1 09:59:56.104715 kernel: SCSI subsystem initialized Nov 1 09:59:56.104741 kernel: libata version 3.00 loaded. Nov 1 09:59:56.104941 kernel: ahci 0000:00:1f.2: version 3.0 Nov 1 09:59:56.104954 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 1 09:59:56.105132 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 1 09:59:56.105311 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 1 09:59:56.105488 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 1 09:59:56.105682 kernel: scsi host0: ahci Nov 1 09:59:56.105895 kernel: scsi host1: ahci Nov 1 09:59:56.106082 kernel: scsi host2: ahci Nov 1 09:59:56.106266 kernel: scsi host3: ahci Nov 1 09:59:56.106455 kernel: scsi host4: ahci Nov 1 09:59:56.106640 kernel: scsi host5: ahci Nov 1 09:59:56.106653 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Nov 1 09:59:56.106662 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Nov 1 09:59:56.106671 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Nov 1 09:59:56.106679 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Nov 1 09:59:56.106691 kernel: ata5: SATA max UDMA/133 abar 
m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Nov 1 09:59:56.106700 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Nov 1 09:59:56.106709 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 09:59:56.106718 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 09:59:56.106726 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 1 09:59:56.106743 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 09:59:56.106752 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 1 09:59:56.106763 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 1 09:59:56.106772 kernel: ata3.00: LPM support broken, forcing max_power Nov 1 09:59:56.106781 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 1 09:59:56.106789 kernel: ata3.00: applying bridge limits Nov 1 09:59:56.106798 kernel: ata3.00: LPM support broken, forcing max_power Nov 1 09:59:56.106807 kernel: ata3.00: configured for UDMA/100 Nov 1 09:59:56.107027 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 1 09:59:56.107222 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 1 09:59:56.107425 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 1 09:59:56.107438 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 09:59:56.107447 kernel: GPT:16515071 != 27000831 Nov 1 09:59:56.107456 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 09:59:56.107465 kernel: GPT:16515071 != 27000831 Nov 1 09:59:56.107477 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 1 09:59:56.107486 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 09:59:56.107688 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 1 09:59:56.107705 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 1 09:59:56.107966 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 1 09:59:56.107982 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 09:59:56.107991 kernel: device-mapper: uevent: version 1.0.3 Nov 1 09:59:56.108004 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 1 09:59:56.108013 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 1 09:59:56.108021 kernel: raid6: avx2x4 gen() 29973 MB/s Nov 1 09:59:56.108030 kernel: raid6: avx2x2 gen() 30442 MB/s Nov 1 09:59:56.108039 kernel: raid6: avx2x1 gen() 25826 MB/s Nov 1 09:59:56.108048 kernel: raid6: using algorithm avx2x2 gen() 30442 MB/s Nov 1 09:59:56.108056 kernel: raid6: .... 
xor() 19847 MB/s, rmw enabled Nov 1 09:59:56.108067 kernel: raid6: using avx2x2 recovery algorithm Nov 1 09:59:56.108076 kernel: xor: automatically using best checksumming function avx Nov 1 09:59:56.108085 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 09:59:56.108094 kernel: BTRFS: device fsid 8763e8a0-bf7f-4ffe-acc8-da149b03dd0b devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (181) Nov 1 09:59:56.108103 kernel: BTRFS info (device dm-0): first mount of filesystem 8763e8a0-bf7f-4ffe-acc8-da149b03dd0b Nov 1 09:59:56.108112 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 09:59:56.108121 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 09:59:56.108133 kernel: BTRFS info (device dm-0): enabling free space tree Nov 1 09:59:56.108142 kernel: loop: module loaded Nov 1 09:59:56.108151 kernel: loop0: detected capacity change from 0 to 100136 Nov 1 09:59:56.108160 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 09:59:56.108170 systemd[1]: Successfully made /usr/ read-only. Nov 1 09:59:56.108181 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 1 09:59:56.108193 systemd[1]: Detected virtualization kvm. Nov 1 09:59:56.108203 systemd[1]: Detected architecture x86-64. Nov 1 09:59:56.108211 systemd[1]: Running in initrd. Nov 1 09:59:56.108220 systemd[1]: No hostname configured, using default hostname. Nov 1 09:59:56.108230 systemd[1]: Hostname set to . Nov 1 09:59:56.108239 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 1 09:59:56.108248 systemd[1]: Queued start job for default target initrd.target. 
Nov 1 09:59:56.108260 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 1 09:59:56.108269 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 09:59:56.108278 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 09:59:56.108288 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 09:59:56.108298 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 09:59:56.108309 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 09:59:56.108319 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 09:59:56.108329 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 09:59:56.108338 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 09:59:56.108347 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 1 09:59:56.108357 systemd[1]: Reached target paths.target - Path Units. Nov 1 09:59:56.108366 systemd[1]: Reached target slices.target - Slice Units. Nov 1 09:59:56.108377 systemd[1]: Reached target swap.target - Swaps. Nov 1 09:59:56.108386 systemd[1]: Reached target timers.target - Timer Units. Nov 1 09:59:56.108395 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 09:59:56.108404 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 09:59:56.108413 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 09:59:56.108423 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 1 09:59:56.108432 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 1 09:59:56.108443 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 09:59:56.108452 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 09:59:56.108462 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 09:59:56.108471 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 09:59:56.108481 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 1 09:59:56.108490 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 09:59:56.108502 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 09:59:56.108511 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 1 09:59:56.108521 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 09:59:56.108530 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 09:59:56.108539 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 09:59:56.108548 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 09:59:56.108560 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 09:59:56.108570 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 09:59:56.108579 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 09:59:56.108588 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 09:59:56.108623 systemd-journald[317]: Collecting audit messages is disabled. Nov 1 09:59:56.108646 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Nov 1 09:59:56.108655 systemd-journald[317]: Journal started Nov 1 09:59:56.108682 systemd-journald[317]: Runtime Journal (/run/log/journal/b07e0e7f915245698a3ffab7da615c2a) is 6M, max 48.1M, 42M free. Nov 1 09:59:56.111695 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 09:59:56.111800 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 09:59:56.117357 kernel: Bridge firewalling registered Nov 1 09:59:56.114552 systemd-modules-load[319]: Inserted module 'br_netfilter' Nov 1 09:59:56.116129 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 09:59:56.122771 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 09:59:56.134033 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 09:59:56.137420 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 09:59:56.144288 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 09:59:56.148206 systemd-tmpfiles[336]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 1 09:59:56.148950 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 09:59:56.153550 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 09:59:56.163058 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 09:59:56.177363 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 09:59:56.179986 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 09:59:56.192041 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 09:59:56.194229 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 1 09:59:56.215763 dracut-cmdline[360]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=91cbcb3658f876d239d31cc29b206c4e950f20e536a8e14bd58a23c6f0ecf128 Nov 1 09:59:56.248906 systemd-resolved[353]: Positive Trust Anchors: Nov 1 09:59:56.248921 systemd-resolved[353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 09:59:56.248925 systemd-resolved[353]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 1 09:59:56.248955 systemd-resolved[353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 09:59:56.274749 systemd-resolved[353]: Defaulting to hostname 'linux'. Nov 1 09:59:56.276073 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 09:59:56.276903 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 09:59:56.362895 kernel: Loading iSCSI transport class v2.0-870. 
Nov 1 09:59:56.378904 kernel: iscsi: registered transport (tcp) Nov 1 09:59:56.403979 kernel: iscsi: registered transport (qla4xxx) Nov 1 09:59:56.404079 kernel: QLogic iSCSI HBA Driver Nov 1 09:59:56.433518 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 09:59:56.468512 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 09:59:56.474610 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 1 09:59:56.536862 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 09:59:56.539344 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 09:59:56.541508 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 09:59:56.582011 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 09:59:56.585829 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 09:59:56.616196 systemd-udevd[599]: Using default interface naming scheme 'v257'. Nov 1 09:59:56.630331 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 09:59:56.635051 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 09:59:56.664141 dracut-pre-trigger[658]: rd.md=0: removing MD RAID activation Nov 1 09:59:56.680117 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 09:59:56.682653 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 09:59:56.704580 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 09:59:56.706608 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 1 09:59:56.740331 systemd-networkd[719]: lo: Link UP Nov 1 09:59:56.740342 systemd-networkd[719]: lo: Gained carrier Nov 1 09:59:56.741008 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 09:59:56.742461 systemd[1]: Reached target network.target - Network. Nov 1 09:59:56.801231 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 09:59:56.806689 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 1 09:59:56.863210 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 1 09:59:56.875445 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 1 09:59:56.893158 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 1 09:59:56.901210 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 09:59:56.912234 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 09:59:56.918833 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 09:59:56.926664 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 1 09:59:56.932293 systemd-networkd[719]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 1 09:59:56.932306 systemd-networkd[719]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 09:59:56.933498 systemd-networkd[719]: eth0: Link UP Nov 1 09:59:56.941649 kernel: AES CTR mode by8 optimization enabled Nov 1 09:59:56.934089 systemd-networkd[719]: eth0: Gained carrier Nov 1 09:59:56.934099 systemd-networkd[719]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 1 09:59:56.940544 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 1 09:59:56.940730 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 09:59:57.239680 disk-uuid[816]: Primary Header is updated. Nov 1 09:59:57.239680 disk-uuid[816]: Secondary Entries is updated. Nov 1 09:59:57.239680 disk-uuid[816]: Secondary Header is updated. Nov 1 09:59:56.947545 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 09:59:56.953947 systemd-networkd[719]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 09:59:56.958824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 09:59:57.253544 systemd-resolved[353]: Detected conflict on linux IN A 10.0.0.25 Nov 1 09:59:57.253557 systemd-resolved[353]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. Nov 1 09:59:57.289226 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 09:59:57.331389 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 09:59:57.333088 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 09:59:57.335479 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 09:59:57.336295 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 09:59:57.343924 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 09:59:57.371266 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 09:59:58.279630 disk-uuid[821]: Warning: The kernel is still using the old partition table. Nov 1 09:59:58.279630 disk-uuid[821]: The new table will be used at the next reboot or after you Nov 1 09:59:58.279630 disk-uuid[821]: run partprobe(8) or kpartx(8) Nov 1 09:59:58.279630 disk-uuid[821]: The operation has completed successfully. Nov 1 09:59:58.299554 systemd[1]: disk-uuid.service: Deactivated successfully. 
Nov 1 09:59:58.299729 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 09:59:58.305318 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 09:59:58.348394 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (868) Nov 1 09:59:58.348444 kernel: BTRFS info (device vda6): first mount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517 Nov 1 09:59:58.350869 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 09:59:58.353834 kernel: BTRFS info (device vda6): turning on async discard Nov 1 09:59:58.353870 kernel: BTRFS info (device vda6): enabling free space tree Nov 1 09:59:58.360872 kernel: BTRFS info (device vda6): last unmount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517 Nov 1 09:59:58.362236 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 09:59:58.367092 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 1 09:59:58.640051 ignition[887]: Ignition 2.22.0 Nov 1 09:59:58.640063 ignition[887]: Stage: fetch-offline Nov 1 09:59:58.640132 ignition[887]: no configs at "/usr/lib/ignition/base.d" Nov 1 09:59:58.640150 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 09:59:58.644091 ignition[887]: parsed url from cmdline: "" Nov 1 09:59:58.644098 ignition[887]: no config URL provided Nov 1 09:59:58.645714 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 09:59:58.645740 ignition[887]: no config at "/usr/lib/ignition/user.ign" Nov 1 09:59:58.645812 ignition[887]: op(1): [started] loading QEMU firmware config module Nov 1 09:59:58.646824 ignition[887]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 1 09:59:58.659414 ignition[887]: op(1): [finished] loading QEMU firmware config module Nov 1 09:59:58.744441 ignition[887]: parsing config with SHA512: b51da5abfea745a945a27178203d7106dc7cea5ef36af2bae7e33b0280c98c8ab97d4c930a42ad8d5bea2e4179a91c513df1a3d0a70b0834a88d82d48a35a457 Nov 1 09:59:58.792552 unknown[887]: fetched base config from "system" Nov 1 09:59:58.792566 unknown[887]: fetched user config from "qemu" Nov 1 09:59:58.796384 ignition[887]: fetch-offline: fetch-offline passed Nov 1 09:59:58.796554 ignition[887]: Ignition finished successfully Nov 1 09:59:58.800944 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 09:59:58.801865 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 09:59:58.802978 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 1 09:59:58.814040 systemd-networkd[719]: eth0: Gained IPv6LL Nov 1 09:59:58.850815 ignition[899]: Ignition 2.22.0 Nov 1 09:59:58.850829 ignition[899]: Stage: kargs Nov 1 09:59:58.850995 ignition[899]: no configs at "/usr/lib/ignition/base.d" Nov 1 09:59:58.851010 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 09:59:58.851709 ignition[899]: kargs: kargs passed Nov 1 09:59:58.851757 ignition[899]: Ignition finished successfully Nov 1 09:59:58.862045 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 09:59:58.866463 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 09:59:58.910505 ignition[908]: Ignition 2.22.0 Nov 1 09:59:58.910522 ignition[908]: Stage: disks Nov 1 09:59:58.910677 ignition[908]: no configs at "/usr/lib/ignition/base.d" Nov 1 09:59:58.910688 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 09:59:58.912139 ignition[908]: disks: disks passed Nov 1 09:59:58.912201 ignition[908]: Ignition finished successfully Nov 1 09:59:58.920556 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 1 09:59:58.921452 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 09:59:58.924283 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 09:59:58.924835 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 09:59:58.931397 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 09:59:58.937178 systemd[1]: Reached target basic.target - Basic System. Nov 1 09:59:58.941517 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 1 09:59:58.978575 systemd-fsck[918]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 1 09:59:58.986611 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 09:59:58.988784 systemd[1]: Mounting sysroot.mount - /sysroot... 
Nov 1 09:59:59.105880 kernel: EXT4-fs (vda9): mounted filesystem 9a0b584a-8c68-48a6-a0f9-92613ad0f15d r/w with ordered data mode. Quota mode: none. Nov 1 09:59:59.106593 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 09:59:59.109873 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 09:59:59.114874 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 09:59:59.117689 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 1 09:59:59.118592 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 1 09:59:59.118629 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 09:59:59.118666 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 09:59:59.139424 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 09:59:59.145546 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (926) Nov 1 09:59:59.145570 kernel: BTRFS info (device vda6): first mount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517 Nov 1 09:59:59.145582 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 09:59:59.149160 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 1 09:59:59.154624 kernel: BTRFS info (device vda6): turning on async discard Nov 1 09:59:59.154641 kernel: BTRFS info (device vda6): enabling free space tree Nov 1 09:59:59.155688 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 1 09:59:59.224632 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 09:59:59.231925 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory Nov 1 09:59:59.237193 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 09:59:59.243077 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 09:59:59.347887 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 09:59:59.350103 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 1 09:59:59.354410 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 09:59:59.469777 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 1 09:59:59.473292 kernel: BTRFS info (device vda6): last unmount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517 Nov 1 09:59:59.484040 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 09:59:59.506609 ignition[1040]: INFO : Ignition 2.22.0 Nov 1 09:59:59.506609 ignition[1040]: INFO : Stage: mount Nov 1 09:59:59.509423 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 09:59:59.509423 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 09:59:59.509423 ignition[1040]: INFO : mount: mount passed Nov 1 09:59:59.509423 ignition[1040]: INFO : Ignition finished successfully Nov 1 09:59:59.510627 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 09:59:59.514025 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 09:59:59.548275 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 1 09:59:59.572835 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1052)
Nov 1 09:59:59.572902 kernel: BTRFS info (device vda6): first mount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517
Nov 1 09:59:59.572920 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 09:59:59.578115 kernel: BTRFS info (device vda6): turning on async discard
Nov 1 09:59:59.578141 kernel: BTRFS info (device vda6): enabling free space tree
Nov 1 09:59:59.580106 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 09:59:59.659259 ignition[1069]: INFO : Ignition 2.22.0
Nov 1 09:59:59.659259 ignition[1069]: INFO : Stage: files
Nov 1 09:59:59.662276 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 09:59:59.662276 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 09:59:59.662276 ignition[1069]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 09:59:59.667721 ignition[1069]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 09:59:59.667721 ignition[1069]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 09:59:59.672282 ignition[1069]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 09:59:59.672282 ignition[1069]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 09:59:59.676698 unknown[1069]: wrote ssh authorized keys file for user: core
Nov 1 09:59:59.678556 ignition[1069]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 09:59:59.681905 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 09:59:59.685016 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 1 09:59:59.733741 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 09:59:59.827704 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 09:59:59.827704 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 09:59:59.834206 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 09:59:59.834206 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 09:59:59.834206 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 09:59:59.834206 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 09:59:59.834206 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 09:59:59.834206 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 09:59:59.834206 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 09:59:59.834206 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 09:59:59.834206 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 09:59:59.834206 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 1 09:59:59.863738 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 1 09:59:59.863738 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 1 09:59:59.863738 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 1 10:00:00.271924 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 1 10:00:00.964308 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 1 10:00:00.964308 ignition[1069]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 1 10:00:00.970547 ignition[1069]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 10:00:00.970547 ignition[1069]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 10:00:00.970547 ignition[1069]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 1 10:00:00.970547 ignition[1069]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 1 10:00:00.970547 ignition[1069]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 1 10:00:00.970547 ignition[1069]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 1 10:00:00.970547 ignition[1069]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 1 10:00:00.970547 ignition[1069]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 1 10:00:01.010305 ignition[1069]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 1 10:00:01.017630 ignition[1069]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 1 10:00:01.020482 ignition[1069]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 1 10:00:01.020482 ignition[1069]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 10:00:01.020482 ignition[1069]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 10:00:01.020482 ignition[1069]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 10:00:01.020482 ignition[1069]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 10:00:01.020482 ignition[1069]: INFO : files: files passed
Nov 1 10:00:01.020482 ignition[1069]: INFO : Ignition finished successfully
Nov 1 10:00:01.038018 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 1 10:00:01.043265 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 1 10:00:01.045135 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 1 10:00:01.078612 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 10:00:01.078747 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 1 10:00:01.088074 initrd-setup-root-after-ignition[1099]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 1 10:00:01.093452 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 10:00:01.096372 initrd-setup-root-after-ignition[1102]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 10:00:01.099184 initrd-setup-root-after-ignition[1106]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 10:00:01.099444 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 10:00:01.106446 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 1 10:00:01.111249 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 1 10:00:01.187033 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 10:00:01.188824 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 1 10:00:01.193523 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 1 10:00:01.197189 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 1 10:00:01.201290 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 1 10:00:01.204958 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 1 10:00:01.243582 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 10:00:01.248615 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 1 10:00:01.274725 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 10:00:01.274959 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 1 10:00:01.275815 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 10:00:01.280711 systemd[1]: Stopped target timers.target - Timer Units.
Nov 1 10:00:01.284465 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 10:00:01.284582 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 10:00:01.289976 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 1 10:00:01.290802 systemd[1]: Stopped target basic.target - Basic System.
Nov 1 10:00:01.295678 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 1 10:00:01.298511 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 10:00:01.301967 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 1 10:00:01.302471 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 1 10:00:01.308974 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 1 10:00:01.312313 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 10:00:01.318597 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 1 10:00:01.322069 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 1 10:00:01.325378 systemd[1]: Stopped target swap.target - Swaps.
Nov 1 10:00:01.326455 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 10:00:01.326688 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 10:00:01.332040 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 1 10:00:01.333791 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 10:00:01.338562 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 1 10:00:01.342366 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 10:00:01.344009 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 10:00:01.344212 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 1 10:00:01.345151 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 10:00:01.345324 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 10:00:01.345910 systemd[1]: Stopped target paths.target - Path Units.
Nov 1 10:00:01.356491 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 10:00:01.360001 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 10:00:01.360909 systemd[1]: Stopped target slices.target - Slice Units.
Nov 1 10:00:01.364915 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 1 10:00:01.368063 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 10:00:01.368168 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 10:00:01.370171 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 10:00:01.370261 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 10:00:01.370764 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 10:00:01.370926 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 10:00:01.379596 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 10:00:01.379759 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 1 10:00:01.384148 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 1 10:00:01.389312 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 1 10:00:01.390280 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 1 10:00:01.390450 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 10:00:01.394461 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 10:00:01.394578 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 10:00:01.397743 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 10:00:01.397901 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 10:00:01.411924 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 10:00:01.492232 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 1 10:00:01.523724 ignition[1126]: INFO : Ignition 2.22.0
Nov 1 10:00:01.523724 ignition[1126]: INFO : Stage: umount
Nov 1 10:00:01.526738 ignition[1126]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 10:00:01.526738 ignition[1126]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 10:00:01.526738 ignition[1126]: INFO : umount: umount passed
Nov 1 10:00:01.526738 ignition[1126]: INFO : Ignition finished successfully
Nov 1 10:00:01.524874 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 10:00:01.533995 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 10:00:01.534200 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 1 10:00:01.538290 systemd[1]: Stopped target network.target - Network.
Nov 1 10:00:01.539429 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 10:00:01.539521 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 1 10:00:01.543297 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 10:00:01.543362 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 1 10:00:01.547785 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 10:00:01.547914 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 1 10:00:01.548808 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 1 10:00:01.548888 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 1 10:00:01.553812 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 1 10:00:01.557975 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 1 10:00:01.573665 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 10:00:01.573920 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 1 10:00:01.587983 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 10:00:01.588138 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 1 10:00:01.595293 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 1 10:00:01.597927 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 10:00:01.597970 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 10:00:01.600476 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 1 10:00:01.603412 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 10:00:01.603488 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 10:00:01.606355 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 10:00:01.606421 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 1 10:00:01.609788 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 10:00:01.609843 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 1 10:00:01.613426 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 10:00:01.618054 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 10:00:01.624162 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 1 10:00:01.625404 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 10:00:01.625528 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 1 10:00:01.646295 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 10:00:01.646562 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 10:00:01.652815 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 10:00:01.652966 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 1 10:00:01.654285 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 10:00:01.654332 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 10:00:01.658190 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 10:00:01.658261 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 10:00:01.663771 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 10:00:01.663826 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 1 10:00:01.668355 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 10:00:01.668439 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 10:00:01.674203 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 1 10:00:01.675432 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 1 10:00:01.675507 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 10:00:01.676299 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 10:00:01.676345 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 10:00:01.682658 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 1 10:00:01.682724 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 10:00:01.686443 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 10:00:01.686495 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 10:00:01.690229 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 10:00:01.690285 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 10:00:01.694462 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 10:00:01.696018 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 1 10:00:01.704881 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 10:00:01.705004 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 1 10:00:01.708631 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 1 10:00:01.712770 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 1 10:00:01.735064 systemd[1]: Switching root.
Nov 1 10:00:01.775523 systemd-journald[317]: Journal stopped
Nov 1 10:00:03.302428 systemd-journald[317]: Received SIGTERM from PID 1 (systemd).
Nov 1 10:00:03.302783 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 10:00:03.302798 kernel: SELinux: policy capability open_perms=1
Nov 1 10:00:03.302810 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 10:00:03.302827 kernel: SELinux: policy capability always_check_network=0
Nov 1 10:00:03.302839 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 10:00:03.302865 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 10:00:03.302883 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 10:00:03.302904 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 10:00:03.302917 kernel: SELinux: policy capability userspace_initial_context=0
Nov 1 10:00:03.302929 kernel: audit: type=1403 audit(1761991202.275:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 10:00:03.302947 systemd[1]: Successfully loaded SELinux policy in 71.424ms.
Nov 1 10:00:03.302966 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.406ms.
Nov 1 10:00:03.302980 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 1 10:00:03.302993 systemd[1]: Detected virtualization kvm.
Nov 1 10:00:03.303014 systemd[1]: Detected architecture x86-64.
Nov 1 10:00:03.303027 systemd[1]: Detected first boot.
Nov 1 10:00:03.303040 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 1 10:00:03.303055 zram_generator::config[1172]: No configuration found.
Nov 1 10:00:03.303069 kernel: Guest personality initialized and is inactive
Nov 1 10:00:03.303086 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 1 10:00:03.303106 kernel: Initialized host personality
Nov 1 10:00:03.303119 kernel: NET: Registered PF_VSOCK protocol family
Nov 1 10:00:03.303132 systemd[1]: Populated /etc with preset unit settings.
Nov 1 10:00:03.303145 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 1 10:00:03.303158 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 1 10:00:03.303171 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 1 10:00:03.303185 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 1 10:00:03.303207 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 1 10:00:03.303220 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 1 10:00:03.303233 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 1 10:00:03.303246 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 1 10:00:03.303260 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 1 10:00:03.303273 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 1 10:00:03.303289 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 1 10:00:03.303313 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 10:00:03.303327 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 10:00:03.303340 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 1 10:00:03.303353 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 1 10:00:03.303367 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 1 10:00:03.303381 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 10:00:03.303403 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 1 10:00:03.303417 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 10:00:03.303430 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 10:00:03.303443 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 1 10:00:03.303455 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 1 10:00:03.303468 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 1 10:00:03.303481 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 1 10:00:03.303507 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 10:00:03.303520 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 10:00:03.303543 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 10:00:03.303556 systemd[1]: Reached target swap.target - Swaps.
Nov 1 10:00:03.303570 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 1 10:00:03.303583 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 1 10:00:03.303596 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 1 10:00:03.303609 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 10:00:03.303631 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 10:00:03.303643 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 10:00:03.303656 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 1 10:00:03.303669 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 1 10:00:03.303684 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 1 10:00:03.303696 systemd[1]: Mounting media.mount - External Media Directory...
Nov 1 10:00:03.303714 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:00:03.303735 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 1 10:00:03.303748 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 1 10:00:03.303761 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 1 10:00:03.303775 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 10:00:03.303788 systemd[1]: Reached target machines.target - Containers.
Nov 1 10:00:03.303801 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 1 10:00:03.303822 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 10:00:03.303835 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 10:00:03.303859 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 1 10:00:03.303872 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 10:00:03.303885 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 10:00:03.303898 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 10:00:03.303911 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 1 10:00:03.303933 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 10:00:03.303946 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 10:00:03.303959 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 1 10:00:03.303972 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 1 10:00:03.303985 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 1 10:00:03.303999 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 1 10:00:03.304013 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 1 10:00:03.304034 kernel: fuse: init (API version 7.41)
Nov 1 10:00:03.304047 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 10:00:03.304060 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 10:00:03.304073 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 1 10:00:03.304085 kernel: ACPI: bus type drm_connector registered
Nov 1 10:00:03.304098 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 1 10:00:03.304112 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 1 10:00:03.304136 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 10:00:03.304150 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:00:03.304162 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 1 10:00:03.304194 systemd-journald[1253]: Collecting audit messages is disabled.
Nov 1 10:00:03.304226 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 1 10:00:03.304239 systemd-journald[1253]: Journal started
Nov 1 10:00:03.304261 systemd-journald[1253]: Runtime Journal (/run/log/journal/b07e0e7f915245698a3ffab7da615c2a) is 6M, max 48.1M, 42M free.
Nov 1 10:00:03.307343 systemd[1]: Mounted media.mount - External Media Directory.
Nov 1 10:00:02.938678 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 10:00:02.959049 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 1 10:00:02.959707 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 1 10:00:03.311100 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 10:00:03.314814 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 1 10:00:03.316929 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 1 10:00:03.319126 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 1 10:00:03.321143 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 1 10:00:03.323756 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 10:00:03.326455 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 10:00:03.326844 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 1 10:00:03.329419 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 10:00:03.329721 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 10:00:03.332050 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 10:00:03.332373 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 10:00:03.334565 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 10:00:03.334808 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 10:00:03.337509 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 10:00:03.337771 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 1 10:00:03.339819 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 10:00:03.340364 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 10:00:03.342824 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 10:00:03.346094 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 10:00:03.350651 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 1 10:00:03.353438 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 1 10:00:03.372010 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 1 10:00:03.374388 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 1 10:00:03.379445 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 1 10:00:03.382493 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 1 10:00:03.384547 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 10:00:03.384644 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 10:00:03.386753 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 1 10:00:03.389741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 10:00:03.396033 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 1 10:00:03.400160 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 1 10:00:03.404912 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 10:00:03.406040 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 1 10:00:03.408097 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 10:00:03.410986 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 10:00:03.414386 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 1 10:00:03.419057 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 10:00:03.422237 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 10:00:03.425709 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 1 10:00:03.428966 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 1 10:00:03.495134 systemd-journald[1253]: Time spent on flushing to /var/log/journal/b07e0e7f915245698a3ffab7da615c2a is 33.719ms for 1056 entries.
Nov 1 10:00:03.495134 systemd-journald[1253]: System Journal (/var/log/journal/b07e0e7f915245698a3ffab7da615c2a) is 8M, max 163.5M, 155.5M free.
Nov 1 10:00:03.546060 systemd-journald[1253]: Received client request to flush runtime journal.
Nov 1 10:00:03.546133 kernel: loop1: detected capacity change from 0 to 119080
Nov 1 10:00:03.505634 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 1 10:00:03.508674 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 1 10:00:03.513154 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 1 10:00:03.515899 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 10:00:03.523988 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Nov 1 10:00:03.524003 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Nov 1 10:00:03.543647 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 10:00:03.547710 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 1 10:00:03.553223 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 1 10:00:03.561067 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 1 10:00:03.569060 kernel: loop2: detected capacity change from 0 to 111544
Nov 1 10:00:03.602049 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 1 10:00:03.603988 kernel: loop3: detected capacity change from 0 to 229808
Nov 1 10:00:03.606281 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 10:00:03.610070 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 10:00:03.624291 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 1 10:00:03.632925 kernel: loop4: detected capacity change from 0 to 119080
Nov 1 10:00:03.637299 systemd-tmpfiles[1313]: ACLs are not supported, ignoring.
Nov 1 10:00:03.638904 systemd-tmpfiles[1313]: ACLs are not supported, ignoring.
Nov 1 10:00:03.642874 kernel: loop5: detected capacity change from 0 to 111544
Nov 1 10:00:03.645199 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 10:00:03.654064 kernel: loop6: detected capacity change from 0 to 229808
Nov 1 10:00:03.660170 (sd-merge)[1317]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Nov 1 10:00:03.664824 (sd-merge)[1317]: Merged extensions into '/usr'.
Nov 1 10:00:03.669618 systemd[1]: Reload requested from client PID 1291 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 1 10:00:03.669643 systemd[1]: Reloading...
Nov 1 10:00:03.741895 zram_generator::config[1351]: No configuration found.
Nov 1 10:00:03.767826 systemd-resolved[1312]: Positive Trust Anchors:
Nov 1 10:00:03.767936 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 10:00:03.767943 systemd-resolved[1312]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 1 10:00:03.767976 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 10:00:03.772004 systemd-resolved[1312]: Defaulting to hostname 'linux'.
Nov 1 10:00:04.131962 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 1 10:00:04.132244 systemd[1]: Reloading finished in 462 ms.
Nov 1 10:00:04.167138 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 1 10:00:04.169369 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 10:00:04.171685 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 1 10:00:04.176522 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 10:00:04.190353 systemd[1]: Starting ensure-sysext.service...
Nov 1 10:00:04.192979 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 10:00:04.207752 systemd[1]: Reload requested from client PID 1387 ('systemctl') (unit ensure-sysext.service)...
Nov 1 10:00:04.207925 systemd[1]: Reloading...
Nov 1 10:00:04.216631 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 1 10:00:04.216673 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 1 10:00:04.217014 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 1 10:00:04.217353 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 1 10:00:04.218465 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 1 10:00:04.218818 systemd-tmpfiles[1388]: ACLs are not supported, ignoring.
Nov 1 10:00:04.219132 systemd-tmpfiles[1388]: ACLs are not supported, ignoring.
Nov 1 10:00:04.226301 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 10:00:04.226314 systemd-tmpfiles[1388]: Skipping /boot
Nov 1 10:00:04.237322 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 10:00:04.237336 systemd-tmpfiles[1388]: Skipping /boot
Nov 1 10:00:04.274920 zram_generator::config[1421]: No configuration found.
Nov 1 10:00:04.469666 systemd[1]: Reloading finished in 261 ms.
Nov 1 10:00:04.492962 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 1 10:00:04.525056 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 10:00:04.536461 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 1 10:00:04.539583 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 1 10:00:04.542671 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 1 10:00:04.550104 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 1 10:00:04.556117 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 10:00:04.563146 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 1 10:00:04.569717 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:00:04.570960 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 10:00:04.572443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 10:00:04.576246 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 10:00:04.582108 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 10:00:04.583971 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 10:00:04.584116 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 1 10:00:04.584240 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:00:04.589983 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 10:00:04.590378 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 10:00:04.594763 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:00:04.595094 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 10:00:04.595330 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 10:00:04.595430 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 1 10:00:04.595547 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 10:00:04.595643 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:00:04.600567 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:00:04.600781 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 10:00:04.609318 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 10:00:04.612793 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 10:00:04.615087 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 10:00:04.615210 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 1 10:00:04.615355 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:00:04.616780 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 1 10:00:04.620127 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 10:00:04.625582 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 10:00:04.628138 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 10:00:04.628384 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 10:00:04.631140 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 10:00:04.631616 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 10:00:04.633836 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 10:00:04.634306 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 10:00:04.635622 systemd-udevd[1461]: Using default interface naming scheme 'v257'.
Nov 1 10:00:04.643005 augenrules[1488]: No rules
Nov 1 10:00:04.645139 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 1 10:00:04.645898 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 1 10:00:04.648251 systemd[1]: Finished ensure-sysext.service.
Nov 1 10:00:04.659296 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 10:00:04.659546 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 10:00:04.662733 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 1 10:00:04.665413 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 1 10:00:04.669437 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 10:00:04.683577 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 10:00:04.688699 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 1 10:00:04.692215 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 10:00:04.796090 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 1 10:00:04.830379 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 1 10:00:04.834531 systemd[1]: Reached target time-set.target - System Time Set.
Nov 1 10:00:04.856034 systemd-networkd[1513]: lo: Link UP
Nov 1 10:00:04.856048 systemd-networkd[1513]: lo: Gained carrier
Nov 1 10:00:04.858007 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 10:00:04.858015 systemd-networkd[1513]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 1 10:00:04.858020 systemd-networkd[1513]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 10:00:04.860043 systemd[1]: Reached target network.target - Network.
Nov 1 10:00:04.862266 systemd-networkd[1513]: eth0: Link UP
Nov 1 10:00:04.862549 systemd-networkd[1513]: eth0: Gained carrier
Nov 1 10:00:04.862569 systemd-networkd[1513]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 1 10:00:04.864046 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 1 10:00:04.868600 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 1 10:00:04.879915 systemd-networkd[1513]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 1 10:00:04.880606 systemd-timesyncd[1500]: Network configuration changed, trying to establish connection.
Nov 1 10:00:05.412233 systemd-timesyncd[1500]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 1 10:00:05.412450 systemd-timesyncd[1500]: Initial clock synchronization to Sat 2025-11-01 10:00:05.412016 UTC.
Nov 1 10:00:05.413542 systemd-resolved[1312]: Clock change detected. Flushing caches.
Nov 1 10:00:05.419429 kernel: mousedev: PS/2 mouse device common for all mice
Nov 1 10:00:05.444426 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 1 10:00:05.448179 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 1 10:00:05.451089 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 1 10:00:05.455714 kernel: ACPI: button: Power Button [PWRF]
Nov 1 10:00:05.456626 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 1 10:00:05.481270 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Nov 1 10:00:05.481843 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 1 10:00:05.482260 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 1 10:00:05.485485 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 1 10:00:05.642036 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 10:00:05.655334 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 10:00:05.655640 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 10:00:05.659431 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 10:00:05.818117 kernel: kvm_amd: TSC scaling supported
Nov 1 10:00:05.818496 kernel: kvm_amd: Nested Virtualization enabled
Nov 1 10:00:05.818518 kernel: kvm_amd: Nested Paging enabled
Nov 1 10:00:05.819743 kernel: kvm_amd: LBR virtualization supported
Nov 1 10:00:05.821185 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 1 10:00:05.821275 kernel: kvm_amd: Virtual GIF supported
Nov 1 10:00:05.837314 ldconfig[1459]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 1 10:00:05.845816 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 1 10:00:05.851323 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 1 10:00:05.859599 kernel: EDAC MC: Ver: 3.0.0
Nov 1 10:00:05.865533 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 10:00:05.905242 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 1 10:00:05.907515 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 10:00:05.909366 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 1 10:00:05.911423 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 1 10:00:05.913484 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 1 10:00:05.915587 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 1 10:00:05.917435 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 1 10:00:05.919558 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 1 10:00:05.921931 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 1 10:00:05.921969 systemd[1]: Reached target paths.target - Path Units.
Nov 1 10:00:05.923501 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 10:00:05.926228 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 1 10:00:05.929888 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 1 10:00:05.933587 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 1 10:00:05.935766 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 1 10:00:05.937764 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 1 10:00:05.944902 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 1 10:00:05.946968 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 1 10:00:05.949824 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 1 10:00:05.952322 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 10:00:05.953859 systemd[1]: Reached target basic.target - Basic System.
Nov 1 10:00:05.955400 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 1 10:00:05.955431 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 1 10:00:05.956590 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 1 10:00:05.959570 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 1 10:00:05.979484 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 1 10:00:05.983025 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 1 10:00:05.986163 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 1 10:00:05.987997 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 1 10:00:05.989146 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 1 10:00:05.992475 jq[1576]: false
Nov 1 10:00:05.993131 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 1 10:00:05.996489 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 1 10:00:06.001679 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 1 10:00:06.006097 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 1 10:00:06.011154 extend-filesystems[1577]: Found /dev/vda6
Nov 1 10:00:06.012463 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 1 10:00:06.015799 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Refreshing passwd entry cache
Nov 1 10:00:06.015317 oslogin_cache_refresh[1578]: Refreshing passwd entry cache
Nov 1 10:00:06.015438 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 1 10:00:06.016447 extend-filesystems[1577]: Found /dev/vda9
Nov 1 10:00:06.016080 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 1 10:00:06.019191 extend-filesystems[1577]: Checking size of /dev/vda9
Nov 1 10:00:06.020904 systemd[1]: Starting update-engine.service - Update Engine...
Nov 1 10:00:06.024586 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 1 10:00:06.031787 oslogin_cache_refresh[1578]: Failure getting users, quitting
Nov 1 10:00:06.033982 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Failure getting users, quitting
Nov 1 10:00:06.033982 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 1 10:00:06.033982 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Refreshing group entry cache
Nov 1 10:00:06.029455 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 1 10:00:06.031812 oslogin_cache_refresh[1578]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 1 10:00:06.031954 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 10:00:06.031865 oslogin_cache_refresh[1578]: Refreshing group entry cache
Nov 1 10:00:06.032226 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 1 10:00:06.033177 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 1 10:00:06.034591 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 1 10:00:06.041684 oslogin_cache_refresh[1578]: Failure getting groups, quitting
Nov 1 10:00:06.043650 jq[1597]: true
Nov 1 10:00:06.043816 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Failure getting groups, quitting
Nov 1 10:00:06.043816 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 1 10:00:06.043884 extend-filesystems[1577]: Resized partition /dev/vda9
Nov 1 10:00:06.041696 oslogin_cache_refresh[1578]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 1 10:00:06.044780 systemd[1]: motdgen.service: Deactivated successfully.
Nov 1 10:00:06.045082 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 1 10:00:06.046610 extend-filesystems[1611]: resize2fs 1.47.3 (8-Jul-2025)
Nov 1 10:00:06.056490 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Nov 1 10:00:06.047922 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 1 10:00:06.048603 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 1 10:00:06.083767 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Nov 1 10:00:06.083925 jq[1612]: true
Nov 1 10:00:06.106777 extend-filesystems[1611]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 1 10:00:06.106777 extend-filesystems[1611]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 1 10:00:06.106777 extend-filesystems[1611]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Nov 1 10:00:06.125581 extend-filesystems[1577]: Resized filesystem in /dev/vda9
Nov 1 10:00:06.129026 update_engine[1591]: I20251101 10:00:06.123339 1591 main.cc:92] Flatcar Update Engine starting
Nov 1 10:00:06.112264 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 1 10:00:06.112816 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 1 10:00:06.151231 tar[1608]: linux-amd64/LICENSE
Nov 1 10:00:06.151231 tar[1608]: linux-amd64/helm
Nov 1 10:00:06.173639 systemd-logind[1588]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 1 10:00:06.173678 systemd-logind[1588]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 1 10:00:06.174529 systemd-logind[1588]: New seat seat0.
Nov 1 10:00:06.176980 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 1 10:00:06.186894 dbus-daemon[1574]: [system] SELinux support is enabled
Nov 1 10:00:06.187260 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 1 10:00:06.192964 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 10:00:06.193030 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 1 10:00:06.195640 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 10:00:06.195687 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 1 10:00:06.195922 update_engine[1591]: I20251101 10:00:06.195843 1591 update_check_scheduler.cc:74] Next update check in 7m50s
Nov 1 10:00:06.201351 systemd[1]: Started update-engine.service - Update Engine.
Nov 1 10:00:06.204052 dbus-daemon[1574]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 1 10:00:06.218840 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 1 10:00:06.222468 bash[1644]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 10:00:06.227482 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 1 10:00:06.232180 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 1 10:00:06.380698 locksmithd[1645]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 1 10:00:06.616183 containerd[1615]: time="2025-11-01T10:00:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 1 10:00:06.617806 containerd[1615]: time="2025-11-01T10:00:06.617767323Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4
Nov 1 10:00:06.632407 containerd[1615]: time="2025-11-01T10:00:06.631534521Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.869µs"
Nov 1 10:00:06.632407 containerd[1615]: time="2025-11-01T10:00:06.631582190Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 1 10:00:06.632407 containerd[1615]: time="2025-11-01T10:00:06.631633987Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 1 10:00:06.632407 containerd[1615]: time="2025-11-01T10:00:06.631647653Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 1 10:00:06.632407 containerd[1615]: time="2025-11-01T10:00:06.631865472Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 1 10:00:06.632407 containerd[1615]: time="2025-11-01T10:00:06.631880540Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 1 10:00:06.632407 containerd[1615]: time="2025-11-01T10:00:06.631956452Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 1 10:00:06.632407 containerd[1615]: time="2025-11-01T10:00:06.631967783Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 1 10:00:06.632407 containerd[1615]: time="2025-11-01T10:00:06.632259230Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 1 10:00:06.632407 containerd[1615]: time="2025-11-01T10:00:06.632274328Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 1 10:00:06.632407 containerd[1615]: time="2025-11-01T10:00:06.632284898Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 1 10:00:06.632407 containerd[1615]: time="2025-11-01T10:00:06.632294005Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 1 10:00:06.632820 containerd[1615]: time="2025-11-01T10:00:06.632799974Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 1 10:00:06.632888 containerd[1615]: time="2025-11-01T10:00:06.632874143Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 1 10:00:06.633142 containerd[1615]: time="2025-11-01T10:00:06.633122599Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 1 10:00:06.633470 containerd[1615]: time="2025-11-01T10:00:06.633451185Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 1 10:00:06.633659 containerd[1615]: time="2025-11-01T10:00:06.633642604Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 1 10:00:06.633712 containerd[1615]: time="2025-11-01T10:00:06.633696926Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 1 10:00:06.633804 containerd[1615]: time="2025-11-01T10:00:06.633787716Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 1 10:00:06.634691 containerd[1615]: time="2025-11-01T10:00:06.634645826Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 1 10:00:06.634823 containerd[1615]: time="2025-11-01T10:00:06.634794354Z" level=info msg="metadata content store policy set" policy=shared
Nov 1 10:00:06.641846 containerd[1615]: time="2025-11-01T10:00:06.641811804Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 1 10:00:06.641897 containerd[1615]: time="2025-11-01T10:00:06.641872959Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 1 10:00:06.642037 containerd[1615]: time="2025-11-01T10:00:06.642001079Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 1 10:00:06.642037 containerd[1615]: time="2025-11-01T10:00:06.642022379Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 1 10:00:06.642081 containerd[1615]: time="2025-11-01T10:00:06.642038329Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 1 10:00:06.642081 containerd[1615]: time="2025-11-01T10:00:06.642051744Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 1 10:00:06.642081 containerd[1615]: time="2025-11-01T10:00:06.642064508Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 1 10:00:06.642081 containerd[1615]: time="2025-11-01T10:00:06.642074617Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 1 10:00:06.642155 containerd[1615]: time="2025-11-01T10:00:06.642088734Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 1 10:00:06.642155 containerd[1615]: time="2025-11-01T10:00:06.642105505Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 1 10:00:06.642155 containerd[1615]: time="2025-11-01T10:00:06.642117418Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 1 10:00:06.642155 containerd[1615]: time="2025-11-01T10:00:06.642128909Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 1 10:00:06.642155 containerd[1615]: time="2025-11-01T10:00:06.642138728Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 1 10:00:06.642155 containerd[1615]: time="2025-11-01T10:00:06.642150750Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 1 10:00:06.642303 containerd[1615]: time="2025-11-01T10:00:06.642281375Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 1 10:00:06.642331 containerd[1615]: time="2025-11-01T10:00:06.642311702Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 1 10:00:06.642366 containerd[1615]: time="2025-11-01T10:00:06.642350595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 1 10:00:06.642409 containerd[1615]: time="2025-11-01T10:00:06.642375832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 1 10:00:06.642497 containerd[1615]: time="2025-11-01T10:00:06.642475118Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 1 10:00:06.642520 containerd[1615]: time="2025-11-01T10:00:06.642496609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 1 10:00:06.642520 containerd[1615]: time="2025-11-01T10:00:06.642508701Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 1 10:00:06.642565 containerd[1615]: time="2025-11-01T10:00:06.642524471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 1 10:00:06.642565 containerd[1615]: time="2025-11-01T10:00:06.642544398Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 1 10:00:06.642565 containerd[1615]: time="2025-11-01T10:00:06.642555700Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 1 10:00:06.642616 containerd[1615]: time="2025-11-01T10:00:06.642566189Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 1 10:00:06.642616 containerd[1615]: time="2025-11-01T10:00:06.642596246Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 1 10:00:06.642681 containerd[1615]: time="2025-11-01T10:00:06.642660015Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 1 10:00:06.642681 containerd[1615]: time="2025-11-01T10:00:06.642677057Z" level=info msg="Start snapshots syncer"
Nov 1 10:00:06.642729
containerd[1615]: time="2025-11-01T10:00:06.642719897Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 1 10:00:06.643241 containerd[1615]: time="2025-11-01T10:00:06.643186653Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"
}" Nov 1 10:00:06.643503 containerd[1615]: time="2025-11-01T10:00:06.643251304Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 1 10:00:06.643503 containerd[1615]: time="2025-11-01T10:00:06.643369866Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 1 10:00:06.643571 containerd[1615]: time="2025-11-01T10:00:06.643546488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 1 10:00:06.643593 containerd[1615]: time="2025-11-01T10:00:06.643571805Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 1 10:00:06.643593 containerd[1615]: time="2025-11-01T10:00:06.643582746Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 1 10:00:06.643632 containerd[1615]: time="2025-11-01T10:00:06.643593496Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 1 10:00:06.643632 containerd[1615]: time="2025-11-01T10:00:06.643605118Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 1 10:00:06.643632 containerd[1615]: time="2025-11-01T10:00:06.643618332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 1 10:00:06.643632 containerd[1615]: time="2025-11-01T10:00:06.643628792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 1 10:00:06.643893 containerd[1615]: time="2025-11-01T10:00:06.643639262Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 1 10:00:06.643893 containerd[1615]: time="2025-11-01T10:00:06.643649882Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 1 
10:00:06.643893 containerd[1615]: time="2025-11-01T10:00:06.643697270Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 1 10:00:06.643893 containerd[1615]: time="2025-11-01T10:00:06.643709614Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 1 10:00:06.643893 containerd[1615]: time="2025-11-01T10:00:06.643730132Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 1 10:00:06.643893 containerd[1615]: time="2025-11-01T10:00:06.643825501Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 1 10:00:06.643893 containerd[1615]: time="2025-11-01T10:00:06.643836361Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 1 10:00:06.643893 containerd[1615]: time="2025-11-01T10:00:06.643857641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 1 10:00:06.643893 containerd[1615]: time="2025-11-01T10:00:06.643870696Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 1 10:00:06.643893 containerd[1615]: time="2025-11-01T10:00:06.643889160Z" level=info msg="runtime interface created" Nov 1 10:00:06.643893 containerd[1615]: time="2025-11-01T10:00:06.643895272Z" level=info msg="created NRI interface" Nov 1 10:00:06.644095 containerd[1615]: time="2025-11-01T10:00:06.643913817Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 1 10:00:06.644095 containerd[1615]: time="2025-11-01T10:00:06.643949884Z" level=info msg="Connect containerd service" Nov 1 10:00:06.644095 containerd[1615]: time="2025-11-01T10:00:06.643973468Z" level=info msg="using 
experimental NRI integration - disable nri plugin to prevent this" Nov 1 10:00:06.645095 containerd[1615]: time="2025-11-01T10:00:06.645058914Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 10:00:06.658023 sshd_keygen[1598]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 10:00:06.685744 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 10:00:06.692239 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 10:00:06.720139 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 10:00:06.720468 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 10:00:06.726729 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 10:00:06.752268 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 1 10:00:06.757633 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 10:00:06.763731 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 10:00:06.776026 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 10:00:06.785464 tar[1608]: linux-amd64/README.md Nov 1 10:00:06.822193 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 1 10:00:06.876169 containerd[1615]: time="2025-11-01T10:00:06.876026621Z" level=info msg="Start subscribing containerd event" Nov 1 10:00:06.876335 containerd[1615]: time="2025-11-01T10:00:06.876118884Z" level=info msg="Start recovering state" Nov 1 10:00:06.876548 containerd[1615]: time="2025-11-01T10:00:06.876513374Z" level=info msg="Start event monitor" Nov 1 10:00:06.876590 containerd[1615]: time="2025-11-01T10:00:06.876558959Z" level=info msg="Start cni network conf syncer for default" Nov 1 10:00:06.876590 containerd[1615]: time="2025-11-01T10:00:06.876576442Z" level=info msg="Start streaming server" Nov 1 10:00:06.876652 containerd[1615]: time="2025-11-01T10:00:06.876605226Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 1 10:00:06.876652 containerd[1615]: time="2025-11-01T10:00:06.876629021Z" level=info msg="runtime interface starting up..." Nov 1 10:00:06.876652 containerd[1615]: time="2025-11-01T10:00:06.876640212Z" level=info msg="starting plugins..." Nov 1 10:00:06.877045 containerd[1615]: time="2025-11-01T10:00:06.876680697Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 1 10:00:06.877045 containerd[1615]: time="2025-11-01T10:00:06.876736763Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 10:00:06.877045 containerd[1615]: time="2025-11-01T10:00:06.876861256Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 10:00:06.877288 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 10:00:06.877921 containerd[1615]: time="2025-11-01T10:00:06.877886028Z" level=info msg="containerd successfully booted in 0.262438s" Nov 1 10:00:06.894576 systemd-networkd[1513]: eth0: Gained IPv6LL Nov 1 10:00:06.898472 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 10:00:06.901340 systemd[1]: Reached target network-online.target - Network is Online. 
Nov 1 10:00:06.905130 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 1 10:00:06.909711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:00:06.912992 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 10:00:06.941810 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 1 10:00:06.942125 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 1 10:00:06.945053 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 10:00:06.948359 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 10:00:08.334777 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 1 10:00:08.338452 systemd[1]: Started sshd@0-10.0.0.25:22-10.0.0.1:37318.service - OpenSSH per-connection server daemon (10.0.0.1:37318). Nov 1 10:00:08.358840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:00:08.361726 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 10:00:08.364445 systemd[1]: Startup finished in 3.027s (kernel) + 6.527s (initrd) + 5.629s (userspace) = 15.184s. Nov 1 10:00:08.378748 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 10:00:08.407247 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 37318 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:00:08.409299 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:00:08.417458 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 10:00:08.418876 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 10:00:08.664469 systemd-logind[1588]: New session 1 of user core. 
Nov 1 10:00:08.676703 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 10:00:08.680094 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 10:00:08.696213 (systemd)[1727]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 10:00:08.698628 systemd-logind[1588]: New session c1 of user core. Nov 1 10:00:08.852132 systemd[1727]: Queued start job for default target default.target. Nov 1 10:00:08.868843 systemd[1727]: Created slice app.slice - User Application Slice. Nov 1 10:00:08.868867 systemd[1727]: Reached target paths.target - Paths. Nov 1 10:00:08.868910 systemd[1727]: Reached target timers.target - Timers. Nov 1 10:00:08.870580 systemd[1727]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 10:00:08.882909 systemd[1727]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 10:00:08.883044 systemd[1727]: Reached target sockets.target - Sockets. Nov 1 10:00:08.883080 systemd[1727]: Reached target basic.target - Basic System. Nov 1 10:00:08.883121 systemd[1727]: Reached target default.target - Main User Target. Nov 1 10:00:08.883152 systemd[1727]: Startup finished in 173ms. Nov 1 10:00:08.883982 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 10:00:08.886088 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 10:00:08.909721 systemd[1]: Started sshd@1-10.0.0.25:22-10.0.0.1:37320.service - OpenSSH per-connection server daemon (10.0.0.1:37320). 
Nov 1 10:00:08.950467 kubelet[1718]: E1101 10:00:08.950417 1718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 10:00:08.954605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 10:00:08.954820 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 10:00:08.955237 systemd[1]: kubelet.service: Consumed 1.823s CPU time, 266.5M memory peak. Nov 1 10:00:08.966751 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 37320 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:00:08.968708 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:00:08.973218 systemd-logind[1588]: New session 2 of user core. Nov 1 10:00:08.986513 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 1 10:00:09.000053 sshd[1748]: Connection closed by 10.0.0.1 port 37320 Nov 1 10:00:09.000475 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Nov 1 10:00:09.013250 systemd[1]: sshd@1-10.0.0.25:22-10.0.0.1:37320.service: Deactivated successfully. Nov 1 10:00:09.015245 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 10:00:09.016060 systemd-logind[1588]: Session 2 logged out. Waiting for processes to exit. Nov 1 10:00:09.018883 systemd[1]: Started sshd@2-10.0.0.25:22-10.0.0.1:37326.service - OpenSSH per-connection server daemon (10.0.0.1:37326). Nov 1 10:00:09.019748 systemd-logind[1588]: Removed session 2. 
Nov 1 10:00:09.082218 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 37326 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:00:09.084517 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:00:09.091152 systemd-logind[1588]: New session 3 of user core. Nov 1 10:00:09.099543 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 10:00:09.112081 sshd[1757]: Connection closed by 10.0.0.1 port 37326 Nov 1 10:00:09.112549 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Nov 1 10:00:09.126302 systemd[1]: sshd@2-10.0.0.25:22-10.0.0.1:37326.service: Deactivated successfully. Nov 1 10:00:09.129244 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 10:00:09.130331 systemd-logind[1588]: Session 3 logged out. Waiting for processes to exit. Nov 1 10:00:09.134712 systemd[1]: Started sshd@3-10.0.0.25:22-10.0.0.1:37338.service - OpenSSH per-connection server daemon (10.0.0.1:37338). Nov 1 10:00:09.135585 systemd-logind[1588]: Removed session 3. Nov 1 10:00:09.199306 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 37338 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:00:09.201452 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:00:09.206651 systemd-logind[1588]: New session 4 of user core. Nov 1 10:00:09.216566 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 10:00:09.232357 sshd[1766]: Connection closed by 10.0.0.1 port 37338 Nov 1 10:00:09.232739 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Nov 1 10:00:09.246671 systemd[1]: sshd@3-10.0.0.25:22-10.0.0.1:37338.service: Deactivated successfully. Nov 1 10:00:09.249000 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 10:00:09.250030 systemd-logind[1588]: Session 4 logged out. Waiting for processes to exit. 
Nov 1 10:00:09.253701 systemd[1]: Started sshd@4-10.0.0.25:22-10.0.0.1:37354.service - OpenSSH per-connection server daemon (10.0.0.1:37354). Nov 1 10:00:09.254348 systemd-logind[1588]: Removed session 4. Nov 1 10:00:09.311099 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 37354 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:00:09.312481 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:00:09.317349 systemd-logind[1588]: New session 5 of user core. Nov 1 10:00:09.326542 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 1 10:00:09.362644 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 10:00:09.363238 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 10:00:09.387630 sudo[1776]: pam_unix(sudo:session): session closed for user root Nov 1 10:00:09.390354 sshd[1775]: Connection closed by 10.0.0.1 port 37354 Nov 1 10:00:09.390789 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Nov 1 10:00:09.401048 systemd[1]: sshd@4-10.0.0.25:22-10.0.0.1:37354.service: Deactivated successfully. Nov 1 10:00:09.403069 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 10:00:09.403983 systemd-logind[1588]: Session 5 logged out. Waiting for processes to exit. Nov 1 10:00:09.406769 systemd[1]: Started sshd@5-10.0.0.25:22-10.0.0.1:37368.service - OpenSSH per-connection server daemon (10.0.0.1:37368). Nov 1 10:00:09.407608 systemd-logind[1588]: Removed session 5. Nov 1 10:00:09.472248 sshd[1782]: Accepted publickey for core from 10.0.0.1 port 37368 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:00:09.473883 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:00:09.480013 systemd-logind[1588]: New session 6 of user core. 
Nov 1 10:00:09.495573 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 1 10:00:09.514974 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 10:00:09.515461 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 10:00:09.524586 sudo[1787]: pam_unix(sudo:session): session closed for user root Nov 1 10:00:09.534732 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 1 10:00:09.535058 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 10:00:09.547921 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 1 10:00:09.607876 augenrules[1809]: No rules Nov 1 10:00:09.609608 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 10:00:09.609914 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 1 10:00:09.611849 sudo[1786]: pam_unix(sudo:session): session closed for user root Nov 1 10:00:09.614320 sshd[1785]: Connection closed by 10.0.0.1 port 37368 Nov 1 10:00:09.614745 sshd-session[1782]: pam_unix(sshd:session): session closed for user core Nov 1 10:00:09.624253 systemd[1]: sshd@5-10.0.0.25:22-10.0.0.1:37368.service: Deactivated successfully. Nov 1 10:00:09.626275 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 10:00:09.627171 systemd-logind[1588]: Session 6 logged out. Waiting for processes to exit. Nov 1 10:00:09.630089 systemd[1]: Started sshd@6-10.0.0.25:22-10.0.0.1:37372.service - OpenSSH per-connection server daemon (10.0.0.1:37372). Nov 1 10:00:09.630877 systemd-logind[1588]: Removed session 6. 
Nov 1 10:00:09.689148 sshd[1818]: Accepted publickey for core from 10.0.0.1 port 37372 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:00:09.690716 sshd-session[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:00:09.695869 systemd-logind[1588]: New session 7 of user core. Nov 1 10:00:09.704683 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 10:00:09.734108 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 10:00:09.734589 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 10:00:10.724165 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 10:00:10.751301 (dockerd)[1843]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 10:00:11.429696 dockerd[1843]: time="2025-11-01T10:00:11.429624582Z" level=info msg="Starting up" Nov 1 10:00:11.430744 dockerd[1843]: time="2025-11-01T10:00:11.430691944Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 1 10:00:11.455717 dockerd[1843]: time="2025-11-01T10:00:11.455667292Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 1 10:00:12.028277 dockerd[1843]: time="2025-11-01T10:00:12.028190347Z" level=info msg="Loading containers: start." Nov 1 10:00:12.040422 kernel: Initializing XFRM netlink socket Nov 1 10:00:12.343163 systemd-networkd[1513]: docker0: Link UP Nov 1 10:00:12.348018 dockerd[1843]: time="2025-11-01T10:00:12.347952074Z" level=info msg="Loading containers: done." Nov 1 10:00:12.363424 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck209530157-merged.mount: Deactivated successfully. 
Nov 1 10:00:12.365259 dockerd[1843]: time="2025-11-01T10:00:12.365215039Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 10:00:12.365329 dockerd[1843]: time="2025-11-01T10:00:12.365307893Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 1 10:00:12.365463 dockerd[1843]: time="2025-11-01T10:00:12.365433729Z" level=info msg="Initializing buildkit" Nov 1 10:00:12.397416 dockerd[1843]: time="2025-11-01T10:00:12.397352327Z" level=info msg="Completed buildkit initialization" Nov 1 10:00:12.404486 dockerd[1843]: time="2025-11-01T10:00:12.404433968Z" level=info msg="Daemon has completed initialization" Nov 1 10:00:12.404607 dockerd[1843]: time="2025-11-01T10:00:12.404537683Z" level=info msg="API listen on /run/docker.sock" Nov 1 10:00:12.404781 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 10:00:13.489937 containerd[1615]: time="2025-11-01T10:00:13.489877783Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 1 10:00:14.183334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1058785421.mount: Deactivated successfully. 
Nov 1 10:00:16.033573 containerd[1615]: time="2025-11-01T10:00:16.033331176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:16.037599 containerd[1615]: time="2025-11-01T10:00:16.037548394Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=28442726" Nov 1 10:00:16.041786 containerd[1615]: time="2025-11-01T10:00:16.041716671Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:16.046849 containerd[1615]: time="2025-11-01T10:00:16.046756152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:16.048326 containerd[1615]: time="2025-11-01T10:00:16.048256125Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.558256554s" Nov 1 10:00:16.048572 containerd[1615]: time="2025-11-01T10:00:16.048336536Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 1 10:00:16.049626 containerd[1615]: time="2025-11-01T10:00:16.049572314Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 1 10:00:18.256008 containerd[1615]: time="2025-11-01T10:00:18.255917278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:18.330443 containerd[1615]: time="2025-11-01T10:00:18.330368601Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26012689" Nov 1 10:00:18.431229 containerd[1615]: time="2025-11-01T10:00:18.431153879Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:18.484983 containerd[1615]: time="2025-11-01T10:00:18.484928215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:18.486120 containerd[1615]: time="2025-11-01T10:00:18.486077931Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.436452919s" Nov 1 10:00:18.486222 containerd[1615]: time="2025-11-01T10:00:18.486126502Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 1 10:00:18.486830 containerd[1615]: time="2025-11-01T10:00:18.486795557Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 1 10:00:19.205431 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 10:00:19.207549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:00:20.014410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 1 10:00:20.031902 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 10:00:20.158665 kubelet[2131]: E1101 10:00:20.158576 2131 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 10:00:20.165854 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 10:00:20.166089 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 10:00:20.166570 systemd[1]: kubelet.service: Consumed 350ms CPU time, 110.6M memory peak. Nov 1 10:00:21.382998 containerd[1615]: time="2025-11-01T10:00:21.382916678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:21.400869 containerd[1615]: time="2025-11-01T10:00:21.400809423Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20148794" Nov 1 10:00:21.412800 containerd[1615]: time="2025-11-01T10:00:21.412744836Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:21.416360 containerd[1615]: time="2025-11-01T10:00:21.416319350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:21.417332 containerd[1615]: time="2025-11-01T10:00:21.417293427Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id 
\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.930467222s" Nov 1 10:00:21.417408 containerd[1615]: time="2025-11-01T10:00:21.417333762Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 1 10:00:21.417918 containerd[1615]: time="2025-11-01T10:00:21.417889895Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 1 10:00:23.120988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2139580153.mount: Deactivated successfully. Nov 1 10:00:23.918512 containerd[1615]: time="2025-11-01T10:00:23.918431764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:23.919685 containerd[1615]: time="2025-11-01T10:00:23.919658054Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31925747" Nov 1 10:00:23.920934 containerd[1615]: time="2025-11-01T10:00:23.920887660Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:23.923082 containerd[1615]: time="2025-11-01T10:00:23.923052590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:23.923738 containerd[1615]: time="2025-11-01T10:00:23.923691959Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.505771406s" Nov 1 10:00:23.923738 containerd[1615]: time="2025-11-01T10:00:23.923727625Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 1 10:00:23.924232 containerd[1615]: time="2025-11-01T10:00:23.924201073Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 1 10:00:25.095353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3794031298.mount: Deactivated successfully. Nov 1 10:00:26.345262 containerd[1615]: time="2025-11-01T10:00:26.345174583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:26.346046 containerd[1615]: time="2025-11-01T10:00:26.345994450Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20788779" Nov 1 10:00:26.347354 containerd[1615]: time="2025-11-01T10:00:26.347310859Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:26.350989 containerd[1615]: time="2025-11-01T10:00:26.350923083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:26.352638 containerd[1615]: time="2025-11-01T10:00:26.352589969Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.428351786s" Nov 1 10:00:26.352638 containerd[1615]: time="2025-11-01T10:00:26.352631327Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 1 10:00:26.353592 containerd[1615]: time="2025-11-01T10:00:26.353519783Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 10:00:26.858033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount189347127.mount: Deactivated successfully. Nov 1 10:00:26.864544 containerd[1615]: time="2025-11-01T10:00:26.864492800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 10:00:26.865679 containerd[1615]: time="2025-11-01T10:00:26.865633088Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 1 10:00:26.866995 containerd[1615]: time="2025-11-01T10:00:26.866956580Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 10:00:26.869051 containerd[1615]: time="2025-11-01T10:00:26.869009490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 10:00:26.869622 containerd[1615]: time="2025-11-01T10:00:26.869573869Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 515.999092ms" Nov 1 10:00:26.869622 containerd[1615]: time="2025-11-01T10:00:26.869620406Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 10:00:26.870281 containerd[1615]: time="2025-11-01T10:00:26.870233856Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 1 10:00:27.446501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3380400731.mount: Deactivated successfully. Nov 1 10:00:29.752654 containerd[1615]: time="2025-11-01T10:00:29.752580415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:29.753403 containerd[1615]: time="2025-11-01T10:00:29.753300465Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58264100" Nov 1 10:00:29.754605 containerd[1615]: time="2025-11-01T10:00:29.754551762Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:29.758260 containerd[1615]: time="2025-11-01T10:00:29.758203450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:29.759770 containerd[1615]: time="2025-11-01T10:00:29.759702972Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size 
\"58938593\" in 2.889411858s" Nov 1 10:00:29.759811 containerd[1615]: time="2025-11-01T10:00:29.759767203Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 1 10:00:30.203885 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 10:00:30.205725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:00:30.394267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:00:30.412669 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 10:00:30.460067 kubelet[2297]: E1101 10:00:30.459941 2297 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 10:00:30.464025 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 10:00:30.464242 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 10:00:30.464654 systemd[1]: kubelet.service: Consumed 218ms CPU time, 110.9M memory peak. Nov 1 10:00:32.096396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:00:32.096563 systemd[1]: kubelet.service: Consumed 218ms CPU time, 110.9M memory peak. Nov 1 10:00:32.098788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:00:32.138908 systemd[1]: Reload requested from client PID 2313 ('systemctl') (unit session-7.scope)... Nov 1 10:00:32.138927 systemd[1]: Reloading... Nov 1 10:00:32.221436 zram_generator::config[2354]: No configuration found. Nov 1 10:00:32.903316 systemd[1]: Reloading finished in 764 ms. 
Nov 1 10:00:32.973208 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 10:00:32.973358 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 10:00:32.973768 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:00:32.973827 systemd[1]: kubelet.service: Consumed 168ms CPU time, 98.4M memory peak. Nov 1 10:00:32.975814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:00:33.156812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:00:33.166666 (kubelet)[2405]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 10:00:33.218179 kubelet[2405]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 10:00:33.218179 kubelet[2405]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 10:00:33.218179 kubelet[2405]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 10:00:33.218465 kubelet[2405]: I1101 10:00:33.218261 2405 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 10:00:34.055416 kubelet[2405]: I1101 10:00:34.055356 2405 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 1 10:00:34.055416 kubelet[2405]: I1101 10:00:34.055403 2405 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 10:00:34.055681 kubelet[2405]: I1101 10:00:34.055657 2405 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 10:00:34.078733 kubelet[2405]: I1101 10:00:34.078683 2405 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 10:00:34.081406 kubelet[2405]: E1101 10:00:34.081064 2405 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 10:00:34.091255 kubelet[2405]: I1101 10:00:34.091221 2405 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 1 10:00:34.097130 kubelet[2405]: I1101 10:00:34.097111 2405 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 10:00:34.097466 kubelet[2405]: I1101 10:00:34.097417 2405 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 10:00:34.097635 kubelet[2405]: I1101 10:00:34.097446 2405 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 10:00:34.097741 kubelet[2405]: I1101 10:00:34.097636 2405 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 10:00:34.097741 
kubelet[2405]: I1101 10:00:34.097649 2405 container_manager_linux.go:303] "Creating device plugin manager" Nov 1 10:00:34.097844 kubelet[2405]: I1101 10:00:34.097829 2405 state_mem.go:36] "Initialized new in-memory state store" Nov 1 10:00:34.100696 kubelet[2405]: I1101 10:00:34.100656 2405 kubelet.go:480] "Attempting to sync node with API server" Nov 1 10:00:34.100696 kubelet[2405]: I1101 10:00:34.100679 2405 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 10:00:34.100785 kubelet[2405]: I1101 10:00:34.100727 2405 kubelet.go:386] "Adding apiserver pod source" Nov 1 10:00:34.103009 kubelet[2405]: I1101 10:00:34.102797 2405 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 10:00:34.108119 kubelet[2405]: E1101 10:00:34.108089 2405 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 10:00:34.108294 kubelet[2405]: E1101 10:00:34.108264 2405 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 10:00:34.108650 kubelet[2405]: I1101 10:00:34.108630 2405 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 1 10:00:34.109093 kubelet[2405]: I1101 10:00:34.109076 2405 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 10:00:34.110113 kubelet[2405]: W1101 10:00:34.110096 2405 
probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 10:00:34.112770 kubelet[2405]: I1101 10:00:34.112749 2405 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 10:00:34.112809 kubelet[2405]: I1101 10:00:34.112805 2405 server.go:1289] "Started kubelet" Nov 1 10:00:34.114684 kubelet[2405]: I1101 10:00:34.114617 2405 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 10:00:34.115084 kubelet[2405]: I1101 10:00:34.115050 2405 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 10:00:34.116676 kubelet[2405]: I1101 10:00:34.116410 2405 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 10:00:34.119065 kubelet[2405]: E1101 10:00:34.116378 2405 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.25:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.25:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873d9b126a4cf0f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 10:00:34.112769807 +0000 UTC m=+0.936990884,LastTimestamp:2025-11-01 10:00:34.112769807 +0000 UTC m=+0.936990884,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 10:00:34.119480 kubelet[2405]: I1101 10:00:34.119457 2405 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 10:00:34.120241 kubelet[2405]: I1101 10:00:34.120207 2405 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 10:00:34.120659 kubelet[2405]: I1101 10:00:34.120636 2405 server.go:317] "Adding debug handlers to kubelet server" Nov 1 10:00:34.121327 kubelet[2405]: E1101 10:00:34.121295 2405 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 10:00:34.122067 kubelet[2405]: I1101 10:00:34.121998 2405 factory.go:223] Registration of the systemd container factory successfully Nov 1 10:00:34.122208 kubelet[2405]: I1101 10:00:34.122174 2405 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 10:00:34.123670 kubelet[2405]: E1101 10:00:34.123638 2405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:00:34.124280 kubelet[2405]: I1101 10:00:34.124263 2405 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 10:00:34.124482 kubelet[2405]: I1101 10:00:34.124466 2405 reconciler.go:26] "Reconciler: start to sync state" Nov 1 10:00:34.124911 kubelet[2405]: I1101 10:00:34.124888 2405 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 10:00:34.125195 kubelet[2405]: E1101 10:00:34.125171 2405 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 10:00:34.125657 kubelet[2405]: E1101 10:00:34.125614 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="200ms" Nov 1 10:00:34.125764 kubelet[2405]: I1101 10:00:34.125684 2405 factory.go:223] Registration of the containerd container factory successfully Nov 1 10:00:34.138897 kubelet[2405]: I1101 10:00:34.137793 2405 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 10:00:34.138897 kubelet[2405]: I1101 10:00:34.137808 2405 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 10:00:34.138897 kubelet[2405]: I1101 10:00:34.137828 2405 state_mem.go:36] "Initialized new in-memory state store" Nov 1 10:00:34.224112 kubelet[2405]: E1101 10:00:34.224038 2405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:00:34.324551 kubelet[2405]: E1101 10:00:34.324448 2405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:00:34.327222 kubelet[2405]: E1101 10:00:34.327154 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="400ms" Nov 1 10:00:34.425369 kubelet[2405]: E1101 10:00:34.425338 2405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:00:34.525893 kubelet[2405]: E1101 10:00:34.525846 2405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:00:34.548970 kubelet[2405]: I1101 10:00:34.548945 2405 policy_none.go:49] "None policy: Start" Nov 1 10:00:34.549022 kubelet[2405]: I1101 10:00:34.548974 2405 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 10:00:34.549022 kubelet[2405]: I1101 10:00:34.548993 2405 
state_mem.go:35] "Initializing new in-memory state store" Nov 1 10:00:34.555539 kubelet[2405]: I1101 10:00:34.555492 2405 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 1 10:00:34.558637 kubelet[2405]: I1101 10:00:34.557277 2405 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 1 10:00:34.558637 kubelet[2405]: I1101 10:00:34.557347 2405 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 1 10:00:34.558637 kubelet[2405]: I1101 10:00:34.557372 2405 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 10:00:34.558637 kubelet[2405]: I1101 10:00:34.557435 2405 kubelet.go:2436] "Starting kubelet main sync loop" Nov 1 10:00:34.558637 kubelet[2405]: E1101 10:00:34.557486 2405 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 10:00:34.557445 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 10:00:34.561893 kubelet[2405]: E1101 10:00:34.561847 2405 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 10:00:34.570928 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 1 10:00:34.574208 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 1 10:00:34.581452 kubelet[2405]: E1101 10:00:34.581326 2405 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 10:00:34.581594 kubelet[2405]: I1101 10:00:34.581567 2405 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 10:00:34.581828 kubelet[2405]: I1101 10:00:34.581592 2405 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 10:00:34.581828 kubelet[2405]: I1101 10:00:34.581806 2405 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 10:00:34.582983 kubelet[2405]: E1101 10:00:34.582929 2405 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 10:00:34.583025 kubelet[2405]: E1101 10:00:34.583012 2405 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 10:00:34.670016 systemd[1]: Created slice kubepods-burstable-pod4064627bb0d9c8459fb5a14b37ee5fc2.slice - libcontainer container kubepods-burstable-pod4064627bb0d9c8459fb5a14b37ee5fc2.slice. 
Nov 1 10:00:34.683557 kubelet[2405]: I1101 10:00:34.683527 2405 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 10:00:34.683904 kubelet[2405]: E1101 10:00:34.683873 2405 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Nov 1 10:00:34.686350 kubelet[2405]: E1101 10:00:34.686317 2405 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:00:34.689616 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 1 10:00:34.691921 kubelet[2405]: E1101 10:00:34.691895 2405 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:00:34.694802 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
Nov 1 10:00:34.696487 kubelet[2405]: E1101 10:00:34.696467 2405 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:00:34.728089 kubelet[2405]: I1101 10:00:34.728009 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:00:34.728089 kubelet[2405]: I1101 10:00:34.728075 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:00:34.728239 kubelet[2405]: I1101 10:00:34.728099 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:00:34.728239 kubelet[2405]: I1101 10:00:34.728122 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4064627bb0d9c8459fb5a14b37ee5fc2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4064627bb0d9c8459fb5a14b37ee5fc2\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:00:34.728239 kubelet[2405]: E1101 10:00:34.728122 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="800ms" Nov 1 10:00:34.728239 kubelet[2405]: I1101 10:00:34.728155 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4064627bb0d9c8459fb5a14b37ee5fc2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4064627bb0d9c8459fb5a14b37ee5fc2\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:00:34.728239 kubelet[2405]: I1101 10:00:34.728209 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:00:34.728348 kubelet[2405]: I1101 10:00:34.728246 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:00:34.728348 kubelet[2405]: I1101 10:00:34.728276 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 1 10:00:34.728348 kubelet[2405]: I1101 10:00:34.728292 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/4064627bb0d9c8459fb5a14b37ee5fc2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4064627bb0d9c8459fb5a14b37ee5fc2\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:00:34.886505 kubelet[2405]: I1101 10:00:34.886402 2405 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 10:00:34.886903 kubelet[2405]: E1101 10:00:34.886833 2405 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Nov 1 10:00:34.987740 kubelet[2405]: E1101 10:00:34.987687 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:34.988470 containerd[1615]: time="2025-11-01T10:00:34.988429932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4064627bb0d9c8459fb5a14b37ee5fc2,Namespace:kube-system,Attempt:0,}" Nov 1 10:00:34.992668 kubelet[2405]: E1101 10:00:34.992636 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:34.993168 containerd[1615]: time="2025-11-01T10:00:34.993106773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 1 10:00:34.997367 kubelet[2405]: E1101 10:00:34.997314 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:34.997697 containerd[1615]: time="2025-11-01T10:00:34.997659070Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 1 10:00:35.197215 kubelet[2405]: E1101 10:00:35.197078 2405 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 10:00:35.212755 kubelet[2405]: E1101 10:00:35.212719 2405 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 10:00:35.288866 kubelet[2405]: I1101 10:00:35.288830 2405 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 10:00:35.289308 kubelet[2405]: E1101 10:00:35.289161 2405 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Nov 1 10:00:35.468487 containerd[1615]: time="2025-11-01T10:00:35.468320598Z" level=info msg="connecting to shim ad459c0a9983a1e7c7f91f9f4d587f4c7eabf7b336f2269fdd5ffca5786abbd3" address="unix:///run/containerd/s/3b8b74a74b8495a34bcbffecbb1f1a61111c7dcedff40e1c60a91efd3a0744f0" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:00:35.478027 containerd[1615]: time="2025-11-01T10:00:35.477966638Z" level=info msg="connecting to shim 0ef9752fec943af80a5d7c802bf58cdaf20c7fe5c39c93fc1d3101cd758e1a05" address="unix:///run/containerd/s/61ed93d5f84ef27f5c280a1c4c345e6fd103048079f13276054d2f507ed4a403" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:00:35.479698 containerd[1615]: 
time="2025-11-01T10:00:35.479234456Z" level=info msg="connecting to shim e24af1675c4ae98e0c4115c0340851e804551d97a002e0c1e3b01a50fc0e3d08" address="unix:///run/containerd/s/006040969689650c3070653a652f09ba54b87c99cfa5b75df93d5bf5fdbc9226" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:00:35.485104 kubelet[2405]: E1101 10:00:35.485064 2405 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 10:00:35.508555 systemd[1]: Started cri-containerd-0ef9752fec943af80a5d7c802bf58cdaf20c7fe5c39c93fc1d3101cd758e1a05.scope - libcontainer container 0ef9752fec943af80a5d7c802bf58cdaf20c7fe5c39c93fc1d3101cd758e1a05. Nov 1 10:00:35.513572 systemd[1]: Started cri-containerd-ad459c0a9983a1e7c7f91f9f4d587f4c7eabf7b336f2269fdd5ffca5786abbd3.scope - libcontainer container ad459c0a9983a1e7c7f91f9f4d587f4c7eabf7b336f2269fdd5ffca5786abbd3. Nov 1 10:00:35.516137 systemd[1]: Started cri-containerd-e24af1675c4ae98e0c4115c0340851e804551d97a002e0c1e3b01a50fc0e3d08.scope - libcontainer container e24af1675c4ae98e0c4115c0340851e804551d97a002e0c1e3b01a50fc0e3d08. 
Nov 1 10:00:35.529321 kubelet[2405]: E1101 10:00:35.529274 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="1.6s" Nov 1 10:00:35.561644 containerd[1615]: time="2025-11-01T10:00:35.561471610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4064627bb0d9c8459fb5a14b37ee5fc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ef9752fec943af80a5d7c802bf58cdaf20c7fe5c39c93fc1d3101cd758e1a05\"" Nov 1 10:00:35.563476 kubelet[2405]: E1101 10:00:35.563449 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:35.571859 containerd[1615]: time="2025-11-01T10:00:35.571801351Z" level=info msg="CreateContainer within sandbox \"0ef9752fec943af80a5d7c802bf58cdaf20c7fe5c39c93fc1d3101cd758e1a05\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 10:00:35.576231 containerd[1615]: time="2025-11-01T10:00:35.576180454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad459c0a9983a1e7c7f91f9f4d587f4c7eabf7b336f2269fdd5ffca5786abbd3\"" Nov 1 10:00:35.576932 kubelet[2405]: E1101 10:00:35.576902 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:35.581611 containerd[1615]: time="2025-11-01T10:00:35.581583667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"e24af1675c4ae98e0c4115c0340851e804551d97a002e0c1e3b01a50fc0e3d08\"" Nov 1 10:00:35.581918 containerd[1615]: time="2025-11-01T10:00:35.581897636Z" level=info msg="CreateContainer within sandbox \"ad459c0a9983a1e7c7f91f9f4d587f4c7eabf7b336f2269fdd5ffca5786abbd3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 10:00:35.582363 kubelet[2405]: E1101 10:00:35.582338 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:35.584755 containerd[1615]: time="2025-11-01T10:00:35.584724647Z" level=info msg="Container b3ce8d83cbcd36b95960b8c58604593c8509cecdd2e60085bd4df7aed1337898: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:00:35.586772 containerd[1615]: time="2025-11-01T10:00:35.586729848Z" level=info msg="CreateContainer within sandbox \"e24af1675c4ae98e0c4115c0340851e804551d97a002e0c1e3b01a50fc0e3d08\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 10:00:35.592596 containerd[1615]: time="2025-11-01T10:00:35.592566644Z" level=info msg="CreateContainer within sandbox \"0ef9752fec943af80a5d7c802bf58cdaf20c7fe5c39c93fc1d3101cd758e1a05\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b3ce8d83cbcd36b95960b8c58604593c8509cecdd2e60085bd4df7aed1337898\"" Nov 1 10:00:35.592716 containerd[1615]: time="2025-11-01T10:00:35.592688813Z" level=info msg="Container 946c9ed11a33452347719bfe1201aaa59898148ddcf972739c7a9ea94454c73b: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:00:35.593192 containerd[1615]: time="2025-11-01T10:00:35.593149707Z" level=info msg="StartContainer for \"b3ce8d83cbcd36b95960b8c58604593c8509cecdd2e60085bd4df7aed1337898\"" Nov 1 10:00:35.594333 containerd[1615]: time="2025-11-01T10:00:35.594308160Z" level=info msg="connecting to shim b3ce8d83cbcd36b95960b8c58604593c8509cecdd2e60085bd4df7aed1337898" 
address="unix:///run/containerd/s/61ed93d5f84ef27f5c280a1c4c345e6fd103048079f13276054d2f507ed4a403" protocol=ttrpc version=3 Nov 1 10:00:35.603831 containerd[1615]: time="2025-11-01T10:00:35.603781846Z" level=info msg="Container c4e26c1f76273d5f898d05897903f0964354b007323394e44e24b93ff127abaa: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:00:35.608147 containerd[1615]: time="2025-11-01T10:00:35.608104683Z" level=info msg="CreateContainer within sandbox \"ad459c0a9983a1e7c7f91f9f4d587f4c7eabf7b336f2269fdd5ffca5786abbd3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"946c9ed11a33452347719bfe1201aaa59898148ddcf972739c7a9ea94454c73b\"" Nov 1 10:00:35.609487 containerd[1615]: time="2025-11-01T10:00:35.609103066Z" level=info msg="StartContainer for \"946c9ed11a33452347719bfe1201aaa59898148ddcf972739c7a9ea94454c73b\"" Nov 1 10:00:35.610086 containerd[1615]: time="2025-11-01T10:00:35.610052165Z" level=info msg="connecting to shim 946c9ed11a33452347719bfe1201aaa59898148ddcf972739c7a9ea94454c73b" address="unix:///run/containerd/s/3b8b74a74b8495a34bcbffecbb1f1a61111c7dcedff40e1c60a91efd3a0744f0" protocol=ttrpc version=3 Nov 1 10:00:35.613142 containerd[1615]: time="2025-11-01T10:00:35.613110561Z" level=info msg="CreateContainer within sandbox \"e24af1675c4ae98e0c4115c0340851e804551d97a002e0c1e3b01a50fc0e3d08\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c4e26c1f76273d5f898d05897903f0964354b007323394e44e24b93ff127abaa\"" Nov 1 10:00:35.614000 containerd[1615]: time="2025-11-01T10:00:35.613692683Z" level=info msg="StartContainer for \"c4e26c1f76273d5f898d05897903f0964354b007323394e44e24b93ff127abaa\"" Nov 1 10:00:35.613790 systemd[1]: Started cri-containerd-b3ce8d83cbcd36b95960b8c58604593c8509cecdd2e60085bd4df7aed1337898.scope - libcontainer container b3ce8d83cbcd36b95960b8c58604593c8509cecdd2e60085bd4df7aed1337898. 
Nov 1 10:00:35.615635 containerd[1615]: time="2025-11-01T10:00:35.615600390Z" level=info msg="connecting to shim c4e26c1f76273d5f898d05897903f0964354b007323394e44e24b93ff127abaa" address="unix:///run/containerd/s/006040969689650c3070653a652f09ba54b87c99cfa5b75df93d5bf5fdbc9226" protocol=ttrpc version=3 Nov 1 10:00:35.629127 systemd[1]: Started cri-containerd-946c9ed11a33452347719bfe1201aaa59898148ddcf972739c7a9ea94454c73b.scope - libcontainer container 946c9ed11a33452347719bfe1201aaa59898148ddcf972739c7a9ea94454c73b. Nov 1 10:00:35.643570 systemd[1]: Started cri-containerd-c4e26c1f76273d5f898d05897903f0964354b007323394e44e24b93ff127abaa.scope - libcontainer container c4e26c1f76273d5f898d05897903f0964354b007323394e44e24b93ff127abaa. Nov 1 10:00:35.694857 containerd[1615]: time="2025-11-01T10:00:35.694807502Z" level=info msg="StartContainer for \"b3ce8d83cbcd36b95960b8c58604593c8509cecdd2e60085bd4df7aed1337898\" returns successfully" Nov 1 10:00:35.699200 containerd[1615]: time="2025-11-01T10:00:35.699151318Z" level=info msg="StartContainer for \"946c9ed11a33452347719bfe1201aaa59898148ddcf972739c7a9ea94454c73b\" returns successfully" Nov 1 10:00:35.720407 containerd[1615]: time="2025-11-01T10:00:35.720237202Z" level=info msg="StartContainer for \"c4e26c1f76273d5f898d05897903f0964354b007323394e44e24b93ff127abaa\" returns successfully" Nov 1 10:00:36.091576 kubelet[2405]: I1101 10:00:36.091006 2405 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 10:00:36.575534 kubelet[2405]: E1101 10:00:36.575153 2405 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:00:36.575534 kubelet[2405]: E1101 10:00:36.575290 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:36.579060 kubelet[2405]: E1101 
10:00:36.579036 2405 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:00:36.579158 kubelet[2405]: E1101 10:00:36.579133 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:36.580477 kubelet[2405]: E1101 10:00:36.580455 2405 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:00:36.580606 kubelet[2405]: E1101 10:00:36.580587 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:37.078740 kubelet[2405]: I1101 10:00:37.078694 2405 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 10:00:37.078906 kubelet[2405]: E1101 10:00:37.078892 2405 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 1 10:00:37.093988 kubelet[2405]: E1101 10:00:37.093946 2405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:00:37.194475 kubelet[2405]: E1101 10:00:37.194428 2405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:00:37.294989 kubelet[2405]: E1101 10:00:37.294934 2405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:00:37.395625 kubelet[2405]: E1101 10:00:37.395515 2405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:00:37.496363 kubelet[2405]: E1101 10:00:37.496309 2405 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"localhost\" not found" Nov 1 10:00:37.582563 kubelet[2405]: E1101 10:00:37.582520 2405 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:00:37.583070 kubelet[2405]: E1101 10:00:37.582604 2405 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:00:37.583070 kubelet[2405]: E1101 10:00:37.582689 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:37.583070 kubelet[2405]: E1101 10:00:37.582734 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:37.596799 kubelet[2405]: E1101 10:00:37.596748 2405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:00:37.697871 kubelet[2405]: E1101 10:00:37.697773 2405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:00:37.798721 kubelet[2405]: E1101 10:00:37.798671 2405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:00:37.899283 kubelet[2405]: E1101 10:00:37.899233 2405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:00:38.000217 kubelet[2405]: E1101 10:00:38.000076 2405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:00:38.107239 kubelet[2405]: I1101 10:00:38.107189 2405 apiserver.go:52] "Watching apiserver" Nov 1 10:00:38.126003 kubelet[2405]: I1101 10:00:38.125473 2405 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 10:00:38.126003 kubelet[2405]: I1101 10:00:38.125534 2405 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 10:00:38.133378 kubelet[2405]: I1101 10:00:38.133332 2405 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 10:00:38.137503 kubelet[2405]: I1101 10:00:38.137478 2405 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 10:00:38.141702 kubelet[2405]: E1101 10:00:38.141653 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:38.583404 kubelet[2405]: I1101 10:00:38.583357 2405 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 10:00:38.583999 kubelet[2405]: E1101 10:00:38.583411 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:38.651851 kubelet[2405]: E1101 10:00:38.651503 2405 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 10:00:38.651851 kubelet[2405]: E1101 10:00:38.651755 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:39.584332 kubelet[2405]: E1101 10:00:39.584001 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:39.612709 systemd[1]: Reload requested from client PID 2690 ('systemctl') 
(unit session-7.scope)... Nov 1 10:00:39.612730 systemd[1]: Reloading... Nov 1 10:00:39.707417 zram_generator::config[2737]: No configuration found. Nov 1 10:00:39.955483 systemd[1]: Reloading finished in 342 ms. Nov 1 10:00:39.984981 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:00:40.007471 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 10:00:40.008026 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:00:40.008124 systemd[1]: kubelet.service: Consumed 1.469s CPU time, 131.4M memory peak. Nov 1 10:00:40.011769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:00:40.267306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:00:40.276683 (kubelet)[2779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 10:00:40.324960 kubelet[2779]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 10:00:40.324960 kubelet[2779]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 10:00:40.324960 kubelet[2779]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 10:00:40.324960 kubelet[2779]: I1101 10:00:40.324213 2779 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 10:00:40.330783 kubelet[2779]: I1101 10:00:40.330752 2779 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 1 10:00:40.330783 kubelet[2779]: I1101 10:00:40.330773 2779 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 10:00:40.330970 kubelet[2779]: I1101 10:00:40.330950 2779 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 10:00:40.332011 kubelet[2779]: I1101 10:00:40.331988 2779 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 10:00:40.334691 kubelet[2779]: I1101 10:00:40.334641 2779 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 10:00:40.339568 kubelet[2779]: I1101 10:00:40.339538 2779 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 1 10:00:40.344077 kubelet[2779]: I1101 10:00:40.344041 2779 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 10:00:40.344429 kubelet[2779]: I1101 10:00:40.344343 2779 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 10:00:40.344558 kubelet[2779]: I1101 10:00:40.344381 2779 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 10:00:40.344674 kubelet[2779]: I1101 10:00:40.344564 2779 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 10:00:40.344674 
kubelet[2779]: I1101 10:00:40.344574 2779 container_manager_linux.go:303] "Creating device plugin manager" Nov 1 10:00:40.344674 kubelet[2779]: I1101 10:00:40.344641 2779 state_mem.go:36] "Initialized new in-memory state store" Nov 1 10:00:40.344840 kubelet[2779]: I1101 10:00:40.344824 2779 kubelet.go:480] "Attempting to sync node with API server" Nov 1 10:00:40.344863 kubelet[2779]: I1101 10:00:40.344841 2779 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 10:00:40.344890 kubelet[2779]: I1101 10:00:40.344876 2779 kubelet.go:386] "Adding apiserver pod source" Nov 1 10:00:40.344924 kubelet[2779]: I1101 10:00:40.344908 2779 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 10:00:40.345738 kubelet[2779]: I1101 10:00:40.345711 2779 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 1 10:00:40.346401 kubelet[2779]: I1101 10:00:40.346355 2779 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 10:00:40.354498 kubelet[2779]: I1101 10:00:40.354454 2779 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 10:00:40.354600 kubelet[2779]: I1101 10:00:40.354537 2779 server.go:1289] "Started kubelet" Nov 1 10:00:40.354915 kubelet[2779]: I1101 10:00:40.354831 2779 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 10:00:40.355165 kubelet[2779]: I1101 10:00:40.355131 2779 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 10:00:40.355216 kubelet[2779]: I1101 10:00:40.355195 2779 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 10:00:40.357107 kubelet[2779]: I1101 10:00:40.357069 2779 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 10:00:40.357175 kubelet[2779]: I1101 
10:00:40.357116 2779 server.go:317] "Adding debug handlers to kubelet server" Nov 1 10:00:40.362407 kubelet[2779]: I1101 10:00:40.362040 2779 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 10:00:40.362407 kubelet[2779]: I1101 10:00:40.362224 2779 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 10:00:40.362552 kubelet[2779]: I1101 10:00:40.362446 2779 reconciler.go:26] "Reconciler: start to sync state" Nov 1 10:00:40.364196 kubelet[2779]: I1101 10:00:40.364171 2779 factory.go:223] Registration of the systemd container factory successfully Nov 1 10:00:40.364806 kubelet[2779]: I1101 10:00:40.364726 2779 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 10:00:40.364871 kubelet[2779]: E1101 10:00:40.364230 2779 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 10:00:40.365150 kubelet[2779]: I1101 10:00:40.365123 2779 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 10:00:40.369030 kubelet[2779]: I1101 10:00:40.368987 2779 factory.go:223] Registration of the containerd container factory successfully Nov 1 10:00:40.382503 kubelet[2779]: I1101 10:00:40.382443 2779 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 1 10:00:40.384294 kubelet[2779]: I1101 10:00:40.384271 2779 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 1 10:00:40.384365 kubelet[2779]: I1101 10:00:40.384300 2779 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 1 10:00:40.384365 kubelet[2779]: I1101 10:00:40.384322 2779 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 10:00:40.384365 kubelet[2779]: I1101 10:00:40.384330 2779 kubelet.go:2436] "Starting kubelet main sync loop" Nov 1 10:00:40.385994 kubelet[2779]: E1101 10:00:40.384373 2779 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 10:00:40.408192 kubelet[2779]: I1101 10:00:40.408143 2779 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 10:00:40.408192 kubelet[2779]: I1101 10:00:40.408173 2779 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 10:00:40.408192 kubelet[2779]: I1101 10:00:40.408193 2779 state_mem.go:36] "Initialized new in-memory state store" Nov 1 10:00:40.408423 kubelet[2779]: I1101 10:00:40.408313 2779 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 10:00:40.408423 kubelet[2779]: I1101 10:00:40.408324 2779 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 10:00:40.408423 kubelet[2779]: I1101 10:00:40.408342 2779 policy_none.go:49] "None policy: Start" Nov 1 10:00:40.408423 kubelet[2779]: I1101 10:00:40.408352 2779 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 10:00:40.408423 kubelet[2779]: I1101 10:00:40.408363 2779 state_mem.go:35] "Initializing new in-memory state store" Nov 1 10:00:40.408587 kubelet[2779]: I1101 10:00:40.408483 2779 state_mem.go:75] "Updated machine memory state" Nov 1 10:00:40.412910 kubelet[2779]: E1101 10:00:40.412890 2779 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 10:00:40.413080 kubelet[2779]: I1101 10:00:40.413052 
2779 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 10:00:40.413125 kubelet[2779]: I1101 10:00:40.413072 2779 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 10:00:40.413519 kubelet[2779]: I1101 10:00:40.413497 2779 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 10:00:40.414227 kubelet[2779]: E1101 10:00:40.414202 2779 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 10:00:40.487561 kubelet[2779]: I1101 10:00:40.487496 2779 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 10:00:40.487561 kubelet[2779]: I1101 10:00:40.487496 2779 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 10:00:40.487758 kubelet[2779]: I1101 10:00:40.487645 2779 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 10:00:40.493846 kubelet[2779]: E1101 10:00:40.493795 2779 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 10:00:40.494460 kubelet[2779]: E1101 10:00:40.494403 2779 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 1 10:00:40.494523 kubelet[2779]: E1101 10:00:40.494507 2779 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 10:00:40.522570 kubelet[2779]: I1101 10:00:40.522482 2779 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 10:00:40.529192 kubelet[2779]: I1101 10:00:40.529149 2779 kubelet_node_status.go:124] "Node was previously registered" 
node="localhost" Nov 1 10:00:40.529270 kubelet[2779]: I1101 10:00:40.529235 2779 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 10:00:40.664437 kubelet[2779]: I1101 10:00:40.664379 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4064627bb0d9c8459fb5a14b37ee5fc2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4064627bb0d9c8459fb5a14b37ee5fc2\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:00:40.664619 kubelet[2779]: I1101 10:00:40.664449 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4064627bb0d9c8459fb5a14b37ee5fc2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4064627bb0d9c8459fb5a14b37ee5fc2\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:00:40.664619 kubelet[2779]: I1101 10:00:40.664483 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:00:40.664619 kubelet[2779]: I1101 10:00:40.664517 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:00:40.664619 kubelet[2779]: I1101 10:00:40.664536 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:00:40.664619 kubelet[2779]: I1101 10:00:40.664601 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:00:40.664960 kubelet[2779]: I1101 10:00:40.664628 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4064627bb0d9c8459fb5a14b37ee5fc2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4064627bb0d9c8459fb5a14b37ee5fc2\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:00:40.664960 kubelet[2779]: I1101 10:00:40.664646 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:00:40.664960 kubelet[2779]: I1101 10:00:40.664666 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 1 10:00:40.795549 kubelet[2779]: E1101 10:00:40.795298 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:40.795549 kubelet[2779]: E1101 10:00:40.795407 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:40.795549 kubelet[2779]: E1101 10:00:40.795309 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:41.345930 kubelet[2779]: I1101 10:00:41.345890 2779 apiserver.go:52] "Watching apiserver" Nov 1 10:00:41.363318 kubelet[2779]: I1101 10:00:41.363280 2779 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 10:00:41.397147 kubelet[2779]: I1101 10:00:41.397118 2779 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 10:00:41.398145 kubelet[2779]: I1101 10:00:41.398039 2779 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 10:00:41.400091 kubelet[2779]: E1101 10:00:41.400051 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:41.407497 kubelet[2779]: E1101 10:00:41.407401 2779 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 10:00:41.409014 kubelet[2779]: E1101 10:00:41.408562 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:41.409014 kubelet[2779]: E1101 10:00:41.408884 2779 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 
1 10:00:41.409129 kubelet[2779]: E1101 10:00:41.409097 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:41.437900 kubelet[2779]: I1101 10:00:41.437831 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.437807163 podStartE2EDuration="3.437807163s" podCreationTimestamp="2025-11-01 10:00:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:00:41.429280014 +0000 UTC m=+1.146789758" watchObservedRunningTime="2025-11-01 10:00:41.437807163 +0000 UTC m=+1.155316897" Nov 1 10:00:41.438073 kubelet[2779]: I1101 10:00:41.437936 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.437932313 podStartE2EDuration="3.437932313s" podCreationTimestamp="2025-11-01 10:00:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:00:41.436912646 +0000 UTC m=+1.154422380" watchObservedRunningTime="2025-11-01 10:00:41.437932313 +0000 UTC m=+1.155442057" Nov 1 10:00:41.477975 kubelet[2779]: I1101 10:00:41.476370 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.476350422 podStartE2EDuration="3.476350422s" podCreationTimestamp="2025-11-01 10:00:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:00:41.454430793 +0000 UTC m=+1.171940537" watchObservedRunningTime="2025-11-01 10:00:41.476350422 +0000 UTC m=+1.193860166" Nov 1 10:00:42.398553 kubelet[2779]: E1101 10:00:42.398512 2779 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:42.399051 kubelet[2779]: E1101 10:00:42.399030 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:45.669176 kubelet[2779]: E1101 10:00:45.669135 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:45.865128 kubelet[2779]: I1101 10:00:45.865096 2779 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 10:00:45.865493 containerd[1615]: time="2025-11-01T10:00:45.865458425Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 10:00:45.865919 kubelet[2779]: I1101 10:00:45.865653 2779 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 10:00:46.308344 kubelet[2779]: E1101 10:00:46.308302 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:46.404180 kubelet[2779]: E1101 10:00:46.404140 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:46.404406 kubelet[2779]: E1101 10:00:46.404349 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:47.041362 systemd[1]: Created slice kubepods-besteffort-poda6fd51db_5ac8_42c6_92c1_b1402f4c1db6.slice - libcontainer container 
kubepods-besteffort-poda6fd51db_5ac8_42c6_92c1_b1402f4c1db6.slice. Nov 1 10:00:47.061647 systemd[1]: Created slice kubepods-besteffort-pod5c7376d9_9b1b_4472_a0f8_fd78c44a9fdb.slice - libcontainer container kubepods-besteffort-pod5c7376d9_9b1b_4472_a0f8_fd78c44a9fdb.slice. Nov 1 10:00:47.107766 kubelet[2779]: I1101 10:00:47.107704 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c7376d9-9b1b-4472-a0f8-fd78c44a9fdb-kube-proxy\") pod \"kube-proxy-6fwfz\" (UID: \"5c7376d9-9b1b-4472-a0f8-fd78c44a9fdb\") " pod="kube-system/kube-proxy-6fwfz" Nov 1 10:00:47.107766 kubelet[2779]: I1101 10:00:47.107752 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5blrm\" (UniqueName: \"kubernetes.io/projected/5c7376d9-9b1b-4472-a0f8-fd78c44a9fdb-kube-api-access-5blrm\") pod \"kube-proxy-6fwfz\" (UID: \"5c7376d9-9b1b-4472-a0f8-fd78c44a9fdb\") " pod="kube-system/kube-proxy-6fwfz" Nov 1 10:00:47.108254 kubelet[2779]: I1101 10:00:47.107783 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a6fd51db-5ac8-42c6-92c1-b1402f4c1db6-var-lib-calico\") pod \"tigera-operator-7dcd859c48-qmqsd\" (UID: \"a6fd51db-5ac8-42c6-92c1-b1402f4c1db6\") " pod="tigera-operator/tigera-operator-7dcd859c48-qmqsd" Nov 1 10:00:47.108254 kubelet[2779]: I1101 10:00:47.107852 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqk8k\" (UniqueName: \"kubernetes.io/projected/a6fd51db-5ac8-42c6-92c1-b1402f4c1db6-kube-api-access-hqk8k\") pod \"tigera-operator-7dcd859c48-qmqsd\" (UID: \"a6fd51db-5ac8-42c6-92c1-b1402f4c1db6\") " pod="tigera-operator/tigera-operator-7dcd859c48-qmqsd" Nov 1 10:00:47.108254 kubelet[2779]: I1101 10:00:47.107908 2779 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c7376d9-9b1b-4472-a0f8-fd78c44a9fdb-xtables-lock\") pod \"kube-proxy-6fwfz\" (UID: \"5c7376d9-9b1b-4472-a0f8-fd78c44a9fdb\") " pod="kube-system/kube-proxy-6fwfz" Nov 1 10:00:47.108254 kubelet[2779]: I1101 10:00:47.108002 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c7376d9-9b1b-4472-a0f8-fd78c44a9fdb-lib-modules\") pod \"kube-proxy-6fwfz\" (UID: \"5c7376d9-9b1b-4472-a0f8-fd78c44a9fdb\") " pod="kube-system/kube-proxy-6fwfz" Nov 1 10:00:47.360668 containerd[1615]: time="2025-11-01T10:00:47.360507481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-qmqsd,Uid:a6fd51db-5ac8-42c6-92c1-b1402f4c1db6,Namespace:tigera-operator,Attempt:0,}" Nov 1 10:00:47.364875 kubelet[2779]: E1101 10:00:47.364839 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:47.365462 containerd[1615]: time="2025-11-01T10:00:47.365362308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6fwfz,Uid:5c7376d9-9b1b-4472-a0f8-fd78c44a9fdb,Namespace:kube-system,Attempt:0,}" Nov 1 10:00:47.406760 kubelet[2779]: E1101 10:00:47.406712 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:47.417447 containerd[1615]: time="2025-11-01T10:00:47.417203422Z" level=info msg="connecting to shim 6eb54dc3083410b4c34e1f98276d50e98a1664794549b8ac95fbb3b56b2372f4" address="unix:///run/containerd/s/b52211d7152a1e86202d8d1c24ade9e33efe9f6465c25c71c0f3e144b6cef8fa" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:00:47.418082 containerd[1615]: 
time="2025-11-01T10:00:47.418055164Z" level=info msg="connecting to shim 3d99eef78938ac0fa6652f775d3bbf0d764c69d55745705ab75e587fa84083eb" address="unix:///run/containerd/s/d3bd3afd2b929bbcaada3f701e98b100cde757059320d1334a80756e77a3f262" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:00:47.473534 systemd[1]: Started cri-containerd-6eb54dc3083410b4c34e1f98276d50e98a1664794549b8ac95fbb3b56b2372f4.scope - libcontainer container 6eb54dc3083410b4c34e1f98276d50e98a1664794549b8ac95fbb3b56b2372f4. Nov 1 10:00:47.477720 systemd[1]: Started cri-containerd-3d99eef78938ac0fa6652f775d3bbf0d764c69d55745705ab75e587fa84083eb.scope - libcontainer container 3d99eef78938ac0fa6652f775d3bbf0d764c69d55745705ab75e587fa84083eb. Nov 1 10:00:47.514032 containerd[1615]: time="2025-11-01T10:00:47.513918460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6fwfz,Uid:5c7376d9-9b1b-4472-a0f8-fd78c44a9fdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d99eef78938ac0fa6652f775d3bbf0d764c69d55745705ab75e587fa84083eb\"" Nov 1 10:00:47.515861 kubelet[2779]: E1101 10:00:47.515830 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:47.528060 containerd[1615]: time="2025-11-01T10:00:47.528001922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-qmqsd,Uid:a6fd51db-5ac8-42c6-92c1-b1402f4c1db6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6eb54dc3083410b4c34e1f98276d50e98a1664794549b8ac95fbb3b56b2372f4\"" Nov 1 10:00:47.529959 containerd[1615]: time="2025-11-01T10:00:47.529920005Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 10:00:47.530101 containerd[1615]: time="2025-11-01T10:00:47.530069480Z" level=info msg="CreateContainer within sandbox \"3d99eef78938ac0fa6652f775d3bbf0d764c69d55745705ab75e587fa84083eb\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 10:00:47.545530 containerd[1615]: time="2025-11-01T10:00:47.545480721Z" level=info msg="Container 0ef45ffb237f5a70c998b80cac080d0e823f4dbf534479b2aefe61af0d3c327c: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:00:47.565668 containerd[1615]: time="2025-11-01T10:00:47.565613755Z" level=info msg="CreateContainer within sandbox \"3d99eef78938ac0fa6652f775d3bbf0d764c69d55745705ab75e587fa84083eb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0ef45ffb237f5a70c998b80cac080d0e823f4dbf534479b2aefe61af0d3c327c\"" Nov 1 10:00:47.566371 containerd[1615]: time="2025-11-01T10:00:47.566331382Z" level=info msg="StartContainer for \"0ef45ffb237f5a70c998b80cac080d0e823f4dbf534479b2aefe61af0d3c327c\"" Nov 1 10:00:47.567687 containerd[1615]: time="2025-11-01T10:00:47.567648461Z" level=info msg="connecting to shim 0ef45ffb237f5a70c998b80cac080d0e823f4dbf534479b2aefe61af0d3c327c" address="unix:///run/containerd/s/d3bd3afd2b929bbcaada3f701e98b100cde757059320d1334a80756e77a3f262" protocol=ttrpc version=3 Nov 1 10:00:47.592529 systemd[1]: Started cri-containerd-0ef45ffb237f5a70c998b80cac080d0e823f4dbf534479b2aefe61af0d3c327c.scope - libcontainer container 0ef45ffb237f5a70c998b80cac080d0e823f4dbf534479b2aefe61af0d3c327c. Nov 1 10:00:47.721617 containerd[1615]: time="2025-11-01T10:00:47.721490620Z" level=info msg="StartContainer for \"0ef45ffb237f5a70c998b80cac080d0e823f4dbf534479b2aefe61af0d3c327c\" returns successfully" Nov 1 10:00:48.410106 kubelet[2779]: E1101 10:00:48.410046 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:49.321546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount910478958.mount: Deactivated successfully. 
Nov 1 10:00:50.185484 kubelet[2779]: E1101 10:00:50.185405 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:50.203627 kubelet[2779]: I1101 10:00:50.203506 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6fwfz" podStartSLOduration=3.203488095 podStartE2EDuration="3.203488095s" podCreationTimestamp="2025-11-01 10:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:00:48.420275752 +0000 UTC m=+8.137785496" watchObservedRunningTime="2025-11-01 10:00:50.203488095 +0000 UTC m=+9.920997829" Nov 1 10:00:50.401319 containerd[1615]: time="2025-11-01T10:00:50.401259784Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:50.402228 containerd[1615]: time="2025-11-01T10:00:50.402179220Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Nov 1 10:00:50.403334 containerd[1615]: time="2025-11-01T10:00:50.403291443Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:50.405688 containerd[1615]: time="2025-11-01T10:00:50.405656194Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:00:50.406296 containerd[1615]: time="2025-11-01T10:00:50.406252667Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo 
digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.876286084s" Nov 1 10:00:50.406324 containerd[1615]: time="2025-11-01T10:00:50.406294817Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 10:00:50.412667 containerd[1615]: time="2025-11-01T10:00:50.412614061Z" level=info msg="CreateContainer within sandbox \"6eb54dc3083410b4c34e1f98276d50e98a1664794549b8ac95fbb3b56b2372f4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 10:00:50.415451 kubelet[2779]: E1101 10:00:50.415423 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:00:50.427690 containerd[1615]: time="2025-11-01T10:00:50.427629101Z" level=info msg="Container 54b4506091fe52a41bbe28a41251fde2ca5f5951d5186ea523ce2e3057d08b65: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:00:50.435141 containerd[1615]: time="2025-11-01T10:00:50.435093858Z" level=info msg="CreateContainer within sandbox \"6eb54dc3083410b4c34e1f98276d50e98a1664794549b8ac95fbb3b56b2372f4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"54b4506091fe52a41bbe28a41251fde2ca5f5951d5186ea523ce2e3057d08b65\"" Nov 1 10:00:50.435736 containerd[1615]: time="2025-11-01T10:00:50.435655315Z" level=info msg="StartContainer for \"54b4506091fe52a41bbe28a41251fde2ca5f5951d5186ea523ce2e3057d08b65\"" Nov 1 10:00:50.437179 containerd[1615]: time="2025-11-01T10:00:50.437131228Z" level=info msg="connecting to shim 54b4506091fe52a41bbe28a41251fde2ca5f5951d5186ea523ce2e3057d08b65" address="unix:///run/containerd/s/b52211d7152a1e86202d8d1c24ade9e33efe9f6465c25c71c0f3e144b6cef8fa" protocol=ttrpc version=3 Nov 1 10:00:50.466510 systemd[1]: Started 
cri-containerd-54b4506091fe52a41bbe28a41251fde2ca5f5951d5186ea523ce2e3057d08b65.scope - libcontainer container 54b4506091fe52a41bbe28a41251fde2ca5f5951d5186ea523ce2e3057d08b65. Nov 1 10:00:50.501739 containerd[1615]: time="2025-11-01T10:00:50.501688459Z" level=info msg="StartContainer for \"54b4506091fe52a41bbe28a41251fde2ca5f5951d5186ea523ce2e3057d08b65\" returns successfully" Nov 1 10:00:51.181287 update_engine[1591]: I20251101 10:00:51.181175 1591 update_attempter.cc:509] Updating boot flags... Nov 1 10:00:51.427427 kubelet[2779]: I1101 10:00:51.427328 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-qmqsd" podStartSLOduration=2.549560333 podStartE2EDuration="5.427308738s" podCreationTimestamp="2025-11-01 10:00:46 +0000 UTC" firstStartedPulling="2025-11-01 10:00:47.529124039 +0000 UTC m=+7.246633783" lastFinishedPulling="2025-11-01 10:00:50.406872444 +0000 UTC m=+10.124382188" observedRunningTime="2025-11-01 10:00:51.427166708 +0000 UTC m=+11.144676452" watchObservedRunningTime="2025-11-01 10:00:51.427308738 +0000 UTC m=+11.144818482" Nov 1 10:00:56.470203 sudo[1822]: pam_unix(sudo:session): session closed for user root Nov 1 10:00:56.471958 sshd[1821]: Connection closed by 10.0.0.1 port 37372 Nov 1 10:00:56.474566 sshd-session[1818]: pam_unix(sshd:session): session closed for user core Nov 1 10:00:56.478894 systemd[1]: sshd@6-10.0.0.25:22-10.0.0.1:37372.service: Deactivated successfully. Nov 1 10:00:56.481844 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 10:00:56.482161 systemd[1]: session-7.scope: Consumed 5.278s CPU time, 215.8M memory peak. Nov 1 10:00:56.483818 systemd-logind[1588]: Session 7 logged out. Waiting for processes to exit. Nov 1 10:00:56.485495 systemd-logind[1588]: Removed session 7. 
Nov 1 10:01:00.533195 systemd[1]: Created slice kubepods-besteffort-pod980e14f3_a01c_416d_8489_1772f809349c.slice - libcontainer container kubepods-besteffort-pod980e14f3_a01c_416d_8489_1772f809349c.slice. Nov 1 10:01:00.593416 kubelet[2779]: I1101 10:01:00.593350 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/980e14f3-a01c-416d-8489-1772f809349c-tigera-ca-bundle\") pod \"calico-typha-dbf6bdc4f-t665r\" (UID: \"980e14f3-a01c-416d-8489-1772f809349c\") " pod="calico-system/calico-typha-dbf6bdc4f-t665r" Nov 1 10:01:00.593416 kubelet[2779]: I1101 10:01:00.593408 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/980e14f3-a01c-416d-8489-1772f809349c-typha-certs\") pod \"calico-typha-dbf6bdc4f-t665r\" (UID: \"980e14f3-a01c-416d-8489-1772f809349c\") " pod="calico-system/calico-typha-dbf6bdc4f-t665r" Nov 1 10:01:00.593416 kubelet[2779]: I1101 10:01:00.593431 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwx88\" (UniqueName: \"kubernetes.io/projected/980e14f3-a01c-416d-8489-1772f809349c-kube-api-access-zwx88\") pod \"calico-typha-dbf6bdc4f-t665r\" (UID: \"980e14f3-a01c-416d-8489-1772f809349c\") " pod="calico-system/calico-typha-dbf6bdc4f-t665r" Nov 1 10:01:00.732939 systemd[1]: Created slice kubepods-besteffort-podf4f5791a_ddc9_4efb_a8d9_2e7486816fa0.slice - libcontainer container kubepods-besteffort-podf4f5791a_ddc9_4efb_a8d9_2e7486816fa0.slice. 
Nov 1 10:01:00.794910 kubelet[2779]: I1101 10:01:00.794496 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f4f5791a-ddc9-4efb-a8d9-2e7486816fa0-cni-log-dir\") pod \"calico-node-d6gqr\" (UID: \"f4f5791a-ddc9-4efb-a8d9-2e7486816fa0\") " pod="calico-system/calico-node-d6gqr" Nov 1 10:01:00.794910 kubelet[2779]: I1101 10:01:00.794538 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f4f5791a-ddc9-4efb-a8d9-2e7486816fa0-flexvol-driver-host\") pod \"calico-node-d6gqr\" (UID: \"f4f5791a-ddc9-4efb-a8d9-2e7486816fa0\") " pod="calico-system/calico-node-d6gqr" Nov 1 10:01:00.794910 kubelet[2779]: I1101 10:01:00.794565 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f4f5791a-ddc9-4efb-a8d9-2e7486816fa0-node-certs\") pod \"calico-node-d6gqr\" (UID: \"f4f5791a-ddc9-4efb-a8d9-2e7486816fa0\") " pod="calico-system/calico-node-d6gqr" Nov 1 10:01:00.794910 kubelet[2779]: I1101 10:01:00.794581 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f4f5791a-ddc9-4efb-a8d9-2e7486816fa0-var-run-calico\") pod \"calico-node-d6gqr\" (UID: \"f4f5791a-ddc9-4efb-a8d9-2e7486816fa0\") " pod="calico-system/calico-node-d6gqr" Nov 1 10:01:00.794910 kubelet[2779]: I1101 10:01:00.794602 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f4f5791a-ddc9-4efb-a8d9-2e7486816fa0-cni-net-dir\") pod \"calico-node-d6gqr\" (UID: \"f4f5791a-ddc9-4efb-a8d9-2e7486816fa0\") " pod="calico-system/calico-node-d6gqr" Nov 1 10:01:00.795147 kubelet[2779]: I1101 10:01:00.794658 2779 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4f5791a-ddc9-4efb-a8d9-2e7486816fa0-tigera-ca-bundle\") pod \"calico-node-d6gqr\" (UID: \"f4f5791a-ddc9-4efb-a8d9-2e7486816fa0\") " pod="calico-system/calico-node-d6gqr" Nov 1 10:01:00.795147 kubelet[2779]: I1101 10:01:00.794715 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66wpd\" (UniqueName: \"kubernetes.io/projected/f4f5791a-ddc9-4efb-a8d9-2e7486816fa0-kube-api-access-66wpd\") pod \"calico-node-d6gqr\" (UID: \"f4f5791a-ddc9-4efb-a8d9-2e7486816fa0\") " pod="calico-system/calico-node-d6gqr" Nov 1 10:01:00.795147 kubelet[2779]: I1101 10:01:00.794744 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4f5791a-ddc9-4efb-a8d9-2e7486816fa0-lib-modules\") pod \"calico-node-d6gqr\" (UID: \"f4f5791a-ddc9-4efb-a8d9-2e7486816fa0\") " pod="calico-system/calico-node-d6gqr" Nov 1 10:01:00.795147 kubelet[2779]: I1101 10:01:00.794761 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4f5791a-ddc9-4efb-a8d9-2e7486816fa0-xtables-lock\") pod \"calico-node-d6gqr\" (UID: \"f4f5791a-ddc9-4efb-a8d9-2e7486816fa0\") " pod="calico-system/calico-node-d6gqr" Nov 1 10:01:00.795147 kubelet[2779]: I1101 10:01:00.794782 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f4f5791a-ddc9-4efb-a8d9-2e7486816fa0-cni-bin-dir\") pod \"calico-node-d6gqr\" (UID: \"f4f5791a-ddc9-4efb-a8d9-2e7486816fa0\") " pod="calico-system/calico-node-d6gqr" Nov 1 10:01:00.795261 kubelet[2779]: I1101 10:01:00.794795 2779 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f4f5791a-ddc9-4efb-a8d9-2e7486816fa0-var-lib-calico\") pod \"calico-node-d6gqr\" (UID: \"f4f5791a-ddc9-4efb-a8d9-2e7486816fa0\") " pod="calico-system/calico-node-d6gqr" Nov 1 10:01:00.795261 kubelet[2779]: I1101 10:01:00.794811 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f4f5791a-ddc9-4efb-a8d9-2e7486816fa0-policysync\") pod \"calico-node-d6gqr\" (UID: \"f4f5791a-ddc9-4efb-a8d9-2e7486816fa0\") " pod="calico-system/calico-node-d6gqr" Nov 1 10:01:00.840005 kubelet[2779]: E1101 10:01:00.839954 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:00.840744 containerd[1615]: time="2025-11-01T10:01:00.840696674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dbf6bdc4f-t665r,Uid:980e14f3-a01c-416d-8489-1772f809349c,Namespace:calico-system,Attempt:0,}" Nov 1 10:01:00.865754 containerd[1615]: time="2025-11-01T10:01:00.865703005Z" level=info msg="connecting to shim a4675b52d51bf1d1b5122309266ca884f0fc25100c5e45df7b9b8b36447a4514" address="unix:///run/containerd/s/0282dff701af99fca77e9845263014aa052acb5714736188277d3273aa06903d" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:01:00.901908 kubelet[2779]: E1101 10:01:00.901849 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.902517 kubelet[2779]: W1101 10:01:00.902158 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.902516 systemd[1]: Started 
cri-containerd-a4675b52d51bf1d1b5122309266ca884f0fc25100c5e45df7b9b8b36447a4514.scope - libcontainer container a4675b52d51bf1d1b5122309266ca884f0fc25100c5e45df7b9b8b36447a4514. Nov 1 10:01:00.904675 kubelet[2779]: E1101 10:01:00.904432 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:00.909538 kubelet[2779]: E1101 10:01:00.909497 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.909538 kubelet[2779]: W1101 10:01:00.909527 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.909833 kubelet[2779]: E1101 10:01:00.909552 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:00.917045 kubelet[2779]: E1101 10:01:00.916779 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgvv4" podUID="c846e0de-56ff-40b3-829b-1fda67e4a78f" Nov 1 10:01:00.920843 kubelet[2779]: E1101 10:01:00.920776 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.920843 kubelet[2779]: W1101 10:01:00.920795 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.920843 kubelet[2779]: E1101 10:01:00.920813 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:00.968876 containerd[1615]: time="2025-11-01T10:01:00.968828282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dbf6bdc4f-t665r,Uid:980e14f3-a01c-416d-8489-1772f809349c,Namespace:calico-system,Attempt:0,} returns sandbox id \"a4675b52d51bf1d1b5122309266ca884f0fc25100c5e45df7b9b8b36447a4514\"" Nov 1 10:01:00.969712 kubelet[2779]: E1101 10:01:00.969683 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:00.971843 containerd[1615]: time="2025-11-01T10:01:00.971801996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 10:01:00.982639 kubelet[2779]: E1101 10:01:00.982590 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.982762 kubelet[2779]: W1101 10:01:00.982664 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.982762 kubelet[2779]: E1101 10:01:00.982695 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:00.983190 kubelet[2779]: E1101 10:01:00.983171 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.983190 kubelet[2779]: W1101 10:01:00.983184 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.983190 kubelet[2779]: E1101 10:01:00.983195 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:00.984483 kubelet[2779]: E1101 10:01:00.984461 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.984483 kubelet[2779]: W1101 10:01:00.984473 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.984483 kubelet[2779]: E1101 10:01:00.984484 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:00.984815 kubelet[2779]: E1101 10:01:00.984788 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.984815 kubelet[2779]: W1101 10:01:00.984805 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.984815 kubelet[2779]: E1101 10:01:00.984815 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:00.985080 kubelet[2779]: E1101 10:01:00.985032 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.985080 kubelet[2779]: W1101 10:01:00.985041 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.985080 kubelet[2779]: E1101 10:01:00.985051 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:00.985271 kubelet[2779]: E1101 10:01:00.985242 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.985271 kubelet[2779]: W1101 10:01:00.985255 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.985271 kubelet[2779]: E1101 10:01:00.985266 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:00.986240 kubelet[2779]: E1101 10:01:00.986214 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.986240 kubelet[2779]: W1101 10:01:00.986228 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.986240 kubelet[2779]: E1101 10:01:00.986238 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:00.986529 kubelet[2779]: E1101 10:01:00.986492 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.986529 kubelet[2779]: W1101 10:01:00.986505 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.986529 kubelet[2779]: E1101 10:01:00.986516 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:00.986735 kubelet[2779]: E1101 10:01:00.986716 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.987141 kubelet[2779]: W1101 10:01:00.987077 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.987141 kubelet[2779]: E1101 10:01:00.987092 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:00.987321 kubelet[2779]: E1101 10:01:00.987299 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.987321 kubelet[2779]: W1101 10:01:00.987310 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.987321 kubelet[2779]: E1101 10:01:00.987318 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:00.987522 kubelet[2779]: E1101 10:01:00.987502 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.987522 kubelet[2779]: W1101 10:01:00.987513 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.987522 kubelet[2779]: E1101 10:01:00.987521 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:00.987703 kubelet[2779]: E1101 10:01:00.987685 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.987703 kubelet[2779]: W1101 10:01:00.987696 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.987703 kubelet[2779]: E1101 10:01:00.987705 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:00.987890 kubelet[2779]: E1101 10:01:00.987869 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.987890 kubelet[2779]: W1101 10:01:00.987880 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.987890 kubelet[2779]: E1101 10:01:00.987888 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:00.988421 kubelet[2779]: E1101 10:01:00.988134 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.988697 kubelet[2779]: W1101 10:01:00.988666 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.988697 kubelet[2779]: E1101 10:01:00.988687 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:00.989015 kubelet[2779]: E1101 10:01:00.988886 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.989015 kubelet[2779]: W1101 10:01:00.988901 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.989015 kubelet[2779]: E1101 10:01:00.988910 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:00.989230 kubelet[2779]: E1101 10:01:00.989127 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.989230 kubelet[2779]: W1101 10:01:00.989136 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.989230 kubelet[2779]: E1101 10:01:00.989145 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:00.989377 kubelet[2779]: E1101 10:01:00.989351 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.989377 kubelet[2779]: W1101 10:01:00.989363 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.989377 kubelet[2779]: E1101 10:01:00.989372 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:00.989608 kubelet[2779]: E1101 10:01:00.989574 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.989608 kubelet[2779]: W1101 10:01:00.989587 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.989608 kubelet[2779]: E1101 10:01:00.989595 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:00.989822 kubelet[2779]: E1101 10:01:00.989745 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.989822 kubelet[2779]: W1101 10:01:00.989752 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.989822 kubelet[2779]: E1101 10:01:00.989760 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:00.989998 kubelet[2779]: E1101 10:01:00.989911 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.989998 kubelet[2779]: W1101 10:01:00.989919 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.989998 kubelet[2779]: E1101 10:01:00.989927 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:00.996566 kubelet[2779]: E1101 10:01:00.996482 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.996566 kubelet[2779]: W1101 10:01:00.996502 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.996566 kubelet[2779]: E1101 10:01:00.996514 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:00.996566 kubelet[2779]: I1101 10:01:00.996542 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c846e0de-56ff-40b3-829b-1fda67e4a78f-varrun\") pod \"csi-node-driver-fgvv4\" (UID: \"c846e0de-56ff-40b3-829b-1fda67e4a78f\") " pod="calico-system/csi-node-driver-fgvv4" Nov 1 10:01:00.997679 kubelet[2779]: E1101 10:01:00.997619 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.997679 kubelet[2779]: W1101 10:01:00.997674 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.997769 kubelet[2779]: E1101 10:01:00.997706 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:00.997793 kubelet[2779]: I1101 10:01:00.997771 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c846e0de-56ff-40b3-829b-1fda67e4a78f-registration-dir\") pod \"csi-node-driver-fgvv4\" (UID: \"c846e0de-56ff-40b3-829b-1fda67e4a78f\") " pod="calico-system/csi-node-driver-fgvv4" Nov 1 10:01:00.998132 kubelet[2779]: E1101 10:01:00.998092 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.998132 kubelet[2779]: W1101 10:01:00.998115 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.998132 kubelet[2779]: E1101 10:01:00.998126 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:00.998242 kubelet[2779]: I1101 10:01:00.998218 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c846e0de-56ff-40b3-829b-1fda67e4a78f-socket-dir\") pod \"csi-node-driver-fgvv4\" (UID: \"c846e0de-56ff-40b3-829b-1fda67e4a78f\") " pod="calico-system/csi-node-driver-fgvv4" Nov 1 10:01:00.998474 kubelet[2779]: E1101 10:01:00.998449 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:00.998474 kubelet[2779]: W1101 10:01:00.998464 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:00.998474 kubelet[2779]: E1101 10:01:00.998473 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.000326 kubelet[2779]: E1101 10:01:01.000304 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.000326 kubelet[2779]: W1101 10:01:01.000318 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.000326 kubelet[2779]: E1101 10:01:01.000328 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.000648 kubelet[2779]: E1101 10:01:01.000624 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.000648 kubelet[2779]: W1101 10:01:01.000643 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.000718 kubelet[2779]: E1101 10:01:01.000656 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.000718 kubelet[2779]: I1101 10:01:01.000682 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c846e0de-56ff-40b3-829b-1fda67e4a78f-kubelet-dir\") pod \"csi-node-driver-fgvv4\" (UID: \"c846e0de-56ff-40b3-829b-1fda67e4a78f\") " pod="calico-system/csi-node-driver-fgvv4" Nov 1 10:01:01.000932 kubelet[2779]: E1101 10:01:01.000903 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.000932 kubelet[2779]: W1101 10:01:01.000919 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.000932 kubelet[2779]: E1101 10:01:01.000929 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.001200 kubelet[2779]: E1101 10:01:01.001134 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.001200 kubelet[2779]: W1101 10:01:01.001148 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.001200 kubelet[2779]: E1101 10:01:01.001157 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.001380 kubelet[2779]: E1101 10:01:01.001362 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.001380 kubelet[2779]: W1101 10:01:01.001375 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.001585 kubelet[2779]: E1101 10:01:01.001413 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.001585 kubelet[2779]: I1101 10:01:01.001434 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2r8p\" (UniqueName: \"kubernetes.io/projected/c846e0de-56ff-40b3-829b-1fda67e4a78f-kube-api-access-x2r8p\") pod \"csi-node-driver-fgvv4\" (UID: \"c846e0de-56ff-40b3-829b-1fda67e4a78f\") " pod="calico-system/csi-node-driver-fgvv4" Nov 1 10:01:01.002022 kubelet[2779]: E1101 10:01:01.001708 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.002022 kubelet[2779]: W1101 10:01:01.001726 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.002022 kubelet[2779]: E1101 10:01:01.001738 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.002441 kubelet[2779]: E1101 10:01:01.002422 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.002441 kubelet[2779]: W1101 10:01:01.002439 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.002505 kubelet[2779]: E1101 10:01:01.002450 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.003081 kubelet[2779]: E1101 10:01:01.002892 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.003081 kubelet[2779]: W1101 10:01:01.002911 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.003081 kubelet[2779]: E1101 10:01:01.002922 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.003281 kubelet[2779]: E1101 10:01:01.003253 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.003281 kubelet[2779]: W1101 10:01:01.003269 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.003281 kubelet[2779]: E1101 10:01:01.003279 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.003755 kubelet[2779]: E1101 10:01:01.003730 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.003755 kubelet[2779]: W1101 10:01:01.003742 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.003755 kubelet[2779]: E1101 10:01:01.003751 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.004231 kubelet[2779]: E1101 10:01:01.004212 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.004231 kubelet[2779]: W1101 10:01:01.004226 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.004308 kubelet[2779]: E1101 10:01:01.004236 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.036489 kubelet[2779]: E1101 10:01:01.036422 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:01.037149 containerd[1615]: time="2025-11-01T10:01:01.037085161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d6gqr,Uid:f4f5791a-ddc9-4efb-a8d9-2e7486816fa0,Namespace:calico-system,Attempt:0,}" Nov 1 10:01:01.057270 containerd[1615]: time="2025-11-01T10:01:01.057148664Z" level=info msg="connecting to shim 77e85c4c8ab9b65846d0f1d484d14b4754718a312706659f627027f9421fe4a3" address="unix:///run/containerd/s/10de382cfab7ef60bce45682a8421cd44daaf3016aa54d69ebbf13cfafed6e40" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:01:01.084524 systemd[1]: Started cri-containerd-77e85c4c8ab9b65846d0f1d484d14b4754718a312706659f627027f9421fe4a3.scope - libcontainer container 77e85c4c8ab9b65846d0f1d484d14b4754718a312706659f627027f9421fe4a3. Nov 1 10:01:01.103364 kubelet[2779]: E1101 10:01:01.103294 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.103364 kubelet[2779]: W1101 10:01:01.103315 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.103364 kubelet[2779]: E1101 10:01:01.103336 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.103890 kubelet[2779]: E1101 10:01:01.103834 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.103890 kubelet[2779]: W1101 10:01:01.103846 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.103890 kubelet[2779]: E1101 10:01:01.103856 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.104503 kubelet[2779]: E1101 10:01:01.104429 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.104503 kubelet[2779]: W1101 10:01:01.104442 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.104503 kubelet[2779]: E1101 10:01:01.104451 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.104820 kubelet[2779]: E1101 10:01:01.104732 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.104820 kubelet[2779]: W1101 10:01:01.104745 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.104820 kubelet[2779]: E1101 10:01:01.104775 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.104998 kubelet[2779]: E1101 10:01:01.104982 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.104998 kubelet[2779]: W1101 10:01:01.104995 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.105132 kubelet[2779]: E1101 10:01:01.105006 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.105542 kubelet[2779]: E1101 10:01:01.105215 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.105542 kubelet[2779]: W1101 10:01:01.105226 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.105542 kubelet[2779]: E1101 10:01:01.105235 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.105542 kubelet[2779]: E1101 10:01:01.105455 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.105542 kubelet[2779]: W1101 10:01:01.105465 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.105542 kubelet[2779]: E1101 10:01:01.105475 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.105688 kubelet[2779]: E1101 10:01:01.105670 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.105688 kubelet[2779]: W1101 10:01:01.105680 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.105727 kubelet[2779]: E1101 10:01:01.105693 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.106082 kubelet[2779]: E1101 10:01:01.106001 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.106082 kubelet[2779]: W1101 10:01:01.106028 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.106082 kubelet[2779]: E1101 10:01:01.106054 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.106416 kubelet[2779]: E1101 10:01:01.106327 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.106416 kubelet[2779]: W1101 10:01:01.106336 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.106416 kubelet[2779]: E1101 10:01:01.106346 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.106595 kubelet[2779]: E1101 10:01:01.106574 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.106595 kubelet[2779]: W1101 10:01:01.106589 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.106662 kubelet[2779]: E1101 10:01:01.106598 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.107062 kubelet[2779]: E1101 10:01:01.107048 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.107202 kubelet[2779]: W1101 10:01:01.107190 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.107312 kubelet[2779]: E1101 10:01:01.107260 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.107858 kubelet[2779]: E1101 10:01:01.107846 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.107945 kubelet[2779]: W1101 10:01:01.107918 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.107945 kubelet[2779]: E1101 10:01:01.107933 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.109455 kubelet[2779]: E1101 10:01:01.109379 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.109455 kubelet[2779]: W1101 10:01:01.109410 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.109455 kubelet[2779]: E1101 10:01:01.109421 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.109822 kubelet[2779]: E1101 10:01:01.109810 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.109940 kubelet[2779]: W1101 10:01:01.109879 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.109940 kubelet[2779]: E1101 10:01:01.109893 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.110339 kubelet[2779]: E1101 10:01:01.110271 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.110449 kubelet[2779]: W1101 10:01:01.110294 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.110449 kubelet[2779]: E1101 10:01:01.110405 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.110755 kubelet[2779]: E1101 10:01:01.110722 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.110755 kubelet[2779]: W1101 10:01:01.110733 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.110755 kubelet[2779]: E1101 10:01:01.110743 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.111113 kubelet[2779]: E1101 10:01:01.111069 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.111113 kubelet[2779]: W1101 10:01:01.111080 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.111113 kubelet[2779]: E1101 10:01:01.111090 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.111450 kubelet[2779]: E1101 10:01:01.111417 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.111450 kubelet[2779]: W1101 10:01:01.111428 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.111450 kubelet[2779]: E1101 10:01:01.111438 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.112029 kubelet[2779]: E1101 10:01:01.111902 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.112029 kubelet[2779]: W1101 10:01:01.111913 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.112029 kubelet[2779]: E1101 10:01:01.111923 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.112293 kubelet[2779]: E1101 10:01:01.112201 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.112293 kubelet[2779]: W1101 10:01:01.112212 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.112293 kubelet[2779]: E1101 10:01:01.112223 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.112618 kubelet[2779]: E1101 10:01:01.112606 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.113082 kubelet[2779]: W1101 10:01:01.112674 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.113082 kubelet[2779]: E1101 10:01:01.112687 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.113216 kubelet[2779]: E1101 10:01:01.113204 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.113278 kubelet[2779]: W1101 10:01:01.113266 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.113338 kubelet[2779]: E1101 10:01:01.113327 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:01.114180 kubelet[2779]: E1101 10:01:01.114112 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.114180 kubelet[2779]: W1101 10:01:01.114125 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.114180 kubelet[2779]: E1101 10:01:01.114136 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:01.114579 kubelet[2779]: E1101 10:01:01.114566 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:01.114658 kubelet[2779]: W1101 10:01:01.114645 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:01.114710 kubelet[2779]: E1101 10:01:01.114699 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" 
Nov 1 10:01:01.115377 containerd[1615]: time="2025-11-01T10:01:01.115348868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d6gqr,Uid:f4f5791a-ddc9-4efb-a8d9-2e7486816fa0,Namespace:calico-system,Attempt:0,} returns sandbox id \"77e85c4c8ab9b65846d0f1d484d14b4754718a312706659f627027f9421fe4a3\"" 
Nov 1 10:01:01.116093 kubelet[2779]: E1101 10:01:01.116078 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Nov 1 10:01:01.122254 kubelet[2779]: E1101 10:01:01.122217 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Nov 1 10:01:01.122254 kubelet[2779]: W1101 10:01:01.122235 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Nov 1 10:01:01.122254 kubelet[2779]: E1101 10:01:01.122255 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Nov 1 10:01:02.385708 kubelet[2779]: E1101 10:01:02.385639 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgvv4" podUID="c846e0de-56ff-40b3-829b-1fda67e4a78f" 
Nov 1 10:01:03.121129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount117729252.mount: Deactivated successfully. 
Nov 1 10:01:04.385482 kubelet[2779]: E1101 10:01:04.385415 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgvv4" podUID="c846e0de-56ff-40b3-829b-1fda67e4a78f" 
Nov 1 10:01:05.228509 containerd[1615]: time="2025-11-01T10:01:05.228446899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Nov 1 10:01:05.260742 containerd[1615]: time="2025-11-01T10:01:05.260687566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33738263" 
Nov 1 10:01:05.281750 containerd[1615]: time="2025-11-01T10:01:05.281703310Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Nov 1 10:01:05.304942 containerd[1615]: time="2025-11-01T10:01:05.304908931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Nov 1 10:01:05.305433 containerd[1615]: time="2025-11-01T10:01:05.305375190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.333526355s" 
Nov 1 10:01:05.305477 containerd[1615]: time="2025-11-01T10:01:05.305435424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" 
Nov 1 10:01:05.306802 containerd[1615]: time="2025-11-01T10:01:05.306774006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" 
Nov 1 10:01:05.319490 containerd[1615]: time="2025-11-01T10:01:05.319440496Z" level=info msg="CreateContainer within sandbox \"a4675b52d51bf1d1b5122309266ca884f0fc25100c5e45df7b9b8b36447a4514\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" 
Nov 1 10:01:05.330697 containerd[1615]: time="2025-11-01T10:01:05.329714016Z" level=info msg="Container 968d43069b4a05841b5557597231c2c473a8b7101ef42b9da1b3963422480f37: CDI devices from CRI Config.CDIDevices: []" 
Nov 1 10:01:05.340563 containerd[1615]: time="2025-11-01T10:01:05.340516904Z" level=info msg="CreateContainer within sandbox \"a4675b52d51bf1d1b5122309266ca884f0fc25100c5e45df7b9b8b36447a4514\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"968d43069b4a05841b5557597231c2c473a8b7101ef42b9da1b3963422480f37\"" 
Nov 1 10:01:05.341147 containerd[1615]: time="2025-11-01T10:01:05.341104461Z" level=info msg="StartContainer for \"968d43069b4a05841b5557597231c2c473a8b7101ef42b9da1b3963422480f37\"" 
Nov 1 10:01:05.342422 containerd[1615]: time="2025-11-01T10:01:05.342349998Z" level=info msg="connecting to shim 968d43069b4a05841b5557597231c2c473a8b7101ef42b9da1b3963422480f37" address="unix:///run/containerd/s/0282dff701af99fca77e9845263014aa052acb5714736188277d3273aa06903d" protocol=ttrpc version=3 
Nov 1 10:01:05.371662 systemd[1]: Started cri-containerd-968d43069b4a05841b5557597231c2c473a8b7101ef42b9da1b3963422480f37.scope - libcontainer container 968d43069b4a05841b5557597231c2c473a8b7101ef42b9da1b3963422480f37. 
Nov 1 10:01:05.431447 containerd[1615]: time="2025-11-01T10:01:05.431403689Z" level=info msg="StartContainer for \"968d43069b4a05841b5557597231c2c473a8b7101ef42b9da1b3963422480f37\" returns successfully" 
Nov 1 10:01:05.451405 kubelet[2779]: E1101 10:01:05.451318 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Nov 1 10:01:05.523630 kubelet[2779]: E1101 10:01:05.522522 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Nov 1 10:01:05.523630 kubelet[2779]: W1101 10:01:05.523440 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Nov 1 10:01:05.523630 kubelet[2779]: E1101 10:01:05.523487 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Nov 1 10:01:05.524339 kubelet[2779]: E1101 10:01:05.524319 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Nov 1 10:01:05.524339 kubelet[2779]: W1101 10:01:05.524336 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Nov 1 10:01:05.524450 kubelet[2779]: E1101 10:01:05.524348 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:05.524702 kubelet[2779]: E1101 10:01:05.524676 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.524867 kubelet[2779]: W1101 10:01:05.524755 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.524867 kubelet[2779]: E1101 10:01:05.524789 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:05.525212 kubelet[2779]: E1101 10:01:05.525200 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.525342 kubelet[2779]: W1101 10:01:05.525268 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.525342 kubelet[2779]: E1101 10:01:05.525281 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:05.525558 kubelet[2779]: E1101 10:01:05.525547 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.525714 kubelet[2779]: W1101 10:01:05.525633 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.525714 kubelet[2779]: E1101 10:01:05.525651 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:05.526158 kubelet[2779]: E1101 10:01:05.526082 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.526309 kubelet[2779]: W1101 10:01:05.526103 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.526309 kubelet[2779]: E1101 10:01:05.526226 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:05.527968 kubelet[2779]: E1101 10:01:05.527903 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.527968 kubelet[2779]: W1101 10:01:05.527916 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.527968 kubelet[2779]: E1101 10:01:05.527927 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:05.528337 kubelet[2779]: E1101 10:01:05.528269 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.528337 kubelet[2779]: W1101 10:01:05.528280 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.528337 kubelet[2779]: E1101 10:01:05.528290 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:05.528682 kubelet[2779]: E1101 10:01:05.528669 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.528825 kubelet[2779]: W1101 10:01:05.528730 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.528825 kubelet[2779]: E1101 10:01:05.528746 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:05.528961 kubelet[2779]: E1101 10:01:05.528950 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.529018 kubelet[2779]: W1101 10:01:05.529007 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.529093 kubelet[2779]: E1101 10:01:05.529082 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:05.529359 kubelet[2779]: E1101 10:01:05.529301 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.529359 kubelet[2779]: W1101 10:01:05.529311 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.529359 kubelet[2779]: E1101 10:01:05.529324 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:05.529772 kubelet[2779]: E1101 10:01:05.529661 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.529772 kubelet[2779]: W1101 10:01:05.529673 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.529772 kubelet[2779]: E1101 10:01:05.529682 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:05.529944 kubelet[2779]: E1101 10:01:05.529932 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.530002 kubelet[2779]: W1101 10:01:05.529991 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.530074 kubelet[2779]: E1101 10:01:05.530062 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:05.530312 kubelet[2779]: E1101 10:01:05.530300 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.530496 kubelet[2779]: W1101 10:01:05.530371 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.530496 kubelet[2779]: E1101 10:01:05.530404 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:05.530642 kubelet[2779]: E1101 10:01:05.530630 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.530696 kubelet[2779]: W1101 10:01:05.530685 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.530764 kubelet[2779]: E1101 10:01:05.530752 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:05.536404 kubelet[2779]: E1101 10:01:05.536340 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.536791 kubelet[2779]: W1101 10:01:05.536691 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.536791 kubelet[2779]: E1101 10:01:05.536717 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:05.537285 kubelet[2779]: E1101 10:01:05.537235 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.537285 kubelet[2779]: W1101 10:01:05.537248 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.537444 kubelet[2779]: E1101 10:01:05.537429 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:05.539217 kubelet[2779]: E1101 10:01:05.539186 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.539217 kubelet[2779]: W1101 10:01:05.539214 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.539298 kubelet[2779]: E1101 10:01:05.539240 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:05.539550 kubelet[2779]: E1101 10:01:05.539529 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.539550 kubelet[2779]: W1101 10:01:05.539545 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.539550 kubelet[2779]: E1101 10:01:05.539555 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:05.539775 kubelet[2779]: E1101 10:01:05.539756 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.539775 kubelet[2779]: W1101 10:01:05.539768 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.539775 kubelet[2779]: E1101 10:01:05.539777 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:05.539970 kubelet[2779]: E1101 10:01:05.539950 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.539970 kubelet[2779]: W1101 10:01:05.539965 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.539970 kubelet[2779]: E1101 10:01:05.539973 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:05.540686 kubelet[2779]: E1101 10:01:05.540131 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.540686 kubelet[2779]: W1101 10:01:05.540140 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.540686 kubelet[2779]: E1101 10:01:05.540147 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:05.540686 kubelet[2779]: E1101 10:01:05.540355 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.540686 kubelet[2779]: W1101 10:01:05.540362 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.540686 kubelet[2779]: E1101 10:01:05.540372 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:05.540998 kubelet[2779]: E1101 10:01:05.540981 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.540998 kubelet[2779]: W1101 10:01:05.540993 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.540998 kubelet[2779]: E1101 10:01:05.541001 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:05.541225 kubelet[2779]: E1101 10:01:05.541212 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.541225 kubelet[2779]: W1101 10:01:05.541221 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.541321 kubelet[2779]: E1101 10:01:05.541230 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:05.541442 kubelet[2779]: E1101 10:01:05.541425 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.541442 kubelet[2779]: W1101 10:01:05.541433 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.541442 kubelet[2779]: E1101 10:01:05.541441 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:05.544061 kubelet[2779]: E1101 10:01:05.544040 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:05.544061 kubelet[2779]: W1101 10:01:05.544053 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:05.544061 kubelet[2779]: E1101 10:01:05.544064 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:06.385295 kubelet[2779]: E1101 10:01:06.385193 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgvv4" podUID="c846e0de-56ff-40b3-829b-1fda67e4a78f" Nov 1 10:01:06.452215 kubelet[2779]: I1101 10:01:06.452168 2779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 10:01:06.452681 kubelet[2779]: E1101 10:01:06.452614 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:06.536613 kubelet[2779]: E1101 10:01:06.536571 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:06.536613 kubelet[2779]: W1101 10:01:06.536591 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:06.536613 kubelet[2779]: E1101 10:01:06.536612 2779 plugins.go:703] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:06.536925 kubelet[2779]: E1101 10:01:06.536896 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:06.536925 kubelet[2779]: W1101 10:01:06.536908 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:06.536925 kubelet[2779]: E1101 10:01:06.536920 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:01:06.537245 kubelet[2779]: E1101 10:01:06.537226 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:01:06.537245 kubelet[2779]: W1101 10:01:06.537240 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:01:06.537355 kubelet[2779]: E1101 10:01:06.537252 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:01:06.938007 containerd[1615]: time="2025-11-01T10:01:06.937941272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:01:06.938626 containerd[1615]: time="2025-11-01T10:01:06.938594393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Nov 1 10:01:06.939735 containerd[1615]: time="2025-11-01T10:01:06.939675720Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:01:06.941328 containerd[1615]: time="2025-11-01T10:01:06.941290632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:01:06.941819 containerd[1615]: time="2025-11-01T10:01:06.941784753Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.634978035s" Nov 1 10:01:06.941868 containerd[1615]: time="2025-11-01T10:01:06.941816743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 10:01:06.945187 containerd[1615]: time="2025-11-01T10:01:06.945151195Z" level=info msg="CreateContainer within sandbox \"77e85c4c8ab9b65846d0f1d484d14b4754718a312706659f627027f9421fe4a3\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 10:01:06.953904 containerd[1615]: time="2025-11-01T10:01:06.953553975Z" level=info msg="Container 4676bde5d046743fe261c128fe6fe74efbdc742fe72a55377c48f3d293515c17: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:01:06.962032 containerd[1615]: time="2025-11-01T10:01:06.961981682Z" level=info msg="CreateContainer within sandbox \"77e85c4c8ab9b65846d0f1d484d14b4754718a312706659f627027f9421fe4a3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4676bde5d046743fe261c128fe6fe74efbdc742fe72a55377c48f3d293515c17\"" Nov 1 10:01:06.962450 containerd[1615]: time="2025-11-01T10:01:06.962406683Z" level=info msg="StartContainer for \"4676bde5d046743fe261c128fe6fe74efbdc742fe72a55377c48f3d293515c17\"" Nov 1 10:01:06.963699 containerd[1615]: time="2025-11-01T10:01:06.963674361Z" level=info msg="connecting to shim 4676bde5d046743fe261c128fe6fe74efbdc742fe72a55377c48f3d293515c17" address="unix:///run/containerd/s/10de382cfab7ef60bce45682a8421cd44daaf3016aa54d69ebbf13cfafed6e40" protocol=ttrpc version=3 Nov 1 10:01:06.986514 systemd[1]: Started cri-containerd-4676bde5d046743fe261c128fe6fe74efbdc742fe72a55377c48f3d293515c17.scope - libcontainer container 4676bde5d046743fe261c128fe6fe74efbdc742fe72a55377c48f3d293515c17. Nov 1 10:01:07.030271 containerd[1615]: time="2025-11-01T10:01:07.030230503Z" level=info msg="StartContainer for \"4676bde5d046743fe261c128fe6fe74efbdc742fe72a55377c48f3d293515c17\" returns successfully" Nov 1 10:01:07.040437 systemd[1]: cri-containerd-4676bde5d046743fe261c128fe6fe74efbdc742fe72a55377c48f3d293515c17.scope: Deactivated successfully. 
Nov 1 10:01:07.042242 containerd[1615]: time="2025-11-01T10:01:07.042217528Z" level=info msg="received exit event container_id:\"4676bde5d046743fe261c128fe6fe74efbdc742fe72a55377c48f3d293515c17\" id:\"4676bde5d046743fe261c128fe6fe74efbdc742fe72a55377c48f3d293515c17\" pid:3536 exited_at:{seconds:1761991267 nanos:41771329}" Nov 1 10:01:07.066069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4676bde5d046743fe261c128fe6fe74efbdc742fe72a55377c48f3d293515c17-rootfs.mount: Deactivated successfully. Nov 1 10:01:07.456642 kubelet[2779]: E1101 10:01:07.456604 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:07.472420 kubelet[2779]: I1101 10:01:07.472323 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-dbf6bdc4f-t665r" podStartSLOduration=3.136535182 podStartE2EDuration="7.472303226s" podCreationTimestamp="2025-11-01 10:01:00 +0000 UTC" firstStartedPulling="2025-11-01 10:01:00.970550724 +0000 UTC m=+20.688060468" lastFinishedPulling="2025-11-01 10:01:05.306318768 +0000 UTC m=+25.023828512" observedRunningTime="2025-11-01 10:01:05.462484401 +0000 UTC m=+25.179994145" watchObservedRunningTime="2025-11-01 10:01:07.472303226 +0000 UTC m=+27.189812970" Nov 1 10:01:08.385011 kubelet[2779]: E1101 10:01:08.384951 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgvv4" podUID="c846e0de-56ff-40b3-829b-1fda67e4a78f" Nov 1 10:01:08.460235 kubelet[2779]: E1101 10:01:08.460198 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 
10:01:08.460986 containerd[1615]: time="2025-11-01T10:01:08.460936153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 10:01:10.385598 kubelet[2779]: E1101 10:01:10.385538 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgvv4" podUID="c846e0de-56ff-40b3-829b-1fda67e4a78f" Nov 1 10:01:11.967361 containerd[1615]: time="2025-11-01T10:01:11.967298194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:01:11.970554 containerd[1615]: time="2025-11-01T10:01:11.970521930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Nov 1 10:01:11.974273 containerd[1615]: time="2025-11-01T10:01:11.974247890Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:01:11.977809 containerd[1615]: time="2025-11-01T10:01:11.977765038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:01:11.978394 containerd[1615]: time="2025-11-01T10:01:11.978359366Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.517371806s" Nov 1 10:01:11.978433 containerd[1615]: time="2025-11-01T10:01:11.978411224Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 10:01:11.983285 containerd[1615]: time="2025-11-01T10:01:11.983243486Z" level=info msg="CreateContainer within sandbox \"77e85c4c8ab9b65846d0f1d484d14b4754718a312706659f627027f9421fe4a3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 10:01:11.997769 containerd[1615]: time="2025-11-01T10:01:11.997709925Z" level=info msg="Container ff76f2777c4c1ca597e9ae3ee2dfda19450da1bafda9ecbff56b15db994745eb: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:01:12.007823 containerd[1615]: time="2025-11-01T10:01:12.007772185Z" level=info msg="CreateContainer within sandbox \"77e85c4c8ab9b65846d0f1d484d14b4754718a312706659f627027f9421fe4a3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ff76f2777c4c1ca597e9ae3ee2dfda19450da1bafda9ecbff56b15db994745eb\"" Nov 1 10:01:12.009402 containerd[1615]: time="2025-11-01T10:01:12.008377633Z" level=info msg="StartContainer for \"ff76f2777c4c1ca597e9ae3ee2dfda19450da1bafda9ecbff56b15db994745eb\"" Nov 1 10:01:12.009905 containerd[1615]: time="2025-11-01T10:01:12.009869670Z" level=info msg="connecting to shim ff76f2777c4c1ca597e9ae3ee2dfda19450da1bafda9ecbff56b15db994745eb" address="unix:///run/containerd/s/10de382cfab7ef60bce45682a8421cd44daaf3016aa54d69ebbf13cfafed6e40" protocol=ttrpc version=3 Nov 1 10:01:12.031679 systemd[1]: Started cri-containerd-ff76f2777c4c1ca597e9ae3ee2dfda19450da1bafda9ecbff56b15db994745eb.scope - libcontainer container ff76f2777c4c1ca597e9ae3ee2dfda19450da1bafda9ecbff56b15db994745eb. 
Nov 1 10:01:12.093288 containerd[1615]: time="2025-11-01T10:01:12.093136570Z" level=info msg="StartContainer for \"ff76f2777c4c1ca597e9ae3ee2dfda19450da1bafda9ecbff56b15db994745eb\" returns successfully" Nov 1 10:01:12.387418 kubelet[2779]: E1101 10:01:12.387352 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgvv4" podUID="c846e0de-56ff-40b3-829b-1fda67e4a78f" Nov 1 10:01:12.469903 kubelet[2779]: E1101 10:01:12.469844 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:13.243993 systemd[1]: cri-containerd-ff76f2777c4c1ca597e9ae3ee2dfda19450da1bafda9ecbff56b15db994745eb.scope: Deactivated successfully. Nov 1 10:01:13.244330 systemd[1]: cri-containerd-ff76f2777c4c1ca597e9ae3ee2dfda19450da1bafda9ecbff56b15db994745eb.scope: Consumed 684ms CPU time, 177.3M memory peak, 3.7M read from disk, 171.3M written to disk. Nov 1 10:01:13.254106 containerd[1615]: time="2025-11-01T10:01:13.253950187Z" level=info msg="received exit event container_id:\"ff76f2777c4c1ca597e9ae3ee2dfda19450da1bafda9ecbff56b15db994745eb\" id:\"ff76f2777c4c1ca597e9ae3ee2dfda19450da1bafda9ecbff56b15db994745eb\" pid:3595 exited_at:{seconds:1761991273 nanos:246361314}" Nov 1 10:01:13.275573 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff76f2777c4c1ca597e9ae3ee2dfda19450da1bafda9ecbff56b15db994745eb-rootfs.mount: Deactivated successfully. 
Nov 1 10:01:13.357513 kubelet[2779]: I1101 10:01:13.357224 2779 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 10:01:13.476492 systemd[1]: Created slice kubepods-burstable-pod2ebaf64b_4b6c_45ec_b276_6e536781a90d.slice - libcontainer container kubepods-burstable-pod2ebaf64b_4b6c_45ec_b276_6e536781a90d.slice. Nov 1 10:01:13.482935 kubelet[2779]: E1101 10:01:13.482900 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:13.487201 containerd[1615]: time="2025-11-01T10:01:13.487156955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 10:01:13.488933 kubelet[2779]: I1101 10:01:13.488898 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfbgc\" (UniqueName: \"kubernetes.io/projected/380b8d89-6dd2-41d8-9c8c-26a95df82b99-kube-api-access-mfbgc\") pod \"calico-apiserver-65f4874cbd-sm9hg\" (UID: \"380b8d89-6dd2-41d8-9c8c-26a95df82b99\") " pod="calico-apiserver/calico-apiserver-65f4874cbd-sm9hg" Nov 1 10:01:13.489050 kubelet[2779]: I1101 10:01:13.488951 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xznvf\" (UniqueName: \"kubernetes.io/projected/3fb01811-b89e-4b02-a492-80496752165e-kube-api-access-xznvf\") pod \"calico-kube-controllers-6cbc9dfd5f-2p8r7\" (UID: \"3fb01811-b89e-4b02-a492-80496752165e\") " pod="calico-system/calico-kube-controllers-6cbc9dfd5f-2p8r7" Nov 1 10:01:13.489050 kubelet[2779]: I1101 10:01:13.488969 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ee64e5-e5b1-4b2f-98fd-8f1562d11954-config\") pod \"goldmane-666569f655-p67f4\" (UID: \"93ee64e5-e5b1-4b2f-98fd-8f1562d11954\") " pod="calico-system/goldmane-666569f655-p67f4" Nov 
1 10:01:13.489050 kubelet[2779]: I1101 10:01:13.488988 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57r6h\" (UniqueName: \"kubernetes.io/projected/2ebaf64b-4b6c-45ec-b276-6e536781a90d-kube-api-access-57r6h\") pod \"coredns-674b8bbfcf-lmwk9\" (UID: \"2ebaf64b-4b6c-45ec-b276-6e536781a90d\") " pod="kube-system/coredns-674b8bbfcf-lmwk9" Nov 1 10:01:13.489050 kubelet[2779]: I1101 10:01:13.489003 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4scl5\" (UniqueName: \"kubernetes.io/projected/f28927fa-3258-46e8-940e-e151d8c21104-kube-api-access-4scl5\") pod \"coredns-674b8bbfcf-zczfq\" (UID: \"f28927fa-3258-46e8-940e-e151d8c21104\") " pod="kube-system/coredns-674b8bbfcf-zczfq" Nov 1 10:01:13.489050 kubelet[2779]: I1101 10:01:13.489020 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93ee64e5-e5b1-4b2f-98fd-8f1562d11954-goldmane-ca-bundle\") pod \"goldmane-666569f655-p67f4\" (UID: \"93ee64e5-e5b1-4b2f-98fd-8f1562d11954\") " pod="calico-system/goldmane-666569f655-p67f4" Nov 1 10:01:13.489183 kubelet[2779]: I1101 10:01:13.489038 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fb01811-b89e-4b02-a492-80496752165e-tigera-ca-bundle\") pod \"calico-kube-controllers-6cbc9dfd5f-2p8r7\" (UID: \"3fb01811-b89e-4b02-a492-80496752165e\") " pod="calico-system/calico-kube-controllers-6cbc9dfd5f-2p8r7" Nov 1 10:01:13.489183 kubelet[2779]: I1101 10:01:13.489055 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f28927fa-3258-46e8-940e-e151d8c21104-config-volume\") pod \"coredns-674b8bbfcf-zczfq\" (UID: 
\"f28927fa-3258-46e8-940e-e151d8c21104\") " pod="kube-system/coredns-674b8bbfcf-zczfq" Nov 1 10:01:13.489183 kubelet[2779]: I1101 10:01:13.489076 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ebaf64b-4b6c-45ec-b276-6e536781a90d-config-volume\") pod \"coredns-674b8bbfcf-lmwk9\" (UID: \"2ebaf64b-4b6c-45ec-b276-6e536781a90d\") " pod="kube-system/coredns-674b8bbfcf-lmwk9" Nov 1 10:01:13.489183 kubelet[2779]: I1101 10:01:13.489091 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/93ee64e5-e5b1-4b2f-98fd-8f1562d11954-goldmane-key-pair\") pod \"goldmane-666569f655-p67f4\" (UID: \"93ee64e5-e5b1-4b2f-98fd-8f1562d11954\") " pod="calico-system/goldmane-666569f655-p67f4" Nov 1 10:01:13.489183 kubelet[2779]: I1101 10:01:13.489106 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b851fa91-aed4-4dcf-a9d5-824e1504d481-whisker-backend-key-pair\") pod \"whisker-69576fcb57-d9hxp\" (UID: \"b851fa91-aed4-4dcf-a9d5-824e1504d481\") " pod="calico-system/whisker-69576fcb57-d9hxp" Nov 1 10:01:13.489306 kubelet[2779]: I1101 10:01:13.489120 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b851fa91-aed4-4dcf-a9d5-824e1504d481-whisker-ca-bundle\") pod \"whisker-69576fcb57-d9hxp\" (UID: \"b851fa91-aed4-4dcf-a9d5-824e1504d481\") " pod="calico-system/whisker-69576fcb57-d9hxp" Nov 1 10:01:13.489306 kubelet[2779]: I1101 10:01:13.489136 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzmhz\" (UniqueName: \"kubernetes.io/projected/84d5906d-6e10-419b-a2c5-f35ab2809acd-kube-api-access-jzmhz\") 
pod \"calico-apiserver-65f4874cbd-bgwxl\" (UID: \"84d5906d-6e10-419b-a2c5-f35ab2809acd\") " pod="calico-apiserver/calico-apiserver-65f4874cbd-bgwxl" Nov 1 10:01:13.489306 kubelet[2779]: I1101 10:01:13.489175 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/380b8d89-6dd2-41d8-9c8c-26a95df82b99-calico-apiserver-certs\") pod \"calico-apiserver-65f4874cbd-sm9hg\" (UID: \"380b8d89-6dd2-41d8-9c8c-26a95df82b99\") " pod="calico-apiserver/calico-apiserver-65f4874cbd-sm9hg" Nov 1 10:01:13.489306 kubelet[2779]: I1101 10:01:13.489191 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzx4z\" (UniqueName: \"kubernetes.io/projected/b851fa91-aed4-4dcf-a9d5-824e1504d481-kube-api-access-vzx4z\") pod \"whisker-69576fcb57-d9hxp\" (UID: \"b851fa91-aed4-4dcf-a9d5-824e1504d481\") " pod="calico-system/whisker-69576fcb57-d9hxp" Nov 1 10:01:13.489306 kubelet[2779]: I1101 10:01:13.489207 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84d5906d-6e10-419b-a2c5-f35ab2809acd-calico-apiserver-certs\") pod \"calico-apiserver-65f4874cbd-bgwxl\" (UID: \"84d5906d-6e10-419b-a2c5-f35ab2809acd\") " pod="calico-apiserver/calico-apiserver-65f4874cbd-bgwxl" Nov 1 10:01:13.489491 kubelet[2779]: I1101 10:01:13.489223 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkgn4\" (UniqueName: \"kubernetes.io/projected/93ee64e5-e5b1-4b2f-98fd-8f1562d11954-kube-api-access-tkgn4\") pod \"goldmane-666569f655-p67f4\" (UID: \"93ee64e5-e5b1-4b2f-98fd-8f1562d11954\") " pod="calico-system/goldmane-666569f655-p67f4" Nov 1 10:01:13.492152 systemd[1]: Created slice kubepods-besteffort-pod84d5906d_6e10_419b_a2c5_f35ab2809acd.slice - libcontainer container 
kubepods-besteffort-pod84d5906d_6e10_419b_a2c5_f35ab2809acd.slice. Nov 1 10:01:13.498025 systemd[1]: Created slice kubepods-besteffort-podb851fa91_aed4_4dcf_a9d5_824e1504d481.slice - libcontainer container kubepods-besteffort-podb851fa91_aed4_4dcf_a9d5_824e1504d481.slice. Nov 1 10:01:13.505202 systemd[1]: Created slice kubepods-besteffort-pod3fb01811_b89e_4b02_a492_80496752165e.slice - libcontainer container kubepods-besteffort-pod3fb01811_b89e_4b02_a492_80496752165e.slice. Nov 1 10:01:13.510532 systemd[1]: Created slice kubepods-besteffort-pod380b8d89_6dd2_41d8_9c8c_26a95df82b99.slice - libcontainer container kubepods-besteffort-pod380b8d89_6dd2_41d8_9c8c_26a95df82b99.slice. Nov 1 10:01:13.516983 systemd[1]: Created slice kubepods-burstable-podf28927fa_3258_46e8_940e_e151d8c21104.slice - libcontainer container kubepods-burstable-podf28927fa_3258_46e8_940e_e151d8c21104.slice. Nov 1 10:01:13.523472 systemd[1]: Created slice kubepods-besteffort-pod93ee64e5_e5b1_4b2f_98fd_8f1562d11954.slice - libcontainer container kubepods-besteffort-pod93ee64e5_e5b1_4b2f_98fd_8f1562d11954.slice. 
Nov 1 10:01:13.786865 kubelet[2779]: E1101 10:01:13.786710 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:13.787506 containerd[1615]: time="2025-11-01T10:01:13.787464866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lmwk9,Uid:2ebaf64b-4b6c-45ec-b276-6e536781a90d,Namespace:kube-system,Attempt:0,}" Nov 1 10:01:13.796084 containerd[1615]: time="2025-11-01T10:01:13.796042108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f4874cbd-bgwxl,Uid:84d5906d-6e10-419b-a2c5-f35ab2809acd,Namespace:calico-apiserver,Attempt:0,}" Nov 1 10:01:13.802663 containerd[1615]: time="2025-11-01T10:01:13.802632823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69576fcb57-d9hxp,Uid:b851fa91-aed4-4dcf-a9d5-824e1504d481,Namespace:calico-system,Attempt:0,}" Nov 1 10:01:13.817413 containerd[1615]: time="2025-11-01T10:01:13.816492912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cbc9dfd5f-2p8r7,Uid:3fb01811-b89e-4b02-a492-80496752165e,Namespace:calico-system,Attempt:0,}" Nov 1 10:01:13.817651 containerd[1615]: time="2025-11-01T10:01:13.817626373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f4874cbd-sm9hg,Uid:380b8d89-6dd2-41d8-9c8c-26a95df82b99,Namespace:calico-apiserver,Attempt:0,}" Nov 1 10:01:13.822485 kubelet[2779]: E1101 10:01:13.821420 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:13.822741 containerd[1615]: time="2025-11-01T10:01:13.822713291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zczfq,Uid:f28927fa-3258-46e8-940e-e151d8c21104,Namespace:kube-system,Attempt:0,}" Nov 1 10:01:13.833057 containerd[1615]: 
time="2025-11-01T10:01:13.833011339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p67f4,Uid:93ee64e5-e5b1-4b2f-98fd-8f1562d11954,Namespace:calico-system,Attempt:0,}" Nov 1 10:01:13.961587 containerd[1615]: time="2025-11-01T10:01:13.961530750Z" level=error msg="Failed to destroy network for sandbox \"0bfe2597be11620b8a16d88d8d4b305714d45b34af4ce95303ad571f7fffff25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:13.962818 containerd[1615]: time="2025-11-01T10:01:13.962770161Z" level=error msg="Failed to destroy network for sandbox \"e460360d02ecf7bfd8e6999a441bb889fa5351b388fbc49fdac322beb31049d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:13.963299 containerd[1615]: time="2025-11-01T10:01:13.963260614Z" level=error msg="Failed to destroy network for sandbox \"2ac0ccc4ae7dd44cafe973764a1c0262596cef2e001441e878fdd0d6faa27ebd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:13.969794 containerd[1615]: time="2025-11-01T10:01:13.969758745Z" level=error msg="Failed to destroy network for sandbox \"19cab33b6ddef8531e3fa3aecceedec75738f46002d8af8d79010b8cd4f344b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:13.986868 containerd[1615]: time="2025-11-01T10:01:13.986787974Z" level=error msg="Failed to destroy network for sandbox \"1da192213a6fec49026e7b034b764afffc6f4904b4113a724a06f2c1be832f2f\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.181202 containerd[1615]: time="2025-11-01T10:01:14.181144189Z" level=error msg="Failed to destroy network for sandbox \"abe1ee25f2b38a7d96574e2f037f71c47b81d14f0c2757ebcbae872096546d74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.244182 containerd[1615]: time="2025-11-01T10:01:14.244076231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lmwk9,Uid:2ebaf64b-4b6c-45ec-b276-6e536781a90d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bfe2597be11620b8a16d88d8d4b305714d45b34af4ce95303ad571f7fffff25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.244628 kubelet[2779]: E1101 10:01:14.244582 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bfe2597be11620b8a16d88d8d4b305714d45b34af4ce95303ad571f7fffff25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.244827 kubelet[2779]: E1101 10:01:14.244687 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bfe2597be11620b8a16d88d8d4b305714d45b34af4ce95303ad571f7fffff25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lmwk9" Nov 1 10:01:14.244827 kubelet[2779]: E1101 10:01:14.244738 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bfe2597be11620b8a16d88d8d4b305714d45b34af4ce95303ad571f7fffff25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lmwk9" Nov 1 10:01:14.245272 kubelet[2779]: E1101 10:01:14.245173 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lmwk9_kube-system(2ebaf64b-4b6c-45ec-b276-6e536781a90d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lmwk9_kube-system(2ebaf64b-4b6c-45ec-b276-6e536781a90d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bfe2597be11620b8a16d88d8d4b305714d45b34af4ce95303ad571f7fffff25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lmwk9" podUID="2ebaf64b-4b6c-45ec-b276-6e536781a90d" Nov 1 10:01:14.249380 containerd[1615]: time="2025-11-01T10:01:14.249322617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f4874cbd-sm9hg,Uid:380b8d89-6dd2-41d8-9c8c-26a95df82b99,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e460360d02ecf7bfd8e6999a441bb889fa5351b388fbc49fdac322beb31049d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.249678 kubelet[2779]: E1101 10:01:14.249632 
2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e460360d02ecf7bfd8e6999a441bb889fa5351b388fbc49fdac322beb31049d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.249739 kubelet[2779]: E1101 10:01:14.249706 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e460360d02ecf7bfd8e6999a441bb889fa5351b388fbc49fdac322beb31049d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65f4874cbd-sm9hg" Nov 1 10:01:14.249739 kubelet[2779]: E1101 10:01:14.249729 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e460360d02ecf7bfd8e6999a441bb889fa5351b388fbc49fdac322beb31049d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65f4874cbd-sm9hg" Nov 1 10:01:14.249815 kubelet[2779]: E1101 10:01:14.249783 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65f4874cbd-sm9hg_calico-apiserver(380b8d89-6dd2-41d8-9c8c-26a95df82b99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65f4874cbd-sm9hg_calico-apiserver(380b8d89-6dd2-41d8-9c8c-26a95df82b99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e460360d02ecf7bfd8e6999a441bb889fa5351b388fbc49fdac322beb31049d7\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65f4874cbd-sm9hg" podUID="380b8d89-6dd2-41d8-9c8c-26a95df82b99" Nov 1 10:01:14.254228 containerd[1615]: time="2025-11-01T10:01:14.254175472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69576fcb57-d9hxp,Uid:b851fa91-aed4-4dcf-a9d5-824e1504d481,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ac0ccc4ae7dd44cafe973764a1c0262596cef2e001441e878fdd0d6faa27ebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.254604 kubelet[2779]: E1101 10:01:14.254364 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ac0ccc4ae7dd44cafe973764a1c0262596cef2e001441e878fdd0d6faa27ebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.254604 kubelet[2779]: E1101 10:01:14.254434 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ac0ccc4ae7dd44cafe973764a1c0262596cef2e001441e878fdd0d6faa27ebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69576fcb57-d9hxp" Nov 1 10:01:14.254604 kubelet[2779]: E1101 10:01:14.254454 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2ac0ccc4ae7dd44cafe973764a1c0262596cef2e001441e878fdd0d6faa27ebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69576fcb57-d9hxp" Nov 1 10:01:14.254693 kubelet[2779]: E1101 10:01:14.254512 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69576fcb57-d9hxp_calico-system(b851fa91-aed4-4dcf-a9d5-824e1504d481)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69576fcb57-d9hxp_calico-system(b851fa91-aed4-4dcf-a9d5-824e1504d481)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ac0ccc4ae7dd44cafe973764a1c0262596cef2e001441e878fdd0d6faa27ebd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69576fcb57-d9hxp" podUID="b851fa91-aed4-4dcf-a9d5-824e1504d481" Nov 1 10:01:14.255635 containerd[1615]: time="2025-11-01T10:01:14.255583300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f4874cbd-bgwxl,Uid:84d5906d-6e10-419b-a2c5-f35ab2809acd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19cab33b6ddef8531e3fa3aecceedec75738f46002d8af8d79010b8cd4f344b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.256182 kubelet[2779]: E1101 10:01:14.255907 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19cab33b6ddef8531e3fa3aecceedec75738f46002d8af8d79010b8cd4f344b6\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.256182 kubelet[2779]: E1101 10:01:14.256036 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19cab33b6ddef8531e3fa3aecceedec75738f46002d8af8d79010b8cd4f344b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65f4874cbd-bgwxl" Nov 1 10:01:14.256182 kubelet[2779]: E1101 10:01:14.256065 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19cab33b6ddef8531e3fa3aecceedec75738f46002d8af8d79010b8cd4f344b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65f4874cbd-bgwxl" Nov 1 10:01:14.256283 kubelet[2779]: E1101 10:01:14.256137 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65f4874cbd-bgwxl_calico-apiserver(84d5906d-6e10-419b-a2c5-f35ab2809acd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65f4874cbd-bgwxl_calico-apiserver(84d5906d-6e10-419b-a2c5-f35ab2809acd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19cab33b6ddef8531e3fa3aecceedec75738f46002d8af8d79010b8cd4f344b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65f4874cbd-bgwxl" podUID="84d5906d-6e10-419b-a2c5-f35ab2809acd" Nov 1 
10:01:14.259374 containerd[1615]: time="2025-11-01T10:01:14.259312001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cbc9dfd5f-2p8r7,Uid:3fb01811-b89e-4b02-a492-80496752165e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1da192213a6fec49026e7b034b764afffc6f4904b4113a724a06f2c1be832f2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.259921 kubelet[2779]: E1101 10:01:14.259592 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1da192213a6fec49026e7b034b764afffc6f4904b4113a724a06f2c1be832f2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.259921 kubelet[2779]: E1101 10:01:14.259715 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1da192213a6fec49026e7b034b764afffc6f4904b4113a724a06f2c1be832f2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cbc9dfd5f-2p8r7" Nov 1 10:01:14.259921 kubelet[2779]: E1101 10:01:14.259743 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1da192213a6fec49026e7b034b764afffc6f4904b4113a724a06f2c1be832f2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6cbc9dfd5f-2p8r7" Nov 1 10:01:14.260026 kubelet[2779]: E1101 10:01:14.259811 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cbc9dfd5f-2p8r7_calico-system(3fb01811-b89e-4b02-a492-80496752165e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cbc9dfd5f-2p8r7_calico-system(3fb01811-b89e-4b02-a492-80496752165e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1da192213a6fec49026e7b034b764afffc6f4904b4113a724a06f2c1be832f2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cbc9dfd5f-2p8r7" podUID="3fb01811-b89e-4b02-a492-80496752165e" Nov 1 10:01:14.263408 containerd[1615]: time="2025-11-01T10:01:14.263318305Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zczfq,Uid:f28927fa-3258-46e8-940e-e151d8c21104,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"abe1ee25f2b38a7d96574e2f037f71c47b81d14f0c2757ebcbae872096546d74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.263663 kubelet[2779]: E1101 10:01:14.263632 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abe1ee25f2b38a7d96574e2f037f71c47b81d14f0c2757ebcbae872096546d74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.263720 kubelet[2779]: E1101 10:01:14.263678 2779 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abe1ee25f2b38a7d96574e2f037f71c47b81d14f0c2757ebcbae872096546d74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zczfq" Nov 1 10:01:14.263720 kubelet[2779]: E1101 10:01:14.263706 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abe1ee25f2b38a7d96574e2f037f71c47b81d14f0c2757ebcbae872096546d74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zczfq" Nov 1 10:01:14.263892 kubelet[2779]: E1101 10:01:14.263854 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zczfq_kube-system(f28927fa-3258-46e8-940e-e151d8c21104)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zczfq_kube-system(f28927fa-3258-46e8-940e-e151d8c21104)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abe1ee25f2b38a7d96574e2f037f71c47b81d14f0c2757ebcbae872096546d74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zczfq" podUID="f28927fa-3258-46e8-940e-e151d8c21104" Nov 1 10:01:14.297705 containerd[1615]: time="2025-11-01T10:01:14.297634636Z" level=error msg="Failed to destroy network for sandbox \"a410608497311fc4ddc17ad12046bafa819fc0000ae97f7408b5ac3431250f5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.300528 systemd[1]: run-netns-cni\x2d59458264\x2db804\x2d98aa\x2d2d7f\x2db7ee98d3a635.mount: Deactivated successfully. Nov 1 10:01:14.300855 containerd[1615]: time="2025-11-01T10:01:14.300609600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p67f4,Uid:93ee64e5-e5b1-4b2f-98fd-8f1562d11954,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a410608497311fc4ddc17ad12046bafa819fc0000ae97f7408b5ac3431250f5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.300949 kubelet[2779]: E1101 10:01:14.300905 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a410608497311fc4ddc17ad12046bafa819fc0000ae97f7408b5ac3431250f5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.301010 kubelet[2779]: E1101 10:01:14.300975 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a410608497311fc4ddc17ad12046bafa819fc0000ae97f7408b5ac3431250f5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-p67f4" Nov 1 10:01:14.301041 kubelet[2779]: E1101 10:01:14.301017 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a410608497311fc4ddc17ad12046bafa819fc0000ae97f7408b5ac3431250f5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-p67f4" Nov 1 10:01:14.301112 kubelet[2779]: E1101 10:01:14.301084 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-p67f4_calico-system(93ee64e5-e5b1-4b2f-98fd-8f1562d11954)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-p67f4_calico-system(93ee64e5-e5b1-4b2f-98fd-8f1562d11954)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a410608497311fc4ddc17ad12046bafa819fc0000ae97f7408b5ac3431250f5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-p67f4" podUID="93ee64e5-e5b1-4b2f-98fd-8f1562d11954" Nov 1 10:01:14.394709 systemd[1]: Created slice kubepods-besteffort-podc846e0de_56ff_40b3_829b_1fda67e4a78f.slice - libcontainer container kubepods-besteffort-podc846e0de_56ff_40b3_829b_1fda67e4a78f.slice. 
Nov 1 10:01:14.397363 containerd[1615]: time="2025-11-01T10:01:14.397319850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgvv4,Uid:c846e0de-56ff-40b3-829b-1fda67e4a78f,Namespace:calico-system,Attempt:0,}" Nov 1 10:01:14.457311 containerd[1615]: time="2025-11-01T10:01:14.457179334Z" level=error msg="Failed to destroy network for sandbox \"f464bc76e35955b19508618b4b6f24929900c5e6566bca538a2fd20d893c0d72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.460041 systemd[1]: run-netns-cni\x2d6536f001\x2d8178\x2d592b\x2d3622\x2d1396df676ae1.mount: Deactivated successfully. Nov 1 10:01:14.460603 containerd[1615]: time="2025-11-01T10:01:14.460498796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgvv4,Uid:c846e0de-56ff-40b3-829b-1fda67e4a78f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f464bc76e35955b19508618b4b6f24929900c5e6566bca538a2fd20d893c0d72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.460927 kubelet[2779]: E1101 10:01:14.460863 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f464bc76e35955b19508618b4b6f24929900c5e6566bca538a2fd20d893c0d72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:01:14.461059 kubelet[2779]: E1101 10:01:14.460945 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f464bc76e35955b19508618b4b6f24929900c5e6566bca538a2fd20d893c0d72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgvv4" Nov 1 10:01:14.461059 kubelet[2779]: E1101 10:01:14.460975 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f464bc76e35955b19508618b4b6f24929900c5e6566bca538a2fd20d893c0d72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgvv4" Nov 1 10:01:14.461113 kubelet[2779]: E1101 10:01:14.461048 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fgvv4_calico-system(c846e0de-56ff-40b3-829b-1fda67e4a78f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fgvv4_calico-system(c846e0de-56ff-40b3-829b-1fda67e4a78f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f464bc76e35955b19508618b4b6f24929900c5e6566bca538a2fd20d893c0d72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fgvv4" podUID="c846e0de-56ff-40b3-829b-1fda67e4a78f" Nov 1 10:01:19.908450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3669310738.mount: Deactivated successfully. 
Nov 1 10:01:20.318889 containerd[1615]: time="2025-11-01T10:01:20.318743122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:01:20.330664 containerd[1615]: time="2025-11-01T10:01:20.330614997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Nov 1 10:01:20.341484 containerd[1615]: time="2025-11-01T10:01:20.341433955Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:01:20.360124 containerd[1615]: time="2025-11-01T10:01:20.360089766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:01:20.360880 containerd[1615]: time="2025-11-01T10:01:20.360821671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.873615614s" Nov 1 10:01:20.360880 containerd[1615]: time="2025-11-01T10:01:20.360862378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 10:01:20.416084 containerd[1615]: time="2025-11-01T10:01:20.416032694Z" level=info msg="CreateContainer within sandbox \"77e85c4c8ab9b65846d0f1d484d14b4754718a312706659f627027f9421fe4a3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 10:01:20.488730 containerd[1615]: time="2025-11-01T10:01:20.488660488Z" level=info msg="Container 
3d974a00b51c086ac9e963878162bdd7df9ffde4d173473bbfe01cc71a649004: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:01:20.555433 containerd[1615]: time="2025-11-01T10:01:20.555364473Z" level=info msg="CreateContainer within sandbox \"77e85c4c8ab9b65846d0f1d484d14b4754718a312706659f627027f9421fe4a3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3d974a00b51c086ac9e963878162bdd7df9ffde4d173473bbfe01cc71a649004\"" Nov 1 10:01:20.556771 containerd[1615]: time="2025-11-01T10:01:20.556736871Z" level=info msg="StartContainer for \"3d974a00b51c086ac9e963878162bdd7df9ffde4d173473bbfe01cc71a649004\"" Nov 1 10:01:20.558447 containerd[1615]: time="2025-11-01T10:01:20.558412630Z" level=info msg="connecting to shim 3d974a00b51c086ac9e963878162bdd7df9ffde4d173473bbfe01cc71a649004" address="unix:///run/containerd/s/10de382cfab7ef60bce45682a8421cd44daaf3016aa54d69ebbf13cfafed6e40" protocol=ttrpc version=3 Nov 1 10:01:20.577562 systemd[1]: Started cri-containerd-3d974a00b51c086ac9e963878162bdd7df9ffde4d173473bbfe01cc71a649004.scope - libcontainer container 3d974a00b51c086ac9e963878162bdd7df9ffde4d173473bbfe01cc71a649004. Nov 1 10:01:20.691398 containerd[1615]: time="2025-11-01T10:01:20.691348223Z" level=info msg="StartContainer for \"3d974a00b51c086ac9e963878162bdd7df9ffde4d173473bbfe01cc71a649004\" returns successfully" Nov 1 10:01:20.718179 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 10:01:20.718897 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Nov 1 10:01:20.835973 kubelet[2779]: I1101 10:01:20.835465 2779 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b851fa91-aed4-4dcf-a9d5-824e1504d481-whisker-ca-bundle\") pod \"b851fa91-aed4-4dcf-a9d5-824e1504d481\" (UID: \"b851fa91-aed4-4dcf-a9d5-824e1504d481\") " Nov 1 10:01:20.835973 kubelet[2779]: I1101 10:01:20.835533 2779 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzx4z\" (UniqueName: \"kubernetes.io/projected/b851fa91-aed4-4dcf-a9d5-824e1504d481-kube-api-access-vzx4z\") pod \"b851fa91-aed4-4dcf-a9d5-824e1504d481\" (UID: \"b851fa91-aed4-4dcf-a9d5-824e1504d481\") " Nov 1 10:01:20.835973 kubelet[2779]: I1101 10:01:20.835638 2779 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b851fa91-aed4-4dcf-a9d5-824e1504d481-whisker-backend-key-pair\") pod \"b851fa91-aed4-4dcf-a9d5-824e1504d481\" (UID: \"b851fa91-aed4-4dcf-a9d5-824e1504d481\") " Nov 1 10:01:20.836620 kubelet[2779]: I1101 10:01:20.836255 2779 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b851fa91-aed4-4dcf-a9d5-824e1504d481-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b851fa91-aed4-4dcf-a9d5-824e1504d481" (UID: "b851fa91-aed4-4dcf-a9d5-824e1504d481"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 10:01:20.841029 kubelet[2779]: I1101 10:01:20.840908 2779 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b851fa91-aed4-4dcf-a9d5-824e1504d481-kube-api-access-vzx4z" (OuterVolumeSpecName: "kube-api-access-vzx4z") pod "b851fa91-aed4-4dcf-a9d5-824e1504d481" (UID: "b851fa91-aed4-4dcf-a9d5-824e1504d481"). InnerVolumeSpecName "kube-api-access-vzx4z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 10:01:20.842727 kubelet[2779]: I1101 10:01:20.842581 2779 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b851fa91-aed4-4dcf-a9d5-824e1504d481-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b851fa91-aed4-4dcf-a9d5-824e1504d481" (UID: "b851fa91-aed4-4dcf-a9d5-824e1504d481"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 10:01:20.909514 systemd[1]: var-lib-kubelet-pods-b851fa91\x2daed4\x2d4dcf\x2da9d5\x2d824e1504d481-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvzx4z.mount: Deactivated successfully. Nov 1 10:01:20.909657 systemd[1]: var-lib-kubelet-pods-b851fa91\x2daed4\x2d4dcf\x2da9d5\x2d824e1504d481-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 10:01:20.936170 kubelet[2779]: I1101 10:01:20.936121 2779 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b851fa91-aed4-4dcf-a9d5-824e1504d481-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 1 10:01:20.936170 kubelet[2779]: I1101 10:01:20.936155 2779 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vzx4z\" (UniqueName: \"kubernetes.io/projected/b851fa91-aed4-4dcf-a9d5-824e1504d481-kube-api-access-vzx4z\") on node \"localhost\" DevicePath \"\"" Nov 1 10:01:20.936170 kubelet[2779]: I1101 10:01:20.936167 2779 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b851fa91-aed4-4dcf-a9d5-824e1504d481-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 1 10:01:21.500222 kubelet[2779]: E1101 10:01:21.500175 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Nov 1 10:01:21.507591 systemd[1]: Removed slice kubepods-besteffort-podb851fa91_aed4_4dcf_a9d5_824e1504d481.slice - libcontainer container kubepods-besteffort-podb851fa91_aed4_4dcf_a9d5_824e1504d481.slice. Nov 1 10:01:21.671750 kubelet[2779]: I1101 10:01:21.671682 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d6gqr" podStartSLOduration=2.426951409 podStartE2EDuration="21.671655825s" podCreationTimestamp="2025-11-01 10:01:00 +0000 UTC" firstStartedPulling="2025-11-01 10:01:01.116877052 +0000 UTC m=+20.834386796" lastFinishedPulling="2025-11-01 10:01:20.361581468 +0000 UTC m=+40.079091212" observedRunningTime="2025-11-01 10:01:21.670681715 +0000 UTC m=+41.388191459" watchObservedRunningTime="2025-11-01 10:01:21.671655825 +0000 UTC m=+41.389165569" Nov 1 10:01:21.725897 systemd[1]: Created slice kubepods-besteffort-podbbcb1281_6226_4685_891f_c7986a6aa61d.slice - libcontainer container kubepods-besteffort-podbbcb1281_6226_4685_891f_c7986a6aa61d.slice. 
Nov 1 10:01:21.741317 kubelet[2779]: I1101 10:01:21.741274 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdl67\" (UniqueName: \"kubernetes.io/projected/bbcb1281-6226-4685-891f-c7986a6aa61d-kube-api-access-fdl67\") pod \"whisker-74c9bcdf97-7n95m\" (UID: \"bbcb1281-6226-4685-891f-c7986a6aa61d\") " pod="calico-system/whisker-74c9bcdf97-7n95m" Nov 1 10:01:21.741317 kubelet[2779]: I1101 10:01:21.741317 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bbcb1281-6226-4685-891f-c7986a6aa61d-whisker-backend-key-pair\") pod \"whisker-74c9bcdf97-7n95m\" (UID: \"bbcb1281-6226-4685-891f-c7986a6aa61d\") " pod="calico-system/whisker-74c9bcdf97-7n95m" Nov 1 10:01:21.741521 kubelet[2779]: I1101 10:01:21.741343 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbcb1281-6226-4685-891f-c7986a6aa61d-whisker-ca-bundle\") pod \"whisker-74c9bcdf97-7n95m\" (UID: \"bbcb1281-6226-4685-891f-c7986a6aa61d\") " pod="calico-system/whisker-74c9bcdf97-7n95m" Nov 1 10:01:21.895217 systemd[1]: Started sshd@7-10.0.0.25:22-10.0.0.1:57006.service - OpenSSH per-connection server daemon (10.0.0.1:57006). Nov 1 10:01:21.972665 sshd[3970]: Accepted publickey for core from 10.0.0.1 port 57006 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:01:21.976095 sshd-session[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:01:21.991602 systemd-logind[1588]: New session 8 of user core. Nov 1 10:01:22.000630 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 1 10:01:22.029026 containerd[1615]: time="2025-11-01T10:01:22.028959594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74c9bcdf97-7n95m,Uid:bbcb1281-6226-4685-891f-c7986a6aa61d,Namespace:calico-system,Attempt:0,}" Nov 1 10:01:22.213866 sshd[4039]: Connection closed by 10.0.0.1 port 57006 Nov 1 10:01:22.214557 sshd-session[3970]: pam_unix(sshd:session): session closed for user core Nov 1 10:01:22.220436 systemd[1]: sshd@7-10.0.0.25:22-10.0.0.1:57006.service: Deactivated successfully. Nov 1 10:01:22.222691 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 10:01:22.223762 systemd-logind[1588]: Session 8 logged out. Waiting for processes to exit. Nov 1 10:01:22.225280 systemd-logind[1588]: Removed session 8. Nov 1 10:01:22.317610 systemd-networkd[1513]: calid4a36951062: Link UP Nov 1 10:01:22.318306 systemd-networkd[1513]: calid4a36951062: Gained carrier Nov 1 10:01:22.331938 containerd[1615]: 2025-11-01 10:01:22.158 [INFO][4078] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:01:22.331938 containerd[1615]: 2025-11-01 10:01:22.185 [INFO][4078] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--74c9bcdf97--7n95m-eth0 whisker-74c9bcdf97- calico-system bbcb1281-6226-4685-891f-c7986a6aa61d 974 0 2025-11-01 10:01:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74c9bcdf97 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-74c9bcdf97-7n95m eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid4a36951062 [] [] }} ContainerID="c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" Namespace="calico-system" Pod="whisker-74c9bcdf97-7n95m" WorkloadEndpoint="localhost-k8s-whisker--74c9bcdf97--7n95m-" Nov 1 10:01:22.331938 containerd[1615]: 2025-11-01 10:01:22.185 [INFO][4078] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" Namespace="calico-system" Pod="whisker-74c9bcdf97-7n95m" WorkloadEndpoint="localhost-k8s-whisker--74c9bcdf97--7n95m-eth0" Nov 1 10:01:22.331938 containerd[1615]: 2025-11-01 10:01:22.270 [INFO][4096] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" HandleID="k8s-pod-network.c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" Workload="localhost-k8s-whisker--74c9bcdf97--7n95m-eth0" Nov 1 10:01:22.332194 containerd[1615]: 2025-11-01 10:01:22.271 [INFO][4096] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" HandleID="k8s-pod-network.c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" Workload="localhost-k8s-whisker--74c9bcdf97--7n95m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019f3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-74c9bcdf97-7n95m", "timestamp":"2025-11-01 10:01:22.270466011 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:01:22.332194 containerd[1615]: 2025-11-01 10:01:22.271 [INFO][4096] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:01:22.332194 containerd[1615]: 2025-11-01 10:01:22.271 [INFO][4096] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:01:22.332194 containerd[1615]: 2025-11-01 10:01:22.271 [INFO][4096] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:01:22.332194 containerd[1615]: 2025-11-01 10:01:22.279 [INFO][4096] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" host="localhost" Nov 1 10:01:22.332194 containerd[1615]: 2025-11-01 10:01:22.287 [INFO][4096] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:01:22.332194 containerd[1615]: 2025-11-01 10:01:22.291 [INFO][4096] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:01:22.332194 containerd[1615]: 2025-11-01 10:01:22.293 [INFO][4096] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:01:22.332194 containerd[1615]: 2025-11-01 10:01:22.295 [INFO][4096] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:01:22.332194 containerd[1615]: 2025-11-01 10:01:22.295 [INFO][4096] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" host="localhost" Nov 1 10:01:22.332476 containerd[1615]: 2025-11-01 10:01:22.296 [INFO][4096] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea Nov 1 10:01:22.332476 containerd[1615]: 2025-11-01 10:01:22.299 [INFO][4096] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" host="localhost" Nov 1 10:01:22.332476 containerd[1615]: 2025-11-01 10:01:22.303 [INFO][4096] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" host="localhost" Nov 1 10:01:22.332476 containerd[1615]: 2025-11-01 10:01:22.303 [INFO][4096] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" host="localhost" Nov 1 10:01:22.332476 containerd[1615]: 2025-11-01 10:01:22.303 [INFO][4096] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:01:22.332476 containerd[1615]: 2025-11-01 10:01:22.303 [INFO][4096] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" HandleID="k8s-pod-network.c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" Workload="localhost-k8s-whisker--74c9bcdf97--7n95m-eth0" Nov 1 10:01:22.332613 containerd[1615]: 2025-11-01 10:01:22.308 [INFO][4078] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" Namespace="calico-system" Pod="whisker-74c9bcdf97-7n95m" WorkloadEndpoint="localhost-k8s-whisker--74c9bcdf97--7n95m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--74c9bcdf97--7n95m-eth0", GenerateName:"whisker-74c9bcdf97-", Namespace:"calico-system", SelfLink:"", UID:"bbcb1281-6226-4685-891f-c7986a6aa61d", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 1, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74c9bcdf97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-74c9bcdf97-7n95m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid4a36951062", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:01:22.332613 containerd[1615]: 2025-11-01 10:01:22.308 [INFO][4078] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" Namespace="calico-system" Pod="whisker-74c9bcdf97-7n95m" WorkloadEndpoint="localhost-k8s-whisker--74c9bcdf97--7n95m-eth0" Nov 1 10:01:22.332712 containerd[1615]: 2025-11-01 10:01:22.308 [INFO][4078] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4a36951062 ContainerID="c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" Namespace="calico-system" Pod="whisker-74c9bcdf97-7n95m" WorkloadEndpoint="localhost-k8s-whisker--74c9bcdf97--7n95m-eth0" Nov 1 10:01:22.332712 containerd[1615]: 2025-11-01 10:01:22.318 [INFO][4078] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" Namespace="calico-system" Pod="whisker-74c9bcdf97-7n95m" WorkloadEndpoint="localhost-k8s-whisker--74c9bcdf97--7n95m-eth0" Nov 1 10:01:22.332758 containerd[1615]: 2025-11-01 10:01:22.318 [INFO][4078] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" Namespace="calico-system" Pod="whisker-74c9bcdf97-7n95m" 
WorkloadEndpoint="localhost-k8s-whisker--74c9bcdf97--7n95m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--74c9bcdf97--7n95m-eth0", GenerateName:"whisker-74c9bcdf97-", Namespace:"calico-system", SelfLink:"", UID:"bbcb1281-6226-4685-891f-c7986a6aa61d", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 1, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74c9bcdf97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea", Pod:"whisker-74c9bcdf97-7n95m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid4a36951062", MAC:"22:be:7f:0f:32:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:01:22.332812 containerd[1615]: 2025-11-01 10:01:22.328 [INFO][4078] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" Namespace="calico-system" Pod="whisker-74c9bcdf97-7n95m" WorkloadEndpoint="localhost-k8s-whisker--74c9bcdf97--7n95m-eth0" Nov 1 10:01:22.387926 kubelet[2779]: I1101 10:01:22.387875 2779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b851fa91-aed4-4dcf-a9d5-824e1504d481" path="/var/lib/kubelet/pods/b851fa91-aed4-4dcf-a9d5-824e1504d481/volumes" Nov 1 10:01:22.447163 containerd[1615]: time="2025-11-01T10:01:22.447111703Z" level=info msg="connecting to shim c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea" address="unix:///run/containerd/s/54250730981cd973320d231962c0668c806a0b134fa860dea24037ae644c1c13" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:01:22.477531 systemd[1]: Started cri-containerd-c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea.scope - libcontainer container c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea. Nov 1 10:01:22.492183 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:01:22.502618 kubelet[2779]: I1101 10:01:22.502583 2779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 10:01:22.503065 kubelet[2779]: E1101 10:01:22.503042 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:22.530118 containerd[1615]: time="2025-11-01T10:01:22.530072939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74c9bcdf97-7n95m,Uid:bbcb1281-6226-4685-891f-c7986a6aa61d,Namespace:calico-system,Attempt:0,} returns sandbox id \"c731307e48be2942af3cbc4697746a3fcd297339b0c0ada5160b923e74d01fea\"" Nov 1 10:01:22.534846 containerd[1615]: time="2025-11-01T10:01:22.534656559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 10:01:22.884728 containerd[1615]: time="2025-11-01T10:01:22.884656661Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:01:22.885766 containerd[1615]: time="2025-11-01T10:01:22.885734515Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 10:01:22.885848 containerd[1615]: time="2025-11-01T10:01:22.885799898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 1 10:01:22.885980 kubelet[2779]: E1101 10:01:22.885931 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:01:22.886051 kubelet[2779]: E1101 10:01:22.885983 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:01:22.890782 kubelet[2779]: E1101 10:01:22.890717 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c1eb14a891d84fedb5e547d6cabc368b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fdl67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74c9bcdf97-7n95m_calico-system(bbcb1281-6226-4685-891f-c7986a6aa61d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 10:01:22.892797 containerd[1615]: time="2025-11-01T10:01:22.892769200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 10:01:23.218094 containerd[1615]: 
time="2025-11-01T10:01:23.217971841Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:01:23.220311 containerd[1615]: time="2025-11-01T10:01:23.220241513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 10:01:23.220537 containerd[1615]: time="2025-11-01T10:01:23.220246874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 1 10:01:23.220568 kubelet[2779]: E1101 10:01:23.220513 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:01:23.220639 kubelet[2779]: E1101 10:01:23.220566 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:01:23.221659 kubelet[2779]: E1101 10:01:23.220708 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdl67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74c9bcdf97-7n95m_calico-system(bbcb1281-6226-4685-891f-c7986a6aa61d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 10:01:23.221893 kubelet[2779]: E1101 10:01:23.221849 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74c9bcdf97-7n95m" podUID="bbcb1281-6226-4685-891f-c7986a6aa61d" Nov 1 10:01:23.505445 kubelet[2779]: E1101 10:01:23.505304 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:23.508705 kubelet[2779]: E1101 10:01:23.508637 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-74c9bcdf97-7n95m" podUID="bbcb1281-6226-4685-891f-c7986a6aa61d" Nov 1 10:01:23.950654 systemd-networkd[1513]: calid4a36951062: Gained IPv6LL Nov 1 10:01:24.511359 kubelet[2779]: E1101 10:01:24.509084 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74c9bcdf97-7n95m" podUID="bbcb1281-6226-4685-891f-c7986a6aa61d" Nov 1 10:01:25.386128 containerd[1615]: time="2025-11-01T10:01:25.385857997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p67f4,Uid:93ee64e5-e5b1-4b2f-98fd-8f1562d11954,Namespace:calico-system,Attempt:0,}" Nov 1 10:01:25.506806 systemd-networkd[1513]: cali507805a683f: Link UP Nov 1 10:01:25.507827 systemd-networkd[1513]: cali507805a683f: Gained carrier Nov 1 10:01:25.522711 containerd[1615]: 2025-11-01 10:01:25.431 [INFO][4320] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:01:25.522711 containerd[1615]: 2025-11-01 10:01:25.442 [INFO][4320] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--p67f4-eth0 goldmane-666569f655- calico-system 93ee64e5-e5b1-4b2f-98fd-8f1562d11954 901 0 2025-11-01 10:00:58 +0000 UTC 
map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-p67f4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali507805a683f [] [] }} ContainerID="7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" Namespace="calico-system" Pod="goldmane-666569f655-p67f4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p67f4-" Nov 1 10:01:25.522711 containerd[1615]: 2025-11-01 10:01:25.442 [INFO][4320] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" Namespace="calico-system" Pod="goldmane-666569f655-p67f4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p67f4-eth0" Nov 1 10:01:25.522711 containerd[1615]: 2025-11-01 10:01:25.468 [INFO][4333] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" HandleID="k8s-pod-network.7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" Workload="localhost-k8s-goldmane--666569f655--p67f4-eth0" Nov 1 10:01:25.522912 containerd[1615]: 2025-11-01 10:01:25.468 [INFO][4333] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" HandleID="k8s-pod-network.7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" Workload="localhost-k8s-goldmane--666569f655--p67f4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000502ac0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-p67f4", "timestamp":"2025-11-01 10:01:25.468607199 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:01:25.522912 containerd[1615]: 2025-11-01 10:01:25.468 [INFO][4333] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:01:25.522912 containerd[1615]: 2025-11-01 10:01:25.468 [INFO][4333] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 10:01:25.522912 containerd[1615]: 2025-11-01 10:01:25.468 [INFO][4333] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:01:25.522912 containerd[1615]: 2025-11-01 10:01:25.475 [INFO][4333] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" host="localhost" Nov 1 10:01:25.522912 containerd[1615]: 2025-11-01 10:01:25.479 [INFO][4333] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:01:25.522912 containerd[1615]: 2025-11-01 10:01:25.482 [INFO][4333] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:01:25.522912 containerd[1615]: 2025-11-01 10:01:25.484 [INFO][4333] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:01:25.522912 containerd[1615]: 2025-11-01 10:01:25.485 [INFO][4333] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:01:25.522912 containerd[1615]: 2025-11-01 10:01:25.485 [INFO][4333] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" host="localhost" Nov 1 10:01:25.523133 containerd[1615]: 2025-11-01 10:01:25.487 [INFO][4333] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe Nov 1 10:01:25.523133 containerd[1615]: 2025-11-01 10:01:25.496 [INFO][4333] ipam/ipam.go 
1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" host="localhost" Nov 1 10:01:25.523133 containerd[1615]: 2025-11-01 10:01:25.500 [INFO][4333] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" host="localhost" Nov 1 10:01:25.523133 containerd[1615]: 2025-11-01 10:01:25.500 [INFO][4333] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" host="localhost" Nov 1 10:01:25.523133 containerd[1615]: 2025-11-01 10:01:25.500 [INFO][4333] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:01:25.523133 containerd[1615]: 2025-11-01 10:01:25.500 [INFO][4333] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" HandleID="k8s-pod-network.7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" Workload="localhost-k8s-goldmane--666569f655--p67f4-eth0" Nov 1 10:01:25.523247 containerd[1615]: 2025-11-01 10:01:25.504 [INFO][4320] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" Namespace="calico-system" Pod="goldmane-666569f655-p67f4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p67f4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--p67f4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"93ee64e5-e5b1-4b2f-98fd-8f1562d11954", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 0, 58, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-p67f4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali507805a683f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:01:25.523247 containerd[1615]: 2025-11-01 10:01:25.504 [INFO][4320] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" Namespace="calico-system" Pod="goldmane-666569f655-p67f4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p67f4-eth0" Nov 1 10:01:25.523320 containerd[1615]: 2025-11-01 10:01:25.504 [INFO][4320] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali507805a683f ContainerID="7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" Namespace="calico-system" Pod="goldmane-666569f655-p67f4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p67f4-eth0" Nov 1 10:01:25.523320 containerd[1615]: 2025-11-01 10:01:25.508 [INFO][4320] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" Namespace="calico-system" Pod="goldmane-666569f655-p67f4" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p67f4-eth0" Nov 1 10:01:25.523365 containerd[1615]: 2025-11-01 10:01:25.508 [INFO][4320] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" Namespace="calico-system" Pod="goldmane-666569f655-p67f4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p67f4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--p67f4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"93ee64e5-e5b1-4b2f-98fd-8f1562d11954", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 0, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe", Pod:"goldmane-666569f655-p67f4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali507805a683f", MAC:"be:54:85:c7:cc:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:01:25.523449 containerd[1615]: 2025-11-01 10:01:25.518 [INFO][4320] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" Namespace="calico-system" Pod="goldmane-666569f655-p67f4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p67f4-eth0" Nov 1 10:01:25.549973 containerd[1615]: time="2025-11-01T10:01:25.549914153Z" level=info msg="connecting to shim 7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe" address="unix:///run/containerd/s/c2db0614f9381c2b5746316a1b1a8f2e35d106af03725c5be7173783be62689e" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:01:25.584551 systemd[1]: Started cri-containerd-7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe.scope - libcontainer container 7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe. Nov 1 10:01:25.599569 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:01:25.631059 containerd[1615]: time="2025-11-01T10:01:25.631016913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p67f4,Uid:93ee64e5-e5b1-4b2f-98fd-8f1562d11954,Namespace:calico-system,Attempt:0,} returns sandbox id \"7bf8ceb4503dffb2d4c2b80f578e8a7e3e39f81d72ba3ac585c54d4609c94fbe\"" Nov 1 10:01:25.632774 containerd[1615]: time="2025-11-01T10:01:25.632743285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 10:01:25.945029 containerd[1615]: time="2025-11-01T10:01:25.944960338Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:01:25.946134 containerd[1615]: time="2025-11-01T10:01:25.946082455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 10:01:25.946197 containerd[1615]: 
time="2025-11-01T10:01:25.946167775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 1 10:01:25.946404 kubelet[2779]: E1101 10:01:25.946348 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:01:25.946837 kubelet[2779]: E1101 10:01:25.946423 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:01:25.946837 kubelet[2779]: E1101 10:01:25.946590 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,M
ountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkgn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p67f4_calico-system(93ee64e5-e5b1-4b2f-98fd-8f1562d11954): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 10:01:25.947832 kubelet[2779]: E1101 10:01:25.947775 2779 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p67f4" podUID="93ee64e5-e5b1-4b2f-98fd-8f1562d11954" Nov 1 10:01:26.385870 kubelet[2779]: E1101 10:01:26.385556 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:26.386210 containerd[1615]: time="2025-11-01T10:01:26.386167367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f4874cbd-bgwxl,Uid:84d5906d-6e10-419b-a2c5-f35ab2809acd,Namespace:calico-apiserver,Attempt:0,}" Nov 1 10:01:26.387155 containerd[1615]: time="2025-11-01T10:01:26.386360450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgvv4,Uid:c846e0de-56ff-40b3-829b-1fda67e4a78f,Namespace:calico-system,Attempt:0,}" Nov 1 10:01:26.387155 containerd[1615]: time="2025-11-01T10:01:26.386526361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lmwk9,Uid:2ebaf64b-4b6c-45ec-b276-6e536781a90d,Namespace:kube-system,Attempt:0,}" Nov 1 10:01:26.512781 kubelet[2779]: E1101 10:01:26.512739 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p67f4" podUID="93ee64e5-e5b1-4b2f-98fd-8f1562d11954" Nov 1 10:01:26.553486 systemd-networkd[1513]: calie41595f726d: Link UP Nov 1 10:01:26.556036 
systemd-networkd[1513]: calie41595f726d: Gained carrier Nov 1 10:01:26.573039 containerd[1615]: 2025-11-01 10:01:26.446 [INFO][4408] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:01:26.573039 containerd[1615]: 2025-11-01 10:01:26.460 [INFO][4408] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fgvv4-eth0 csi-node-driver- calico-system c846e0de-56ff-40b3-829b-1fda67e4a78f 772 0 2025-11-01 10:01:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fgvv4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie41595f726d [] [] }} ContainerID="40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" Namespace="calico-system" Pod="csi-node-driver-fgvv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgvv4-" Nov 1 10:01:26.573039 containerd[1615]: 2025-11-01 10:01:26.460 [INFO][4408] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" Namespace="calico-system" Pod="csi-node-driver-fgvv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgvv4-eth0" Nov 1 10:01:26.573039 containerd[1615]: 2025-11-01 10:01:26.506 [INFO][4466] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" HandleID="k8s-pod-network.40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" Workload="localhost-k8s-csi--node--driver--fgvv4-eth0" Nov 1 10:01:26.573552 containerd[1615]: 2025-11-01 10:01:26.506 [INFO][4466] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" HandleID="k8s-pod-network.40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" Workload="localhost-k8s-csi--node--driver--fgvv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f9b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fgvv4", "timestamp":"2025-11-01 10:01:26.506342662 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:01:26.573552 containerd[1615]: 2025-11-01 10:01:26.506 [INFO][4466] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:01:26.573552 containerd[1615]: 2025-11-01 10:01:26.506 [INFO][4466] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 10:01:26.573552 containerd[1615]: 2025-11-01 10:01:26.506 [INFO][4466] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:01:26.573552 containerd[1615]: 2025-11-01 10:01:26.517 [INFO][4466] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" host="localhost" Nov 1 10:01:26.573552 containerd[1615]: 2025-11-01 10:01:26.522 [INFO][4466] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:01:26.573552 containerd[1615]: 2025-11-01 10:01:26.529 [INFO][4466] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:01:26.573552 containerd[1615]: 2025-11-01 10:01:26.532 [INFO][4466] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:01:26.573552 containerd[1615]: 2025-11-01 10:01:26.534 [INFO][4466] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" 
Nov 1 10:01:26.573552 containerd[1615]: 2025-11-01 10:01:26.534 [INFO][4466] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" host="localhost" Nov 1 10:01:26.573821 containerd[1615]: 2025-11-01 10:01:26.536 [INFO][4466] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5 Nov 1 10:01:26.573821 containerd[1615]: 2025-11-01 10:01:26.540 [INFO][4466] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" host="localhost" Nov 1 10:01:26.573821 containerd[1615]: 2025-11-01 10:01:26.547 [INFO][4466] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" host="localhost" Nov 1 10:01:26.573821 containerd[1615]: 2025-11-01 10:01:26.547 [INFO][4466] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" host="localhost" Nov 1 10:01:26.573821 containerd[1615]: 2025-11-01 10:01:26.547 [INFO][4466] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 10:01:26.573821 containerd[1615]: 2025-11-01 10:01:26.547 [INFO][4466] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" HandleID="k8s-pod-network.40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" Workload="localhost-k8s-csi--node--driver--fgvv4-eth0" Nov 1 10:01:26.573947 containerd[1615]: 2025-11-01 10:01:26.551 [INFO][4408] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" Namespace="calico-system" Pod="csi-node-driver-fgvv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgvv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fgvv4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c846e0de-56ff-40b3-829b-1fda67e4a78f", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 1, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fgvv4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calie41595f726d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:01:26.574001 containerd[1615]: 2025-11-01 10:01:26.551 [INFO][4408] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" Namespace="calico-system" Pod="csi-node-driver-fgvv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgvv4-eth0" Nov 1 10:01:26.574001 containerd[1615]: 2025-11-01 10:01:26.551 [INFO][4408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie41595f726d ContainerID="40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" Namespace="calico-system" Pod="csi-node-driver-fgvv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgvv4-eth0" Nov 1 10:01:26.574001 containerd[1615]: 2025-11-01 10:01:26.557 [INFO][4408] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" Namespace="calico-system" Pod="csi-node-driver-fgvv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgvv4-eth0" Nov 1 10:01:26.574069 containerd[1615]: 2025-11-01 10:01:26.557 [INFO][4408] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" Namespace="calico-system" Pod="csi-node-driver-fgvv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgvv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fgvv4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c846e0de-56ff-40b3-829b-1fda67e4a78f", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 1, 0, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5", Pod:"csi-node-driver-fgvv4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie41595f726d", MAC:"d2:46:76:f9:27:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:01:26.574116 containerd[1615]: 2025-11-01 10:01:26.566 [INFO][4408] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" Namespace="calico-system" Pod="csi-node-driver-fgvv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgvv4-eth0" Nov 1 10:01:26.638611 systemd-networkd[1513]: cali507805a683f: Gained IPv6LL Nov 1 10:01:26.677319 containerd[1615]: time="2025-11-01T10:01:26.677266407Z" level=info msg="connecting to shim 40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5" address="unix:///run/containerd/s/e80e469934d16b58a1f4962dc3cc52a0dc6f50946c35b6a259d4832237dccc61" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:01:26.692660 systemd-networkd[1513]: calibe3e0630ba8: Link UP Nov 1 10:01:26.693216 
systemd-networkd[1513]: calibe3e0630ba8: Gained carrier Nov 1 10:01:26.709515 systemd[1]: Started cri-containerd-40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5.scope - libcontainer container 40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5. Nov 1 10:01:26.711435 containerd[1615]: 2025-11-01 10:01:26.445 [INFO][4407] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:01:26.711435 containerd[1615]: 2025-11-01 10:01:26.471 [INFO][4407] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--65f4874cbd--bgwxl-eth0 calico-apiserver-65f4874cbd- calico-apiserver 84d5906d-6e10-419b-a2c5-f35ab2809acd 899 0 2025-11-01 10:00:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65f4874cbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-65f4874cbd-bgwxl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibe3e0630ba8 [] [] }} ContainerID="eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" Namespace="calico-apiserver" Pod="calico-apiserver-65f4874cbd-bgwxl" WorkloadEndpoint="localhost-k8s-calico--apiserver--65f4874cbd--bgwxl-" Nov 1 10:01:26.711435 containerd[1615]: 2025-11-01 10:01:26.471 [INFO][4407] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" Namespace="calico-apiserver" Pod="calico-apiserver-65f4874cbd-bgwxl" WorkloadEndpoint="localhost-k8s-calico--apiserver--65f4874cbd--bgwxl-eth0" Nov 1 10:01:26.711435 containerd[1615]: 2025-11-01 10:01:26.518 [INFO][4474] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" 
HandleID="k8s-pod-network.eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" Workload="localhost-k8s-calico--apiserver--65f4874cbd--bgwxl-eth0" Nov 1 10:01:26.711739 containerd[1615]: 2025-11-01 10:01:26.519 [INFO][4474] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" HandleID="k8s-pod-network.eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" Workload="localhost-k8s-calico--apiserver--65f4874cbd--bgwxl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004332c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-65f4874cbd-bgwxl", "timestamp":"2025-11-01 10:01:26.518985191 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:01:26.711739 containerd[1615]: 2025-11-01 10:01:26.519 [INFO][4474] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:01:26.711739 containerd[1615]: 2025-11-01 10:01:26.547 [INFO][4474] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:01:26.711739 containerd[1615]: 2025-11-01 10:01:26.548 [INFO][4474] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:01:26.711739 containerd[1615]: 2025-11-01 10:01:26.655 [INFO][4474] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" host="localhost" Nov 1 10:01:26.711739 containerd[1615]: 2025-11-01 10:01:26.662 [INFO][4474] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:01:26.711739 containerd[1615]: 2025-11-01 10:01:26.666 [INFO][4474] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:01:26.711739 containerd[1615]: 2025-11-01 10:01:26.668 [INFO][4474] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:01:26.711739 containerd[1615]: 2025-11-01 10:01:26.670 [INFO][4474] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:01:26.711739 containerd[1615]: 2025-11-01 10:01:26.670 [INFO][4474] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" host="localhost" Nov 1 10:01:26.712072 containerd[1615]: 2025-11-01 10:01:26.671 [INFO][4474] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3 Nov 1 10:01:26.712072 containerd[1615]: 2025-11-01 10:01:26.677 [INFO][4474] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" host="localhost" Nov 1 10:01:26.712072 containerd[1615]: 2025-11-01 10:01:26.681 [INFO][4474] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" host="localhost" Nov 1 10:01:26.712072 containerd[1615]: 2025-11-01 10:01:26.681 [INFO][4474] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" host="localhost" Nov 1 10:01:26.712072 containerd[1615]: 2025-11-01 10:01:26.681 [INFO][4474] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:01:26.712072 containerd[1615]: 2025-11-01 10:01:26.681 [INFO][4474] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" HandleID="k8s-pod-network.eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" Workload="localhost-k8s-calico--apiserver--65f4874cbd--bgwxl-eth0" Nov 1 10:01:26.712281 containerd[1615]: 2025-11-01 10:01:26.689 [INFO][4407] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" Namespace="calico-apiserver" Pod="calico-apiserver-65f4874cbd-bgwxl" WorkloadEndpoint="localhost-k8s-calico--apiserver--65f4874cbd--bgwxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65f4874cbd--bgwxl-eth0", GenerateName:"calico-apiserver-65f4874cbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"84d5906d-6e10-419b-a2c5-f35ab2809acd", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 0, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f4874cbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-65f4874cbd-bgwxl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe3e0630ba8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:01:26.712471 containerd[1615]: 2025-11-01 10:01:26.689 [INFO][4407] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" Namespace="calico-apiserver" Pod="calico-apiserver-65f4874cbd-bgwxl" WorkloadEndpoint="localhost-k8s-calico--apiserver--65f4874cbd--bgwxl-eth0" Nov 1 10:01:26.712471 containerd[1615]: 2025-11-01 10:01:26.690 [INFO][4407] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe3e0630ba8 ContainerID="eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" Namespace="calico-apiserver" Pod="calico-apiserver-65f4874cbd-bgwxl" WorkloadEndpoint="localhost-k8s-calico--apiserver--65f4874cbd--bgwxl-eth0" Nov 1 10:01:26.712471 containerd[1615]: 2025-11-01 10:01:26.691 [INFO][4407] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" Namespace="calico-apiserver" Pod="calico-apiserver-65f4874cbd-bgwxl" WorkloadEndpoint="localhost-k8s-calico--apiserver--65f4874cbd--bgwxl-eth0" Nov 1 10:01:26.712593 containerd[1615]: 2025-11-01 10:01:26.693 [INFO][4407] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" Namespace="calico-apiserver" Pod="calico-apiserver-65f4874cbd-bgwxl" WorkloadEndpoint="localhost-k8s-calico--apiserver--65f4874cbd--bgwxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65f4874cbd--bgwxl-eth0", GenerateName:"calico-apiserver-65f4874cbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"84d5906d-6e10-419b-a2c5-f35ab2809acd", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 0, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f4874cbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3", Pod:"calico-apiserver-65f4874cbd-bgwxl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe3e0630ba8", MAC:"4e:f1:a2:fb:7a:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:01:26.712674 containerd[1615]: 2025-11-01 10:01:26.706 [INFO][4407] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" Namespace="calico-apiserver" Pod="calico-apiserver-65f4874cbd-bgwxl" WorkloadEndpoint="localhost-k8s-calico--apiserver--65f4874cbd--bgwxl-eth0" Nov 1 10:01:26.728020 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:01:26.743184 containerd[1615]: time="2025-11-01T10:01:26.743139037Z" level=info msg="connecting to shim eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3" address="unix:///run/containerd/s/c240f3622b6761d90d02636d748c2fadc4d707f95e8e4e732a459ba71cb85943" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:01:26.747943 containerd[1615]: time="2025-11-01T10:01:26.747898795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgvv4,Uid:c846e0de-56ff-40b3-829b-1fda67e4a78f,Namespace:calico-system,Attempt:0,} returns sandbox id \"40cf4db82164955254333793dc0f36672b9e2a4af83433d5d0a3eae346f2a3e5\"" Nov 1 10:01:26.750208 containerd[1615]: time="2025-11-01T10:01:26.750171452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 10:01:26.771625 systemd[1]: Started cri-containerd-eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3.scope - libcontainer container eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3. 
Nov 1 10:01:26.788161 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:01:26.792813 systemd-networkd[1513]: calif9842ffa20c: Link UP Nov 1 10:01:26.793281 systemd-networkd[1513]: calif9842ffa20c: Gained carrier Nov 1 10:01:26.807127 containerd[1615]: 2025-11-01 10:01:26.459 [INFO][4413] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:01:26.807127 containerd[1615]: 2025-11-01 10:01:26.476 [INFO][4413] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--lmwk9-eth0 coredns-674b8bbfcf- kube-system 2ebaf64b-4b6c-45ec-b276-6e536781a90d 887 0 2025-11-01 10:00:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-lmwk9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif9842ffa20c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" Namespace="kube-system" Pod="coredns-674b8bbfcf-lmwk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lmwk9-" Nov 1 10:01:26.807127 containerd[1615]: 2025-11-01 10:01:26.477 [INFO][4413] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" Namespace="kube-system" Pod="coredns-674b8bbfcf-lmwk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lmwk9-eth0" Nov 1 10:01:26.807127 containerd[1615]: 2025-11-01 10:01:26.519 [INFO][4481] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" HandleID="k8s-pod-network.152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" 
Workload="localhost-k8s-coredns--674b8bbfcf--lmwk9-eth0" Nov 1 10:01:26.807357 containerd[1615]: 2025-11-01 10:01:26.520 [INFO][4481] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" HandleID="k8s-pod-network.152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" Workload="localhost-k8s-coredns--674b8bbfcf--lmwk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c70c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-lmwk9", "timestamp":"2025-11-01 10:01:26.519270977 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:01:26.807357 containerd[1615]: 2025-11-01 10:01:26.520 [INFO][4481] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:01:26.807357 containerd[1615]: 2025-11-01 10:01:26.687 [INFO][4481] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:01:26.807357 containerd[1615]: 2025-11-01 10:01:26.687 [INFO][4481] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:01:26.807357 containerd[1615]: 2025-11-01 10:01:26.718 [INFO][4481] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" host="localhost" Nov 1 10:01:26.807357 containerd[1615]: 2025-11-01 10:01:26.763 [INFO][4481] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:01:26.807357 containerd[1615]: 2025-11-01 10:01:26.768 [INFO][4481] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:01:26.807357 containerd[1615]: 2025-11-01 10:01:26.770 [INFO][4481] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:01:26.807357 containerd[1615]: 2025-11-01 10:01:26.774 [INFO][4481] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:01:26.807357 containerd[1615]: 2025-11-01 10:01:26.774 [INFO][4481] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" host="localhost" Nov 1 10:01:26.807598 containerd[1615]: 2025-11-01 10:01:26.776 [INFO][4481] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca Nov 1 10:01:26.807598 containerd[1615]: 2025-11-01 10:01:26.779 [INFO][4481] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" host="localhost" Nov 1 10:01:26.807598 containerd[1615]: 2025-11-01 10:01:26.786 [INFO][4481] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" host="localhost" Nov 1 10:01:26.807598 containerd[1615]: 2025-11-01 10:01:26.786 [INFO][4481] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" host="localhost" Nov 1 10:01:26.807598 containerd[1615]: 2025-11-01 10:01:26.786 [INFO][4481] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:01:26.807598 containerd[1615]: 2025-11-01 10:01:26.786 [INFO][4481] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" HandleID="k8s-pod-network.152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" Workload="localhost-k8s-coredns--674b8bbfcf--lmwk9-eth0" Nov 1 10:01:26.807717 containerd[1615]: 2025-11-01 10:01:26.789 [INFO][4413] cni-plugin/k8s.go 418: Populated endpoint ContainerID="152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" Namespace="kube-system" Pod="coredns-674b8bbfcf-lmwk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lmwk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lmwk9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2ebaf64b-4b6c-45ec-b276-6e536781a90d", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 0, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-lmwk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9842ffa20c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:01:26.807780 containerd[1615]: 2025-11-01 10:01:26.790 [INFO][4413] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" Namespace="kube-system" Pod="coredns-674b8bbfcf-lmwk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lmwk9-eth0" Nov 1 10:01:26.807780 containerd[1615]: 2025-11-01 10:01:26.790 [INFO][4413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9842ffa20c ContainerID="152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" Namespace="kube-system" Pod="coredns-674b8bbfcf-lmwk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lmwk9-eth0" Nov 1 10:01:26.807780 containerd[1615]: 2025-11-01 10:01:26.794 [INFO][4413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" Namespace="kube-system" Pod="coredns-674b8bbfcf-lmwk9" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lmwk9-eth0" Nov 1 10:01:26.807847 containerd[1615]: 2025-11-01 10:01:26.794 [INFO][4413] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" Namespace="kube-system" Pod="coredns-674b8bbfcf-lmwk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lmwk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lmwk9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2ebaf64b-4b6c-45ec-b276-6e536781a90d", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 0, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca", Pod:"coredns-674b8bbfcf-lmwk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9842ffa20c", MAC:"d6:ff:0e:be:bb:6e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:01:26.807847 containerd[1615]: 2025-11-01 10:01:26.804 [INFO][4413] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" Namespace="kube-system" Pod="coredns-674b8bbfcf-lmwk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lmwk9-eth0" Nov 1 10:01:26.830837 containerd[1615]: time="2025-11-01T10:01:26.830780963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f4874cbd-bgwxl,Uid:84d5906d-6e10-419b-a2c5-f35ab2809acd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"eddbe217bc77d46181026c34923a070052f92d085f6468851bbaa61b44b3ffd3\"" Nov 1 10:01:26.835998 containerd[1615]: time="2025-11-01T10:01:26.835945571Z" level=info msg="connecting to shim 152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca" address="unix:///run/containerd/s/7e554cff59dc73712da3cba3561a1dd7a76525f13cd6808065d277ff384c98a7" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:01:26.860562 systemd[1]: Started cri-containerd-152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca.scope - libcontainer container 152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca. 
Nov 1 10:01:26.874542 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:01:26.904559 containerd[1615]: time="2025-11-01T10:01:26.904420667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lmwk9,Uid:2ebaf64b-4b6c-45ec-b276-6e536781a90d,Namespace:kube-system,Attempt:0,} returns sandbox id \"152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca\"" Nov 1 10:01:26.906258 kubelet[2779]: E1101 10:01:26.906215 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:26.911146 containerd[1615]: time="2025-11-01T10:01:26.911096012Z" level=info msg="CreateContainer within sandbox \"152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 10:01:26.928139 containerd[1615]: time="2025-11-01T10:01:26.928090282Z" level=info msg="Container 12c9b4d1d8e4ef28765242c639a3e15668687a3fcce2a7ddfa4dd7b439af9f57: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:01:26.934089 containerd[1615]: time="2025-11-01T10:01:26.934058158Z" level=info msg="CreateContainer within sandbox \"152da9b9206ceae69046bbb45a022424f448951fe9b02e93894f6eba9fc62fca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"12c9b4d1d8e4ef28765242c639a3e15668687a3fcce2a7ddfa4dd7b439af9f57\"" Nov 1 10:01:26.935504 containerd[1615]: time="2025-11-01T10:01:26.935458597Z" level=info msg="StartContainer for \"12c9b4d1d8e4ef28765242c639a3e15668687a3fcce2a7ddfa4dd7b439af9f57\"" Nov 1 10:01:26.946987 containerd[1615]: time="2025-11-01T10:01:26.946960165Z" level=info msg="connecting to shim 12c9b4d1d8e4ef28765242c639a3e15668687a3fcce2a7ddfa4dd7b439af9f57" address="unix:///run/containerd/s/7e554cff59dc73712da3cba3561a1dd7a76525f13cd6808065d277ff384c98a7" protocol=ttrpc version=3 Nov 1 
10:01:26.970540 systemd[1]: Started cri-containerd-12c9b4d1d8e4ef28765242c639a3e15668687a3fcce2a7ddfa4dd7b439af9f57.scope - libcontainer container 12c9b4d1d8e4ef28765242c639a3e15668687a3fcce2a7ddfa4dd7b439af9f57. Nov 1 10:01:27.014872 containerd[1615]: time="2025-11-01T10:01:27.014821968Z" level=info msg="StartContainer for \"12c9b4d1d8e4ef28765242c639a3e15668687a3fcce2a7ddfa4dd7b439af9f57\" returns successfully" Nov 1 10:01:27.098980 containerd[1615]: time="2025-11-01T10:01:27.098926559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:01:27.101293 containerd[1615]: time="2025-11-01T10:01:27.101241285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 10:01:27.101355 containerd[1615]: time="2025-11-01T10:01:27.101300486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 1 10:01:27.101880 kubelet[2779]: E1101 10:01:27.101559 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:01:27.101880 kubelet[2779]: E1101 10:01:27.101622 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:01:27.102256 kubelet[2779]: E1101 10:01:27.102164 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2r8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fgvv4_calico-system(c846e0de-56ff-40b3-829b-1fda67e4a78f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Nov 1 10:01:27.102562 containerd[1615]: time="2025-11-01T10:01:27.102487966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:01:27.235414 systemd[1]: Started sshd@8-10.0.0.25:22-10.0.0.1:57008.service - OpenSSH per-connection server daemon (10.0.0.1:57008). Nov 1 10:01:27.315043 sshd[4689]: Accepted publickey for core from 10.0.0.1 port 57008 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:01:27.317092 sshd-session[4689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:01:27.322754 systemd-logind[1588]: New session 9 of user core. Nov 1 10:01:27.329705 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 10:01:27.387895 containerd[1615]: time="2025-11-01T10:01:27.387670986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f4874cbd-sm9hg,Uid:380b8d89-6dd2-41d8-9c8c-26a95df82b99,Namespace:calico-apiserver,Attempt:0,}" Nov 1 10:01:27.391142 containerd[1615]: time="2025-11-01T10:01:27.391035202Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:01:27.393549 containerd[1615]: time="2025-11-01T10:01:27.393482217Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:01:27.393857 containerd[1615]: time="2025-11-01T10:01:27.393809751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:01:27.394214 kubelet[2779]: E1101 10:01:27.394168 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:01:27.394266 kubelet[2779]: E1101 10:01:27.394222 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:01:27.394614 kubelet[2779]: E1101 10:01:27.394568 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzmhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65f4874cbd-bgwxl_calico-apiserver(84d5906d-6e10-419b-a2c5-f35ab2809acd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:01:27.395687 kubelet[2779]: E1101 10:01:27.395661 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f4874cbd-bgwxl" podUID="84d5906d-6e10-419b-a2c5-f35ab2809acd" Nov 1 10:01:27.395945 containerd[1615]: time="2025-11-01T10:01:27.395922529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 10:01:27.448912 sshd[4693]: Connection closed by 10.0.0.1 port 57008 Nov 1 10:01:27.449747 sshd-session[4689]: pam_unix(sshd:session): session 
closed for user core Nov 1 10:01:27.454463 systemd[1]: sshd@8-10.0.0.25:22-10.0.0.1:57008.service: Deactivated successfully. Nov 1 10:01:27.454912 systemd-logind[1588]: Session 9 logged out. Waiting for processes to exit. Nov 1 10:01:27.458070 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 10:01:27.461324 systemd-logind[1588]: Removed session 9. Nov 1 10:01:27.517571 systemd-networkd[1513]: cali2ad5322b767: Link UP Nov 1 10:01:27.519269 kubelet[2779]: E1101 10:01:27.518598 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f4874cbd-bgwxl" podUID="84d5906d-6e10-419b-a2c5-f35ab2809acd" Nov 1 10:01:27.521754 systemd-networkd[1513]: cali2ad5322b767: Gained carrier Nov 1 10:01:27.523602 kubelet[2779]: E1101 10:01:27.523553 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:27.526954 kubelet[2779]: E1101 10:01:27.526871 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p67f4" podUID="93ee64e5-e5b1-4b2f-98fd-8f1562d11954" Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.418 [INFO][4704] cni-plugin/utils.go 100: 
File /var/lib/calico/mtu does not exist Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.431 [INFO][4704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--65f4874cbd--sm9hg-eth0 calico-apiserver-65f4874cbd- calico-apiserver 380b8d89-6dd2-41d8-9c8c-26a95df82b99 900 0 2025-11-01 10:00:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65f4874cbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-65f4874cbd-sm9hg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2ad5322b767 [] [] }} ContainerID="8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" Namespace="calico-apiserver" Pod="calico-apiserver-65f4874cbd-sm9hg" WorkloadEndpoint="localhost-k8s-calico--apiserver--65f4874cbd--sm9hg-" Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.431 [INFO][4704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" Namespace="calico-apiserver" Pod="calico-apiserver-65f4874cbd-sm9hg" WorkloadEndpoint="localhost-k8s-calico--apiserver--65f4874cbd--sm9hg-eth0" Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.462 [INFO][4720] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" HandleID="k8s-pod-network.8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" Workload="localhost-k8s-calico--apiserver--65f4874cbd--sm9hg-eth0" Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.462 [INFO][4720] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" 
HandleID="k8s-pod-network.8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" Workload="localhost-k8s-calico--apiserver--65f4874cbd--sm9hg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c1540), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-65f4874cbd-sm9hg", "timestamp":"2025-11-01 10:01:27.462002618 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.462 [INFO][4720] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.462 [INFO][4720] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.462 [INFO][4720] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.470 [INFO][4720] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" host="localhost" Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.474 [INFO][4720] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.481 [INFO][4720] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.483 [INFO][4720] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.485 [INFO][4720] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 
10:01:27.485 [INFO][4720] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" host="localhost" Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.487 [INFO][4720] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.491 [INFO][4720] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" host="localhost" Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.509 [INFO][4720] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" host="localhost" Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.509 [INFO][4720] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" host="localhost" Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.509 [INFO][4720] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 10:01:27.672354 containerd[1615]: 2025-11-01 10:01:27.509 [INFO][4720] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" HandleID="k8s-pod-network.8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" Workload="localhost-k8s-calico--apiserver--65f4874cbd--sm9hg-eth0" Nov 1 10:01:27.673153 containerd[1615]: 2025-11-01 10:01:27.513 [INFO][4704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" Namespace="calico-apiserver" Pod="calico-apiserver-65f4874cbd-sm9hg" WorkloadEndpoint="localhost-k8s-calico--apiserver--65f4874cbd--sm9hg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65f4874cbd--sm9hg-eth0", GenerateName:"calico-apiserver-65f4874cbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"380b8d89-6dd2-41d8-9c8c-26a95df82b99", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 0, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f4874cbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-65f4874cbd-sm9hg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ad5322b767", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:01:27.673153 containerd[1615]: 2025-11-01 10:01:27.513 [INFO][4704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" Namespace="calico-apiserver" Pod="calico-apiserver-65f4874cbd-sm9hg" WorkloadEndpoint="localhost-k8s-calico--apiserver--65f4874cbd--sm9hg-eth0" Nov 1 10:01:27.673153 containerd[1615]: 2025-11-01 10:01:27.513 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ad5322b767 ContainerID="8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" Namespace="calico-apiserver" Pod="calico-apiserver-65f4874cbd-sm9hg" WorkloadEndpoint="localhost-k8s-calico--apiserver--65f4874cbd--sm9hg-eth0" Nov 1 10:01:27.673153 containerd[1615]: 2025-11-01 10:01:27.524 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" Namespace="calico-apiserver" Pod="calico-apiserver-65f4874cbd-sm9hg" WorkloadEndpoint="localhost-k8s-calico--apiserver--65f4874cbd--sm9hg-eth0" Nov 1 10:01:27.673153 containerd[1615]: 2025-11-01 10:01:27.526 [INFO][4704] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" Namespace="calico-apiserver" Pod="calico-apiserver-65f4874cbd-sm9hg" WorkloadEndpoint="localhost-k8s-calico--apiserver--65f4874cbd--sm9hg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65f4874cbd--sm9hg-eth0", 
GenerateName:"calico-apiserver-65f4874cbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"380b8d89-6dd2-41d8-9c8c-26a95df82b99", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 0, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f4874cbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b", Pod:"calico-apiserver-65f4874cbd-sm9hg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ad5322b767", MAC:"0a:92:b9:d9:10:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:01:27.673153 containerd[1615]: 2025-11-01 10:01:27.669 [INFO][4704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" Namespace="calico-apiserver" Pod="calico-apiserver-65f4874cbd-sm9hg" WorkloadEndpoint="localhost-k8s-calico--apiserver--65f4874cbd--sm9hg-eth0" Nov 1 10:01:27.726595 systemd-networkd[1513]: calibe3e0630ba8: Gained IPv6LL Nov 1 10:01:27.727475 systemd-networkd[1513]: calie41595f726d: Gained IPv6LL Nov 1 10:01:27.763286 containerd[1615]: time="2025-11-01T10:01:27.763201259Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Nov 1 10:01:27.853604 containerd[1615]: time="2025-11-01T10:01:27.853521160Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 10:01:27.853968 containerd[1615]: time="2025-11-01T10:01:27.853612442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 1 10:01:27.854124 kubelet[2779]: E1101 10:01:27.854080 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:01:27.854261 kubelet[2779]: E1101 10:01:27.854128 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:01:27.854348 kubelet[2779]: E1101 10:01:27.854306 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2r8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fgvv4_calico-system(c846e0de-56ff-40b3-829b-1fda67e4a78f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 10:01:27.854591 systemd-networkd[1513]: calif9842ffa20c: Gained IPv6LL Nov 1 10:01:27.855539 kubelet[2779]: E1101 10:01:27.855469 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fgvv4" podUID="c846e0de-56ff-40b3-829b-1fda67e4a78f" Nov 1 10:01:28.024200 kubelet[2779]: I1101 10:01:28.024133 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lmwk9" podStartSLOduration=42.024111221 podStartE2EDuration="42.024111221s" podCreationTimestamp="2025-11-01 10:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:01:28.023737299 +0000 UTC m=+47.741247043" watchObservedRunningTime="2025-11-01 10:01:28.024111221 +0000 UTC m=+47.741620965" Nov 1 10:01:28.089433 containerd[1615]: time="2025-11-01T10:01:28.087807857Z" level=info msg="connecting to shim 8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b" address="unix:///run/containerd/s/1cbb90f2093fef028fd302ee4385eb6c18c4fbda0430402280b2bc537ffcb047" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:01:28.124721 systemd[1]: Started cri-containerd-8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b.scope - libcontainer container 
8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b. Nov 1 10:01:28.140542 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:01:28.185612 containerd[1615]: time="2025-11-01T10:01:28.185565309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f4874cbd-sm9hg,Uid:380b8d89-6dd2-41d8-9c8c-26a95df82b99,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8ffa10ff3eb024c7d98e6b7f86b12019f549a2bf65ce2c25986428c988896e4b\"" Nov 1 10:01:28.188254 containerd[1615]: time="2025-11-01T10:01:28.188220063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:01:28.386258 kubelet[2779]: E1101 10:01:28.385851 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:28.386743 containerd[1615]: time="2025-11-01T10:01:28.386301276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zczfq,Uid:f28927fa-3258-46e8-940e-e151d8c21104,Namespace:kube-system,Attempt:0,}" Nov 1 10:01:28.386743 containerd[1615]: time="2025-11-01T10:01:28.386627098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cbc9dfd5f-2p8r7,Uid:3fb01811-b89e-4b02-a492-80496752165e,Namespace:calico-system,Attempt:0,}" Nov 1 10:01:28.516888 containerd[1615]: time="2025-11-01T10:01:28.516746869Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:01:28.519376 containerd[1615]: time="2025-11-01T10:01:28.518996802Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:01:28.520467 containerd[1615]: 
time="2025-11-01T10:01:28.519089487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:01:28.520867 kubelet[2779]: E1101 10:01:28.520755 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:01:28.520867 kubelet[2779]: E1101 10:01:28.520840 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:01:28.520999 systemd-networkd[1513]: cali7cb17d91dfc: Link UP Nov 1 10:01:28.521729 kubelet[2779]: E1101 10:01:28.521680 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfbgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65f4874cbd-sm9hg_calico-apiserver(380b8d89-6dd2-41d8-9c8c-26a95df82b99): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:01:28.522140 systemd-networkd[1513]: cali7cb17d91dfc: Gained carrier Nov 1 10:01:28.523007 kubelet[2779]: E1101 10:01:28.522965 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f4874cbd-sm9hg" podUID="380b8d89-6dd2-41d8-9c8c-26a95df82b99" Nov 1 10:01:28.534997 kubelet[2779]: E1101 10:01:28.534819 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f4874cbd-sm9hg" podUID="380b8d89-6dd2-41d8-9c8c-26a95df82b99" Nov 1 10:01:28.535777 kubelet[2779]: E1101 10:01:28.535739 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:28.539023 kubelet[2779]: E1101 10:01:28.538309 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f4874cbd-bgwxl" podUID="84d5906d-6e10-419b-a2c5-f35ab2809acd" Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.426 [INFO][4813] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.437 [INFO][4813] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6cbc9dfd5f--2p8r7-eth0 calico-kube-controllers-6cbc9dfd5f- calico-system 3fb01811-b89e-4b02-a492-80496752165e 895 0 2025-11-01 10:01:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6cbc9dfd5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6cbc9dfd5f-2p8r7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7cb17d91dfc [] [] }} ContainerID="cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" Namespace="calico-system" Pod="calico-kube-controllers-6cbc9dfd5f-2p8r7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cbc9dfd5f--2p8r7-" Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.438 [INFO][4813] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" Namespace="calico-system" Pod="calico-kube-controllers-6cbc9dfd5f-2p8r7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cbc9dfd5f--2p8r7-eth0" Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.469 [INFO][4841] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" 
HandleID="k8s-pod-network.cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" Workload="localhost-k8s-calico--kube--controllers--6cbc9dfd5f--2p8r7-eth0" Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.469 [INFO][4841] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" HandleID="k8s-pod-network.cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" Workload="localhost-k8s-calico--kube--controllers--6cbc9dfd5f--2p8r7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6cbc9dfd5f-2p8r7", "timestamp":"2025-11-01 10:01:28.469608369 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.469 [INFO][4841] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.469 [INFO][4841] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.470 [INFO][4841] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.478 [INFO][4841] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" host="localhost" Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.484 [INFO][4841] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.490 [INFO][4841] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.492 [INFO][4841] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.496 [INFO][4841] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.496 [INFO][4841] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" host="localhost" Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.497 [INFO][4841] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.503 [INFO][4841] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" host="localhost" Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.511 [INFO][4841] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" host="localhost" Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.511 [INFO][4841] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" host="localhost" Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.511 [INFO][4841] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:01:28.541885 containerd[1615]: 2025-11-01 10:01:28.511 [INFO][4841] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" HandleID="k8s-pod-network.cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" Workload="localhost-k8s-calico--kube--controllers--6cbc9dfd5f--2p8r7-eth0" Nov 1 10:01:28.543570 containerd[1615]: 2025-11-01 10:01:28.515 [INFO][4813] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" Namespace="calico-system" Pod="calico-kube-controllers-6cbc9dfd5f-2p8r7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cbc9dfd5f--2p8r7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cbc9dfd5f--2p8r7-eth0", GenerateName:"calico-kube-controllers-6cbc9dfd5f-", Namespace:"calico-system", SelfLink:"", UID:"3fb01811-b89e-4b02-a492-80496752165e", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 1, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cbc9dfd5f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6cbc9dfd5f-2p8r7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7cb17d91dfc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:01:28.543570 containerd[1615]: 2025-11-01 10:01:28.515 [INFO][4813] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" Namespace="calico-system" Pod="calico-kube-controllers-6cbc9dfd5f-2p8r7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cbc9dfd5f--2p8r7-eth0" Nov 1 10:01:28.543570 containerd[1615]: 2025-11-01 10:01:28.515 [INFO][4813] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cb17d91dfc ContainerID="cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" Namespace="calico-system" Pod="calico-kube-controllers-6cbc9dfd5f-2p8r7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cbc9dfd5f--2p8r7-eth0" Nov 1 10:01:28.543570 containerd[1615]: 2025-11-01 10:01:28.522 [INFO][4813] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" Namespace="calico-system" Pod="calico-kube-controllers-6cbc9dfd5f-2p8r7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cbc9dfd5f--2p8r7-eth0" Nov 1 10:01:28.543570 containerd[1615]: 2025-11-01 
10:01:28.523 [INFO][4813] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" Namespace="calico-system" Pod="calico-kube-controllers-6cbc9dfd5f-2p8r7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cbc9dfd5f--2p8r7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cbc9dfd5f--2p8r7-eth0", GenerateName:"calico-kube-controllers-6cbc9dfd5f-", Namespace:"calico-system", SelfLink:"", UID:"3fb01811-b89e-4b02-a492-80496752165e", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 1, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cbc9dfd5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c", Pod:"calico-kube-controllers-6cbc9dfd5f-2p8r7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7cb17d91dfc", MAC:"66:48:73:49:20:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:01:28.543570 containerd[1615]: 2025-11-01 
10:01:28.537 [INFO][4813] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" Namespace="calico-system" Pod="calico-kube-controllers-6cbc9dfd5f-2p8r7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cbc9dfd5f--2p8r7-eth0" Nov 1 10:01:28.546013 kubelet[2779]: E1101 10:01:28.545367 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fgvv4" podUID="c846e0de-56ff-40b3-829b-1fda67e4a78f" Nov 1 10:01:28.602902 containerd[1615]: time="2025-11-01T10:01:28.602841354Z" level=info msg="connecting to shim cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c" address="unix:///run/containerd/s/815e16aab5eca85b07775b7891e89e243e401c29d009b852abe54514a81012f7" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:01:28.637668 systemd[1]: Started cri-containerd-cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c.scope - libcontainer container cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c. 
Nov 1 10:01:28.656088 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:01:28.701225 containerd[1615]: time="2025-11-01T10:01:28.701164078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cbc9dfd5f-2p8r7,Uid:3fb01811-b89e-4b02-a492-80496752165e,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc47cf930b33a9d3a783175302fcfb5a5a8ebcace72c794a9d0f6b3840f42a1c\"" Nov 1 10:01:28.706649 containerd[1615]: time="2025-11-01T10:01:28.706561201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 10:01:28.769425 systemd-networkd[1513]: cali68215ff0b6d: Link UP Nov 1 10:01:28.770768 systemd-networkd[1513]: cali68215ff0b6d: Gained carrier Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.418 [INFO][4812] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.435 [INFO][4812] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--zczfq-eth0 coredns-674b8bbfcf- kube-system f28927fa-3258-46e8-940e-e151d8c21104 897 0 2025-11-01 10:00:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-zczfq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali68215ff0b6d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" Namespace="kube-system" Pod="coredns-674b8bbfcf-zczfq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zczfq-" Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.435 [INFO][4812] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" Namespace="kube-system" Pod="coredns-674b8bbfcf-zczfq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zczfq-eth0" Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.469 [INFO][4839] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" HandleID="k8s-pod-network.024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" Workload="localhost-k8s-coredns--674b8bbfcf--zczfq-eth0" Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.470 [INFO][4839] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" HandleID="k8s-pod-network.024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" Workload="localhost-k8s-coredns--674b8bbfcf--zczfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000528b00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-zczfq", "timestamp":"2025-11-01 10:01:28.469803966 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.470 [INFO][4839] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.511 [INFO][4839] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.512 [INFO][4839] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.582 [INFO][4839] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" host="localhost" Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.707 [INFO][4839] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.713 [INFO][4839] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.715 [INFO][4839] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.719 [INFO][4839] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.719 [INFO][4839] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" host="localhost" Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.722 [INFO][4839] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185 Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.749 [INFO][4839] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" host="localhost" Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.759 [INFO][4839] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" host="localhost" Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.759 [INFO][4839] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" host="localhost" Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.759 [INFO][4839] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:01:28.791184 containerd[1615]: 2025-11-01 10:01:28.759 [INFO][4839] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" HandleID="k8s-pod-network.024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" Workload="localhost-k8s-coredns--674b8bbfcf--zczfq-eth0" Nov 1 10:01:28.791840 containerd[1615]: 2025-11-01 10:01:28.765 [INFO][4812] cni-plugin/k8s.go 418: Populated endpoint ContainerID="024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" Namespace="kube-system" Pod="coredns-674b8bbfcf-zczfq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zczfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zczfq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f28927fa-3258-46e8-940e-e151d8c21104", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 0, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-zczfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68215ff0b6d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:01:28.791840 containerd[1615]: 2025-11-01 10:01:28.765 [INFO][4812] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" Namespace="kube-system" Pod="coredns-674b8bbfcf-zczfq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zczfq-eth0" Nov 1 10:01:28.791840 containerd[1615]: 2025-11-01 10:01:28.765 [INFO][4812] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68215ff0b6d ContainerID="024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" Namespace="kube-system" Pod="coredns-674b8bbfcf-zczfq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zczfq-eth0" Nov 1 10:01:28.791840 containerd[1615]: 2025-11-01 10:01:28.771 [INFO][4812] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" Namespace="kube-system" Pod="coredns-674b8bbfcf-zczfq" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zczfq-eth0" Nov 1 10:01:28.791840 containerd[1615]: 2025-11-01 10:01:28.775 [INFO][4812] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" Namespace="kube-system" Pod="coredns-674b8bbfcf-zczfq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zczfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zczfq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f28927fa-3258-46e8-940e-e151d8c21104", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 0, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185", Pod:"coredns-674b8bbfcf-zczfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68215ff0b6d", MAC:"26:e5:25:aa:bb:f7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:01:28.791840 containerd[1615]: 2025-11-01 10:01:28.786 [INFO][4812] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" Namespace="kube-system" Pod="coredns-674b8bbfcf-zczfq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zczfq-eth0" Nov 1 10:01:28.819329 containerd[1615]: time="2025-11-01T10:01:28.819274792Z" level=info msg="connecting to shim 024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185" address="unix:///run/containerd/s/90663c545b74e981131132fe2bd00c35f8d1febb02b84f8b6d534675169deef1" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:01:28.853642 systemd[1]: Started cri-containerd-024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185.scope - libcontainer container 024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185. 
Nov 1 10:01:28.873615 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:01:28.922735 containerd[1615]: time="2025-11-01T10:01:28.921879696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zczfq,Uid:f28927fa-3258-46e8-940e-e151d8c21104,Namespace:kube-system,Attempt:0,} returns sandbox id \"024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185\"" Nov 1 10:01:28.924637 kubelet[2779]: E1101 10:01:28.924599 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:28.931506 containerd[1615]: time="2025-11-01T10:01:28.931441397Z" level=info msg="CreateContainer within sandbox \"024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 10:01:28.943787 containerd[1615]: time="2025-11-01T10:01:28.943732974Z" level=info msg="Container 59d1a632df8355c54597a7cc9e5c48658f3cbae5617d7199b8d17882e63d9562: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:01:28.951411 containerd[1615]: time="2025-11-01T10:01:28.951346037Z" level=info msg="CreateContainer within sandbox \"024dcdd8fcf838987671044a1c1992107d0fa6a1fde167dcd9811c2a84a08185\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59d1a632df8355c54597a7cc9e5c48658f3cbae5617d7199b8d17882e63d9562\"" Nov 1 10:01:28.953407 containerd[1615]: time="2025-11-01T10:01:28.952260724Z" level=info msg="StartContainer for \"59d1a632df8355c54597a7cc9e5c48658f3cbae5617d7199b8d17882e63d9562\"" Nov 1 10:01:28.953407 containerd[1615]: time="2025-11-01T10:01:28.953352193Z" level=info msg="connecting to shim 59d1a632df8355c54597a7cc9e5c48658f3cbae5617d7199b8d17882e63d9562" address="unix:///run/containerd/s/90663c545b74e981131132fe2bd00c35f8d1febb02b84f8b6d534675169deef1" protocol=ttrpc version=3 Nov 1 
10:01:28.984907 systemd[1]: Started cri-containerd-59d1a632df8355c54597a7cc9e5c48658f3cbae5617d7199b8d17882e63d9562.scope - libcontainer container 59d1a632df8355c54597a7cc9e5c48658f3cbae5617d7199b8d17882e63d9562. Nov 1 10:01:29.006740 systemd-networkd[1513]: cali2ad5322b767: Gained IPv6LL Nov 1 10:01:29.024139 containerd[1615]: time="2025-11-01T10:01:29.024088052Z" level=info msg="StartContainer for \"59d1a632df8355c54597a7cc9e5c48658f3cbae5617d7199b8d17882e63d9562\" returns successfully" Nov 1 10:01:29.057632 containerd[1615]: time="2025-11-01T10:01:29.057568204Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:01:29.059036 containerd[1615]: time="2025-11-01T10:01:29.058959897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 10:01:29.059126 containerd[1615]: time="2025-11-01T10:01:29.058968353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 1 10:01:29.059373 kubelet[2779]: E1101 10:01:29.059316 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 10:01:29.059455 kubelet[2779]: E1101 10:01:29.059434 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 10:01:29.059810 
kubelet[2779]: E1101 10:01:29.059667 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xznvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6cbc9dfd5f-2p8r7_calico-system(3fb01811-b89e-4b02-a492-80496752165e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 10:01:29.061140 kubelet[2779]: E1101 10:01:29.061090 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cbc9dfd5f-2p8r7" podUID="3fb01811-b89e-4b02-a492-80496752165e" Nov 1 10:01:29.542832 kubelet[2779]: E1101 10:01:29.542781 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:29.544181 kubelet[2779]: E1101 10:01:29.544125 2779 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f4874cbd-sm9hg" podUID="380b8d89-6dd2-41d8-9c8c-26a95df82b99" Nov 1 10:01:29.557769 kubelet[2779]: E1101 10:01:29.557706 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cbc9dfd5f-2p8r7" podUID="3fb01811-b89e-4b02-a492-80496752165e" Nov 1 10:01:29.646671 systemd-networkd[1513]: cali7cb17d91dfc: Gained IPv6LL Nov 1 10:01:29.902561 systemd-networkd[1513]: cali68215ff0b6d: Gained IPv6LL Nov 1 10:01:29.920007 kubelet[2779]: I1101 10:01:29.919930 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zczfq" podStartSLOduration=43.919906009 podStartE2EDuration="43.919906009s" podCreationTimestamp="2025-11-01 10:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:01:29.918213321 +0000 UTC m=+49.635723086" watchObservedRunningTime="2025-11-01 10:01:29.919906009 +0000 UTC m=+49.637415753" Nov 1 10:01:30.544833 kubelet[2779]: E1101 10:01:30.544782 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:30.545699 kubelet[2779]: E1101 10:01:30.545634 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cbc9dfd5f-2p8r7" podUID="3fb01811-b89e-4b02-a492-80496752165e" Nov 1 10:01:30.793204 kubelet[2779]: I1101 10:01:30.793147 2779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 10:01:30.793638 kubelet[2779]: E1101 10:01:30.793542 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:31.546819 kubelet[2779]: E1101 10:01:31.546772 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:31.547455 kubelet[2779]: E1101 10:01:31.546988 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:01:31.855260 systemd-networkd[1513]: vxlan.calico: Link UP Nov 1 10:01:31.855275 systemd-networkd[1513]: vxlan.calico: Gained carrier Nov 1 10:01:32.464556 systemd[1]: Started sshd@9-10.0.0.25:22-10.0.0.1:36488.service - OpenSSH per-connection server daemon (10.0.0.1:36488). 
Nov 1 10:01:32.536456 sshd[5194]: Accepted publickey for core from 10.0.0.1 port 36488 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:01:32.538829 sshd-session[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:01:32.544243 systemd-logind[1588]: New session 10 of user core. Nov 1 10:01:32.552613 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 10:01:32.846962 sshd[5197]: Connection closed by 10.0.0.1 port 36488 Nov 1 10:01:32.847343 sshd-session[5194]: pam_unix(sshd:session): session closed for user core Nov 1 10:01:32.853048 systemd[1]: sshd@9-10.0.0.25:22-10.0.0.1:36488.service: Deactivated successfully. Nov 1 10:01:32.855308 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 10:01:32.860518 systemd-logind[1588]: Session 10 logged out. Waiting for processes to exit. Nov 1 10:01:32.862048 systemd-logind[1588]: Removed session 10. Nov 1 10:01:33.936709 systemd-networkd[1513]: vxlan.calico: Gained IPv6LL Nov 1 10:01:36.387335 containerd[1615]: time="2025-11-01T10:01:36.387253276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 10:01:36.802905 containerd[1615]: time="2025-11-01T10:01:36.802731409Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:01:36.819076 containerd[1615]: time="2025-11-01T10:01:36.818971324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 10:01:36.819076 containerd[1615]: time="2025-11-01T10:01:36.819044010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 1 10:01:36.819396 kubelet[2779]: E1101 10:01:36.819332 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:01:36.819867 kubelet[2779]: E1101 10:01:36.819434 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:01:36.823548 kubelet[2779]: E1101 10:01:36.823492 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c1eb14a891d84fedb5e547d6cabc368b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fdl67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Terminat
ionMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74c9bcdf97-7n95m_calico-system(bbcb1281-6226-4685-891f-c7986a6aa61d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 10:01:36.825722 containerd[1615]: time="2025-11-01T10:01:36.825483745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 10:01:37.134339 containerd[1615]: time="2025-11-01T10:01:37.134268929Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:01:37.142751 containerd[1615]: time="2025-11-01T10:01:37.142685673Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 10:01:37.142893 containerd[1615]: time="2025-11-01T10:01:37.142781223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 1 10:01:37.143032 kubelet[2779]: E1101 10:01:37.142961 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:01:37.143096 kubelet[2779]: E1101 10:01:37.143039 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:01:37.143266 kubelet[2779]: E1101 10:01:37.143198 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdl67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74c9bcdf97-7n95m_calico-system(bbcb1281-6226-4685-891f-c7986a6aa61d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 10:01:37.144470 kubelet[2779]: E1101 10:01:37.144414 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74c9bcdf97-7n95m" podUID="bbcb1281-6226-4685-891f-c7986a6aa61d" Nov 1 10:01:37.859316 systemd[1]: Started sshd@10-10.0.0.25:22-10.0.0.1:36502.service - OpenSSH per-connection server daemon (10.0.0.1:36502). Nov 1 10:01:37.916687 sshd[5245]: Accepted publickey for core from 10.0.0.1 port 36502 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:01:37.918257 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:01:37.922568 systemd-logind[1588]: New session 11 of user core. Nov 1 10:01:37.929507 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 10:01:38.067088 sshd[5248]: Connection closed by 10.0.0.1 port 36502 Nov 1 10:01:38.067855 sshd-session[5245]: pam_unix(sshd:session): session closed for user core Nov 1 10:01:38.080219 systemd[1]: sshd@10-10.0.0.25:22-10.0.0.1:36502.service: Deactivated successfully. 
Nov 1 10:01:38.082186 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 10:01:38.082924 systemd-logind[1588]: Session 11 logged out. Waiting for processes to exit. Nov 1 10:01:38.085574 systemd[1]: Started sshd@11-10.0.0.25:22-10.0.0.1:36516.service - OpenSSH per-connection server daemon (10.0.0.1:36516). Nov 1 10:01:38.086220 systemd-logind[1588]: Removed session 11. Nov 1 10:01:38.143834 sshd[5262]: Accepted publickey for core from 10.0.0.1 port 36516 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:01:38.145009 sshd-session[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:01:38.149251 systemd-logind[1588]: New session 12 of user core. Nov 1 10:01:38.159498 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 10:01:38.278520 sshd[5266]: Connection closed by 10.0.0.1 port 36516 Nov 1 10:01:38.278903 sshd-session[5262]: pam_unix(sshd:session): session closed for user core Nov 1 10:01:38.289125 systemd[1]: sshd@11-10.0.0.25:22-10.0.0.1:36516.service: Deactivated successfully. Nov 1 10:01:38.290975 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 10:01:38.291924 systemd-logind[1588]: Session 12 logged out. Waiting for processes to exit. Nov 1 10:01:38.296957 systemd[1]: Started sshd@12-10.0.0.25:22-10.0.0.1:36532.service - OpenSSH per-connection server daemon (10.0.0.1:36532). Nov 1 10:01:38.297831 systemd-logind[1588]: Removed session 12. Nov 1 10:01:38.349566 sshd[5278]: Accepted publickey for core from 10.0.0.1 port 36532 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:01:38.351417 sshd-session[5278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:01:38.356415 systemd-logind[1588]: New session 13 of user core. Nov 1 10:01:38.367717 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 1 10:01:38.474978 sshd[5281]: Connection closed by 10.0.0.1 port 36532 Nov 1 10:01:38.475263 sshd-session[5278]: pam_unix(sshd:session): session closed for user core Nov 1 10:01:38.479968 systemd[1]: sshd@12-10.0.0.25:22-10.0.0.1:36532.service: Deactivated successfully. Nov 1 10:01:38.482285 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 10:01:38.483652 systemd-logind[1588]: Session 13 logged out. Waiting for processes to exit. Nov 1 10:01:38.484929 systemd-logind[1588]: Removed session 13. Nov 1 10:01:39.386159 containerd[1615]: time="2025-11-01T10:01:39.386109956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 10:01:39.703927 containerd[1615]: time="2025-11-01T10:01:39.703767912Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:01:39.704971 containerd[1615]: time="2025-11-01T10:01:39.704936043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 10:01:39.705138 containerd[1615]: time="2025-11-01T10:01:39.704969526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 1 10:01:39.705362 kubelet[2779]: E1101 10:01:39.705307 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:01:39.705796 kubelet[2779]: E1101 10:01:39.705370 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:01:39.705796 kubelet[2779]: E1101 10:01:39.705569 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2r8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-fgvv4_calico-system(c846e0de-56ff-40b3-829b-1fda67e4a78f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 10:01:39.707594 containerd[1615]: time="2025-11-01T10:01:39.707544197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 10:01:40.337008 containerd[1615]: time="2025-11-01T10:01:40.336936542Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:01:40.483977 containerd[1615]: time="2025-11-01T10:01:40.483908467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 10:01:40.483977 containerd[1615]: time="2025-11-01T10:01:40.483958631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 1 10:01:40.484611 kubelet[2779]: E1101 10:01:40.484176 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:01:40.484611 kubelet[2779]: E1101 10:01:40.484234 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:01:40.484611 
kubelet[2779]: E1101 10:01:40.484561 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2r8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-fgvv4_calico-system(c846e0de-56ff-40b3-829b-1fda67e4a78f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 10:01:40.484828 containerd[1615]: time="2025-11-01T10:01:40.484662382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 10:01:40.485829 kubelet[2779]: E1101 10:01:40.485781 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fgvv4" podUID="c846e0de-56ff-40b3-829b-1fda67e4a78f" Nov 1 10:01:40.865016 containerd[1615]: time="2025-11-01T10:01:40.864953115Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:01:40.866298 containerd[1615]: time="2025-11-01T10:01:40.866248375Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 10:01:40.866791 containerd[1615]: time="2025-11-01T10:01:40.866345688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 1 10:01:40.866839 kubelet[2779]: E1101 10:01:40.866498 2779 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 1 10:01:40.866839 kubelet[2779]: E1101 10:01:40.866558 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 1 10:01:40.866839 kubelet[2779]: E1101 10:01:40.866735 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathE
xpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkgn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p67f4_calico-system(93ee64e5-e5b1-4b2f-98fd-8f1562d11954): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 1 10:01:40.867942 kubelet[2779]: E1101 10:01:40.867893 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p67f4" podUID="93ee64e5-e5b1-4b2f-98fd-8f1562d11954"
Nov 1 10:01:41.387464 containerd[1615]: time="2025-11-01T10:01:41.387127153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 10:01:41.721455 containerd[1615]: time="2025-11-01T10:01:41.721282413Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 1 10:01:41.722938 containerd[1615]: time="2025-11-01T10:01:41.722883788Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 10:01:41.723047 containerd[1615]: time="2025-11-01T10:01:41.723006799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Nov 1 10:01:41.723251 kubelet[2779]: E1101 10:01:41.723195 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 10:01:41.723305 kubelet[2779]: E1101 10:01:41.723261 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 10:01:41.723509 kubelet[2779]: E1101 10:01:41.723451 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfbgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-65f4874cbd-sm9hg_calico-apiserver(380b8d89-6dd2-41d8-9c8c-26a95df82b99): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 10:01:41.724593 kubelet[2779]: E1101 10:01:41.724541 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f4874cbd-sm9hg" podUID="380b8d89-6dd2-41d8-9c8c-26a95df82b99"
Nov 1 10:01:42.386675 containerd[1615]: time="2025-11-01T10:01:42.386606596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 10:01:42.702255 containerd[1615]: time="2025-11-01T10:01:42.702078037Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 1 10:01:42.703898 containerd[1615]: time="2025-11-01T10:01:42.703829526Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 10:01:42.704013 containerd[1615]: time="2025-11-01T10:01:42.703870336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Nov 1 10:01:42.704246 kubelet[2779]: E1101 10:01:42.704189 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 10:01:42.704657 kubelet[2779]: E1101 10:01:42.704262 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 10:01:42.704657 kubelet[2779]: E1101 10:01:42.704497 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzmhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65f4874cbd-bgwxl_calico-apiserver(84d5906d-6e10-419b-a2c5-f35ab2809acd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 10:01:42.705928 kubelet[2779]: E1101 10:01:42.705705 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f4874cbd-bgwxl" podUID="84d5906d-6e10-419b-a2c5-f35ab2809acd"
Nov 1 10:01:43.490409 systemd[1]: Started sshd@13-10.0.0.25:22-10.0.0.1:46944.service - OpenSSH per-connection server daemon (10.0.0.1:46944). 
Nov 1 10:01:43.555028 sshd[5304]: Accepted publickey for core from 10.0.0.1 port 46944 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:01:43.557000 sshd-session[5304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:01:43.563166 systemd-logind[1588]: New session 14 of user core.
Nov 1 10:01:43.568580 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 1 10:01:43.650700 sshd[5307]: Connection closed by 10.0.0.1 port 46944
Nov 1 10:01:43.651060 sshd-session[5304]: pam_unix(sshd:session): session closed for user core
Nov 1 10:01:43.656487 systemd[1]: sshd@13-10.0.0.25:22-10.0.0.1:46944.service: Deactivated successfully.
Nov 1 10:01:43.659480 systemd[1]: session-14.scope: Deactivated successfully.
Nov 1 10:01:43.660379 systemd-logind[1588]: Session 14 logged out. Waiting for processes to exit.
Nov 1 10:01:43.662265 systemd-logind[1588]: Removed session 14.
Nov 1 10:01:45.392241 containerd[1615]: time="2025-11-01T10:01:45.392186602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 1 10:01:45.724408 containerd[1615]: time="2025-11-01T10:01:45.724192157Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 1 10:01:45.725653 containerd[1615]: time="2025-11-01T10:01:45.725594541Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 1 10:01:45.725714 containerd[1615]: time="2025-11-01T10:01:45.725676300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Nov 1 10:01:45.725903 kubelet[2779]: E1101 10:01:45.725858 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 1 10:01:45.726289 kubelet[2779]: E1101 10:01:45.725921 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 1 10:01:45.726289 kubelet[2779]: E1101 10:01:45.726099 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xznvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{Prob
eHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6cbc9dfd5f-2p8r7_calico-system(3fb01811-b89e-4b02-a492-80496752165e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 1 10:01:45.727416 kubelet[2779]: E1101 10:01:45.727333 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cbc9dfd5f-2p8r7" 
podUID="3fb01811-b89e-4b02-a492-80496752165e"
Nov 1 10:01:48.385342 kubelet[2779]: E1101 10:01:48.385266 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:01:48.664902 systemd[1]: Started sshd@14-10.0.0.25:22-10.0.0.1:46946.service - OpenSSH per-connection server daemon (10.0.0.1:46946).
Nov 1 10:01:48.729632 sshd[5328]: Accepted publickey for core from 10.0.0.1 port 46946 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:01:48.731659 sshd-session[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:01:48.736979 systemd-logind[1588]: New session 15 of user core.
Nov 1 10:01:48.744688 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 1 10:01:48.828863 sshd[5331]: Connection closed by 10.0.0.1 port 46946
Nov 1 10:01:48.829155 sshd-session[5328]: pam_unix(sshd:session): session closed for user core
Nov 1 10:01:48.834226 systemd[1]: sshd@14-10.0.0.25:22-10.0.0.1:46946.service: Deactivated successfully.
Nov 1 10:01:48.836800 systemd[1]: session-15.scope: Deactivated successfully.
Nov 1 10:01:48.837730 systemd-logind[1588]: Session 15 logged out. Waiting for processes to exit.
Nov 1 10:01:48.839481 systemd-logind[1588]: Removed session 15. 
Nov 1 10:01:51.389562 kubelet[2779]: E1101 10:01:51.388822 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74c9bcdf97-7n95m" podUID="bbcb1281-6226-4685-891f-c7986a6aa61d"
Nov 1 10:01:53.596339 kubelet[2779]: E1101 10:01:53.596301 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:01:53.843095 systemd[1]: Started sshd@15-10.0.0.25:22-10.0.0.1:52486.service - OpenSSH per-connection server daemon (10.0.0.1:52486).
Nov 1 10:01:53.901987 sshd[5379]: Accepted publickey for core from 10.0.0.1 port 52486 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:01:53.903787 sshd-session[5379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:01:53.908032 systemd-logind[1588]: New session 16 of user core.
Nov 1 10:01:53.921581 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 1 10:01:54.004873 sshd[5382]: Connection closed by 10.0.0.1 port 52486
Nov 1 10:01:54.005174 sshd-session[5379]: pam_unix(sshd:session): session closed for user core
Nov 1 10:01:54.010050 systemd[1]: sshd@15-10.0.0.25:22-10.0.0.1:52486.service: Deactivated successfully.
Nov 1 10:01:54.012131 systemd[1]: session-16.scope: Deactivated successfully.
Nov 1 10:01:54.012930 systemd-logind[1588]: Session 16 logged out. Waiting for processes to exit.
Nov 1 10:01:54.013969 systemd-logind[1588]: Removed session 16.
Nov 1 10:01:54.386422 kubelet[2779]: E1101 10:01:54.386262 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f4874cbd-bgwxl" podUID="84d5906d-6e10-419b-a2c5-f35ab2809acd"
Nov 1 10:01:54.386914 kubelet[2779]: E1101 10:01:54.386860 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-fgvv4" podUID="c846e0de-56ff-40b3-829b-1fda67e4a78f"
Nov 1 10:01:55.386311 kubelet[2779]: E1101 10:01:55.386235 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p67f4" podUID="93ee64e5-e5b1-4b2f-98fd-8f1562d11954"
Nov 1 10:01:57.386546 kubelet[2779]: E1101 10:01:57.386477 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f4874cbd-sm9hg" podUID="380b8d89-6dd2-41d8-9c8c-26a95df82b99"
Nov 1 10:01:58.385339 kubelet[2779]: E1101 10:01:58.385286 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:01:59.029210 systemd[1]: Started sshd@16-10.0.0.25:22-10.0.0.1:52502.service - OpenSSH per-connection server daemon (10.0.0.1:52502).
Nov 1 10:01:59.090754 sshd[5397]: Accepted publickey for core from 10.0.0.1 port 52502 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:01:59.092457 sshd-session[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:01:59.097155 systemd-logind[1588]: New session 17 of user core. 
Nov 1 10:01:59.103529 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 1 10:01:59.187180 sshd[5400]: Connection closed by 10.0.0.1 port 52502
Nov 1 10:01:59.187618 sshd-session[5397]: pam_unix(sshd:session): session closed for user core
Nov 1 10:01:59.202102 systemd[1]: sshd@16-10.0.0.25:22-10.0.0.1:52502.service: Deactivated successfully.
Nov 1 10:01:59.204254 systemd[1]: session-17.scope: Deactivated successfully.
Nov 1 10:01:59.205258 systemd-logind[1588]: Session 17 logged out. Waiting for processes to exit.
Nov 1 10:01:59.209276 systemd[1]: Started sshd@17-10.0.0.25:22-10.0.0.1:52510.service - OpenSSH per-connection server daemon (10.0.0.1:52510).
Nov 1 10:01:59.210376 systemd-logind[1588]: Removed session 17.
Nov 1 10:01:59.283758 sshd[5413]: Accepted publickey for core from 10.0.0.1 port 52510 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:01:59.285239 sshd-session[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:01:59.290171 systemd-logind[1588]: New session 18 of user core.
Nov 1 10:01:59.298527 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 1 10:01:59.395873 kubelet[2779]: E1101 10:01:59.395776 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:01:59.556175 sshd[5416]: Connection closed by 10.0.0.1 port 52510
Nov 1 10:01:59.556508 sshd-session[5413]: pam_unix(sshd:session): session closed for user core
Nov 1 10:01:59.566230 systemd[1]: sshd@17-10.0.0.25:22-10.0.0.1:52510.service: Deactivated successfully.
Nov 1 10:01:59.568317 systemd[1]: session-18.scope: Deactivated successfully.
Nov 1 10:01:59.569063 systemd-logind[1588]: Session 18 logged out. Waiting for processes to exit.
Nov 1 10:01:59.571994 systemd[1]: Started sshd@18-10.0.0.25:22-10.0.0.1:52512.service - OpenSSH per-connection server daemon (10.0.0.1:52512). 
Nov 1 10:01:59.572673 systemd-logind[1588]: Removed session 18.
Nov 1 10:01:59.629795 sshd[5429]: Accepted publickey for core from 10.0.0.1 port 52512 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:01:59.631456 sshd-session[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:01:59.636078 systemd-logind[1588]: New session 19 of user core.
Nov 1 10:01:59.648565 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 1 10:02:00.119435 sshd[5432]: Connection closed by 10.0.0.1 port 52512
Nov 1 10:02:00.121039 sshd-session[5429]: pam_unix(sshd:session): session closed for user core
Nov 1 10:02:00.132091 systemd[1]: sshd@18-10.0.0.25:22-10.0.0.1:52512.service: Deactivated successfully.
Nov 1 10:02:00.136421 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 10:02:00.138315 systemd-logind[1588]: Session 19 logged out. Waiting for processes to exit.
Nov 1 10:02:00.143885 systemd[1]: Started sshd@19-10.0.0.25:22-10.0.0.1:44680.service - OpenSSH per-connection server daemon (10.0.0.1:44680).
Nov 1 10:02:00.145934 systemd-logind[1588]: Removed session 19.
Nov 1 10:02:00.198018 sshd[5455]: Accepted publickey for core from 10.0.0.1 port 44680 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:02:00.199490 sshd-session[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:02:00.204008 systemd-logind[1588]: New session 20 of user core.
Nov 1 10:02:00.210503 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 1 10:02:00.374241 sshd[5458]: Connection closed by 10.0.0.1 port 44680
Nov 1 10:02:00.374414 sshd-session[5455]: pam_unix(sshd:session): session closed for user core
Nov 1 10:02:00.385695 systemd[1]: sshd@19-10.0.0.25:22-10.0.0.1:44680.service: Deactivated successfully. 
Nov 1 10:02:00.388832 kubelet[2779]: E1101 10:02:00.388001 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cbc9dfd5f-2p8r7" podUID="3fb01811-b89e-4b02-a492-80496752165e"
Nov 1 10:02:00.393176 systemd[1]: session-20.scope: Deactivated successfully.
Nov 1 10:02:00.394106 systemd-logind[1588]: Session 20 logged out. Waiting for processes to exit.
Nov 1 10:02:00.400712 systemd[1]: Started sshd@20-10.0.0.25:22-10.0.0.1:44696.service - OpenSSH per-connection server daemon (10.0.0.1:44696).
Nov 1 10:02:00.404239 systemd-logind[1588]: Removed session 20.
Nov 1 10:02:00.470531 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 44696 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:02:00.472068 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:02:00.476632 systemd-logind[1588]: New session 21 of user core.
Nov 1 10:02:00.481585 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 1 10:02:00.561006 sshd[5472]: Connection closed by 10.0.0.1 port 44696
Nov 1 10:02:00.561339 sshd-session[5469]: pam_unix(sshd:session): session closed for user core
Nov 1 10:02:00.566252 systemd[1]: sshd@20-10.0.0.25:22-10.0.0.1:44696.service: Deactivated successfully.
Nov 1 10:02:00.568310 systemd[1]: session-21.scope: Deactivated successfully.
Nov 1 10:02:00.569293 systemd-logind[1588]: Session 21 logged out. Waiting for processes to exit.
Nov 1 10:02:00.570414 systemd-logind[1588]: Removed session 21. 
Nov 1 10:02:04.388089 containerd[1615]: time="2025-11-01T10:02:04.388019144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 10:02:04.732484 containerd[1615]: time="2025-11-01T10:02:04.732312807Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:02:04.733701 containerd[1615]: time="2025-11-01T10:02:04.733658640Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 10:02:04.733768 containerd[1615]: time="2025-11-01T10:02:04.733716340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 1 10:02:04.733965 kubelet[2779]: E1101 10:02:04.733915 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:02:04.734353 kubelet[2779]: E1101 10:02:04.733979 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:02:04.734353 kubelet[2779]: E1101 10:02:04.734147 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c1eb14a891d84fedb5e547d6cabc368b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fdl67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74c9bcdf97-7n95m_calico-system(bbcb1281-6226-4685-891f-c7986a6aa61d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 10:02:04.737219 containerd[1615]: time="2025-11-01T10:02:04.736948058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 10:02:05.094392 containerd[1615]: 
time="2025-11-01T10:02:05.094352048Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:02:05.095565 containerd[1615]: time="2025-11-01T10:02:05.095525611Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 10:02:05.095689 containerd[1615]: time="2025-11-01T10:02:05.095632626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 1 10:02:05.095827 kubelet[2779]: E1101 10:02:05.095774 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:02:05.095885 kubelet[2779]: E1101 10:02:05.095831 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:02:05.096019 kubelet[2779]: E1101 10:02:05.095964 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdl67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74c9bcdf97-7n95m_calico-system(bbcb1281-6226-4685-891f-c7986a6aa61d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 10:02:05.097197 kubelet[2779]: E1101 10:02:05.097148 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74c9bcdf97-7n95m" podUID="bbcb1281-6226-4685-891f-c7986a6aa61d" Nov 1 10:02:05.387100 containerd[1615]: time="2025-11-01T10:02:05.386837520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 10:02:05.573461 systemd[1]: Started sshd@21-10.0.0.25:22-10.0.0.1:44702.service - OpenSSH per-connection server daemon (10.0.0.1:44702). Nov 1 10:02:05.645343 sshd[5487]: Accepted publickey for core from 10.0.0.1 port 44702 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:02:05.647219 sshd-session[5487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:02:05.652225 systemd-logind[1588]: New session 22 of user core. Nov 1 10:02:05.666529 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 1 10:02:05.737808 sshd[5490]: Connection closed by 10.0.0.1 port 44702 Nov 1 10:02:05.738115 sshd-session[5487]: pam_unix(sshd:session): session closed for user core Nov 1 10:02:05.742927 systemd[1]: sshd@21-10.0.0.25:22-10.0.0.1:44702.service: Deactivated successfully. Nov 1 10:02:05.745037 systemd[1]: session-22.scope: Deactivated successfully. 
Nov 1 10:02:05.745762 systemd-logind[1588]: Session 22 logged out. Waiting for processes to exit. Nov 1 10:02:05.746902 systemd-logind[1588]: Removed session 22. Nov 1 10:02:05.761626 containerd[1615]: time="2025-11-01T10:02:05.761578477Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:02:05.762811 containerd[1615]: time="2025-11-01T10:02:05.762777468Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 10:02:05.762877 containerd[1615]: time="2025-11-01T10:02:05.762857320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 1 10:02:05.763018 kubelet[2779]: E1101 10:02:05.762976 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:02:05.763259 kubelet[2779]: E1101 10:02:05.763030 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:02:05.763259 kubelet[2779]: E1101 10:02:05.763172 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2r8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fgvv4_calico-system(c846e0de-56ff-40b3-829b-1fda67e4a78f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Nov 1 10:02:05.765353 containerd[1615]: time="2025-11-01T10:02:05.765167945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 10:02:06.367314 containerd[1615]: time="2025-11-01T10:02:06.367252776Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:02:06.368535 containerd[1615]: time="2025-11-01T10:02:06.368482925Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 10:02:06.368535 containerd[1615]: time="2025-11-01T10:02:06.368514385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 1 10:02:06.368792 kubelet[2779]: E1101 10:02:06.368741 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:02:06.368849 kubelet[2779]: E1101 10:02:06.368798 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:02:06.369007 kubelet[2779]: E1101 10:02:06.368952 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2r8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fgvv4_calico-system(c846e0de-56ff-40b3-829b-1fda67e4a78f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 10:02:06.370160 kubelet[2779]: E1101 10:02:06.370128 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fgvv4" podUID="c846e0de-56ff-40b3-829b-1fda67e4a78f" Nov 1 10:02:06.386920 containerd[1615]: time="2025-11-01T10:02:06.386654760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:02:06.739408 containerd[1615]: time="2025-11-01T10:02:06.739252148Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:02:06.740514 containerd[1615]: time="2025-11-01T10:02:06.740469643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:02:06.740514 containerd[1615]: time="2025-11-01T10:02:06.740503889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:02:06.740763 kubelet[2779]: E1101 10:02:06.740712 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:02:06.740839 kubelet[2779]: E1101 10:02:06.740768 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:02:06.740950 kubelet[2779]: E1101 10:02:06.740905 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzmhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65f4874cbd-bgwxl_calico-apiserver(84d5906d-6e10-419b-a2c5-f35ab2809acd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:02:06.742093 kubelet[2779]: E1101 10:02:06.742053 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f4874cbd-bgwxl" podUID="84d5906d-6e10-419b-a2c5-f35ab2809acd" Nov 1 10:02:08.386433 containerd[1615]: time="2025-11-01T10:02:08.386149493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 10:02:08.699196 containerd[1615]: time="2025-11-01T10:02:08.699043862Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 
10:02:08.700358 containerd[1615]: time="2025-11-01T10:02:08.700283047Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 10:02:08.700358 containerd[1615]: time="2025-11-01T10:02:08.700319927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 1 10:02:08.700564 kubelet[2779]: E1101 10:02:08.700447 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:02:08.700564 kubelet[2779]: E1101 10:02:08.700485 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:02:08.700993 kubelet[2779]: E1101 10:02:08.700615 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkgn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p67f4_calico-system(93ee64e5-e5b1-4b2f-98fd-8f1562d11954): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 10:02:08.701815 kubelet[2779]: E1101 10:02:08.701789 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p67f4" podUID="93ee64e5-e5b1-4b2f-98fd-8f1562d11954" Nov 1 10:02:09.385372 kubelet[2779]: E1101 10:02:09.385325 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:02:10.757693 systemd[1]: Started sshd@22-10.0.0.25:22-10.0.0.1:34670.service - OpenSSH per-connection server daemon 
(10.0.0.1:34670). Nov 1 10:02:10.818111 sshd[5503]: Accepted publickey for core from 10.0.0.1 port 34670 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:02:10.819821 sshd-session[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:02:10.824505 systemd-logind[1588]: New session 23 of user core. Nov 1 10:02:10.838535 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 1 10:02:10.925538 sshd[5506]: Connection closed by 10.0.0.1 port 34670 Nov 1 10:02:10.925964 sshd-session[5503]: pam_unix(sshd:session): session closed for user core Nov 1 10:02:10.932013 systemd[1]: sshd@22-10.0.0.25:22-10.0.0.1:34670.service: Deactivated successfully. Nov 1 10:02:10.934509 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 10:02:10.935350 systemd-logind[1588]: Session 23 logged out. Waiting for processes to exit. Nov 1 10:02:10.936867 systemd-logind[1588]: Removed session 23. Nov 1 10:02:11.386689 containerd[1615]: time="2025-11-01T10:02:11.386628040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 10:02:11.743872 containerd[1615]: time="2025-11-01T10:02:11.743699528Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:02:11.745260 containerd[1615]: time="2025-11-01T10:02:11.745218424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 10:02:11.745321 containerd[1615]: time="2025-11-01T10:02:11.745257859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 1 10:02:11.745544 kubelet[2779]: E1101 10:02:11.745480 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 1 10:02:11.745951 kubelet[2779]: E1101 10:02:11.745544 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 1 10:02:11.745993 containerd[1615]: time="2025-11-01T10:02:11.745891506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 10:02:11.746108 kubelet[2779]: E1101 10:02:11.746027 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xznvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6cbc9dfd5f-2p8r7_calico-system(3fb01811-b89e-4b02-a492-80496752165e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 1 10:02:11.747468 kubelet[2779]: E1101 10:02:11.747428 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cbc9dfd5f-2p8r7" podUID="3fb01811-b89e-4b02-a492-80496752165e"
Nov 1 10:02:12.077124 containerd[1615]: time="2025-11-01T10:02:12.076958728Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 1 10:02:12.078290 containerd[1615]: time="2025-11-01T10:02:12.078245210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 10:02:12.078512 containerd[1615]: time="2025-11-01T10:02:12.078454098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Nov 1 10:02:12.078818 kubelet[2779]: E1101 10:02:12.078738 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 10:02:12.078886 kubelet[2779]: E1101 10:02:12.078831 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 10:02:12.079163 kubelet[2779]: E1101 10:02:12.079091 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfbgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65f4874cbd-sm9hg_calico-apiserver(380b8d89-6dd2-41d8-9c8c-26a95df82b99): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 10:02:12.080552 kubelet[2779]: E1101 10:02:12.080471 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65f4874cbd-sm9hg" podUID="380b8d89-6dd2-41d8-9c8c-26a95df82b99"
Nov 1 10:02:15.942500 systemd[1]: Started sshd@23-10.0.0.25:22-10.0.0.1:34686.service - OpenSSH per-connection server daemon (10.0.0.1:34686).
Nov 1 10:02:16.030062 sshd[5527]: Accepted publickey for core from 10.0.0.1 port 34686 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:02:16.031882 sshd-session[5527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:02:16.037142 systemd-logind[1588]: New session 24 of user core.
Nov 1 10:02:16.047517 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 1 10:02:16.141819 sshd[5530]: Connection closed by 10.0.0.1 port 34686
Nov 1 10:02:16.142109 sshd-session[5527]: pam_unix(sshd:session): session closed for user core
Nov 1 10:02:16.147238 systemd[1]: sshd@23-10.0.0.25:22-10.0.0.1:34686.service: Deactivated successfully.
Nov 1 10:02:16.149483 systemd[1]: session-24.scope: Deactivated successfully.
Nov 1 10:02:16.150338 systemd-logind[1588]: Session 24 logged out. Waiting for processes to exit.
Nov 1 10:02:16.151751 systemd-logind[1588]: Removed session 24.