Nov 1 10:03:47.336762 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Sat Nov 1 08:12:41 -00 2025
Nov 1 10:03:47.336807 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=91cbcb3658f876d239d31cc29b206c4e950f20e536a8e14bd58a23c6f0ecf128
Nov 1 10:03:47.336817 kernel: BIOS-provided physical RAM map:
Nov 1 10:03:47.336849 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 1 10:03:47.336857 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 1 10:03:47.336863 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 1 10:03:47.336872 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 1 10:03:47.336879 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 1 10:03:47.336897 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 1 10:03:47.336914 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 1 10:03:47.336931 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Nov 1 10:03:47.336965 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 1 10:03:47.336986 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 1 10:03:47.336994 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 1 10:03:47.337002 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 1 10:03:47.337010 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 1 10:03:47.337021 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 1 10:03:47.337028 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 1 10:03:47.337035 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 1 10:03:47.337043 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 1 10:03:47.337050 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 1 10:03:47.337058 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 1 10:03:47.337065 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 1 10:03:47.337072 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 10:03:47.337080 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 1 10:03:47.337087 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 1 10:03:47.337097 kernel: NX (Execute Disable) protection: active
Nov 1 10:03:47.337104 kernel: APIC: Static calls initialized
Nov 1 10:03:47.337111 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Nov 1 10:03:47.337119 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Nov 1 10:03:47.337126 kernel: extended physical RAM map:
Nov 1 10:03:47.337134 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 1 10:03:47.337141 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 1 10:03:47.337149 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 1 10:03:47.337156 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 1 10:03:47.337164 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 1 10:03:47.337171 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 1 10:03:47.337180 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 1 10:03:47.337188 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Nov 1 10:03:47.337195 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Nov 1 10:03:47.337206 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Nov 1 10:03:47.337216 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Nov 1 10:03:47.337224 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Nov 1 10:03:47.337232 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 1 10:03:47.337240 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 1 10:03:47.337247 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 1 10:03:47.337255 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 1 10:03:47.337263 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 1 10:03:47.337271 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 1 10:03:47.337279 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 1 10:03:47.337289 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 1 10:03:47.337297 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 1 10:03:47.337308 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 1 10:03:47.337321 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 1 10:03:47.337329 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 1 10:03:47.337337 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 10:03:47.337344 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 1 10:03:47.337359 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 1 10:03:47.337371 kernel: efi: EFI v2.7 by EDK II
Nov 1 10:03:47.337379 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Nov 1 10:03:47.337389 kernel: random: crng init done
Nov 1 10:03:47.337399 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Nov 1 10:03:47.337407 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Nov 1 10:03:47.337415 kernel: secureboot: Secure boot disabled
Nov 1 10:03:47.337423 kernel: SMBIOS 2.8 present.
Nov 1 10:03:47.337430 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Nov 1 10:03:47.337438 kernel: DMI: Memory slots populated: 1/1
Nov 1 10:03:47.337446 kernel: Hypervisor detected: KVM
Nov 1 10:03:47.337461 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 1 10:03:47.337476 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 10:03:47.337492 kernel: kvm-clock: using sched offset of 4672437734 cycles
Nov 1 10:03:47.337504 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 10:03:47.337527 kernel: tsc: Detected 2794.748 MHz processor
Nov 1 10:03:47.337539 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 10:03:47.337547 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 10:03:47.337556 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 1 10:03:47.337564 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 1 10:03:47.337576 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 10:03:47.337595 kernel: Using GB pages for direct mapping
Nov 1 10:03:47.337608 kernel: ACPI: Early table checksum verification disabled
Nov 1 10:03:47.337616 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 1 10:03:47.337627 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 1 10:03:47.337635 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:03:47.337644 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:03:47.337652 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 1 10:03:47.337661 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:03:47.337671 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:03:47.337679 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:03:47.337688 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:03:47.337697 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 1 10:03:47.337705 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 1 10:03:47.337713 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Nov 1 10:03:47.337721 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 1 10:03:47.337732 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 1 10:03:47.337739 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 1 10:03:47.337748 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 1 10:03:47.337756 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 1 10:03:47.337764 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 1 10:03:47.337772 kernel: No NUMA configuration found
Nov 1 10:03:47.337781 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Nov 1 10:03:47.337789 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Nov 1 10:03:47.337800 kernel: Zone ranges:
Nov 1 10:03:47.337808 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 10:03:47.337816 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Nov 1 10:03:47.337831 kernel: Normal empty
Nov 1 10:03:47.337839 kernel: Device empty
Nov 1 10:03:47.337848 kernel: Movable zone start for each node
Nov 1 10:03:47.337856 kernel: Early memory node ranges
Nov 1 10:03:47.337867 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 1 10:03:47.337875 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 1 10:03:47.337883 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 1 10:03:47.337891 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Nov 1 10:03:47.337901 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Nov 1 10:03:47.337912 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Nov 1 10:03:47.337922 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Nov 1 10:03:47.337932 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Nov 1 10:03:47.337950 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Nov 1 10:03:47.337976 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 10:03:47.337995 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 1 10:03:47.338006 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 1 10:03:47.338015 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 10:03:47.338023 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Nov 1 10:03:47.338032 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Nov 1 10:03:47.338041 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 1 10:03:47.338049 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Nov 1 10:03:47.338061 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Nov 1 10:03:47.338069 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 10:03:47.338078 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 10:03:47.338086 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 10:03:47.338097 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 10:03:47.338106 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 10:03:47.338114 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 10:03:47.338123 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 10:03:47.338131 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 10:03:47.338140 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 10:03:47.338157 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 10:03:47.338188 kernel: TSC deadline timer available
Nov 1 10:03:47.338204 kernel: CPU topo: Max. logical packages: 1
Nov 1 10:03:47.338213 kernel: CPU topo: Max. logical dies: 1
Nov 1 10:03:47.338232 kernel: CPU topo: Max. dies per package: 1
Nov 1 10:03:47.338248 kernel: CPU topo: Max. threads per core: 1
Nov 1 10:03:47.338429 kernel: CPU topo: Num. cores per package: 4
Nov 1 10:03:47.338441 kernel: CPU topo: Num. threads per package: 4
Nov 1 10:03:47.338450 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 1 10:03:47.338461 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 10:03:47.338470 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 1 10:03:47.338479 kernel: kvm-guest: setup PV sched yield
Nov 1 10:03:47.338490 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Nov 1 10:03:47.338507 kernel: Booting paravirtualized kernel on KVM
Nov 1 10:03:47.338524 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 10:03:47.338533 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 1 10:03:47.338544 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 1 10:03:47.338553 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 1 10:03:47.338561 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 1 10:03:47.338570 kernel: kvm-guest: PV spinlocks enabled
Nov 1 10:03:47.338578 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 10:03:47.338588 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=91cbcb3658f876d239d31cc29b206c4e950f20e536a8e14bd58a23c6f0ecf128
Nov 1 10:03:47.338597 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 10:03:47.338609 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 10:03:47.338621 kernel: Fallback order for Node 0: 0
Nov 1 10:03:47.338638 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Nov 1 10:03:47.338653 kernel: Policy zone: DMA32
Nov 1 10:03:47.338661 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 10:03:47.338670 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 1 10:03:47.338688 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 1 10:03:47.338705 kernel: ftrace: allocated 157 pages with 5 groups
Nov 1 10:03:47.338714 kernel: Dynamic Preempt: voluntary
Nov 1 10:03:47.338732 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 10:03:47.338747 kernel: rcu: RCU event tracing is enabled.
Nov 1 10:03:47.338756 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 1 10:03:47.338764 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 10:03:47.338773 kernel: Rude variant of Tasks RCU enabled.
Nov 1 10:03:47.338784 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 10:03:47.338792 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 10:03:47.338801 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 1 10:03:47.338812 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 10:03:47.338829 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 10:03:47.338840 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 10:03:47.338849 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 1 10:03:47.338860 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 10:03:47.338869 kernel: Console: colour dummy device 80x25
Nov 1 10:03:47.338877 kernel: printk: legacy console [ttyS0] enabled
Nov 1 10:03:47.338886 kernel: ACPI: Core revision 20240827
Nov 1 10:03:47.338894 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 10:03:47.338903 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 10:03:47.338911 kernel: x2apic enabled
Nov 1 10:03:47.338920 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 10:03:47.338931 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 1 10:03:47.338939 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 1 10:03:47.338948 kernel: kvm-guest: setup PV IPIs
Nov 1 10:03:47.338969 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 10:03:47.338978 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 1 10:03:47.338986 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 1 10:03:47.338998 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 10:03:47.339006 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 1 10:03:47.339020 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 1 10:03:47.339029 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 10:03:47.339038 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 10:03:47.339046 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 10:03:47.339063 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 1 10:03:47.339183 kernel: active return thunk: retbleed_return_thunk
Nov 1 10:03:47.339193 kernel: RETBleed: Mitigation: untrained return thunk
Nov 1 10:03:47.339202 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 10:03:47.339211 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 10:03:47.339220 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 1 10:03:47.339229 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 1 10:03:47.339238 kernel: active return thunk: srso_return_thunk
Nov 1 10:03:47.339252 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 1 10:03:47.339261 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 10:03:47.339270 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 10:03:47.339278 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 10:03:47.339287 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 10:03:47.339296 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 1 10:03:47.339304 kernel: Freeing SMP alternatives memory: 32K
Nov 1 10:03:47.339315 kernel: pid_max: default: 32768 minimum: 301
Nov 1 10:03:47.339323 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 1 10:03:47.339332 kernel: landlock: Up and running.
Nov 1 10:03:47.339340 kernel: SELinux: Initializing.
Nov 1 10:03:47.339349 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 10:03:47.339358 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 10:03:47.339366 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 1 10:03:47.339377 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 1 10:03:47.339385 kernel: ... version: 0
Nov 1 10:03:47.339394 kernel: ... bit width: 48
Nov 1 10:03:47.339402 kernel: ... generic registers: 6
Nov 1 10:03:47.339411 kernel: ... value mask: 0000ffffffffffff
Nov 1 10:03:47.339419 kernel: ... max period: 00007fffffffffff
Nov 1 10:03:47.339428 kernel: ... fixed-purpose events: 0
Nov 1 10:03:47.339436 kernel: ... event mask: 000000000000003f
Nov 1 10:03:47.339458 kernel: signal: max sigframe size: 1776
Nov 1 10:03:47.339470 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 10:03:47.339479 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 10:03:47.339490 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 1 10:03:47.339499 kernel: smp: Bringing up secondary CPUs ...
Nov 1 10:03:47.339507 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 10:03:47.339516 kernel: .... node #0, CPUs: #1 #2 #3
Nov 1 10:03:47.339527 kernel: smp: Brought up 1 node, 4 CPUs
Nov 1 10:03:47.339535 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 1 10:03:47.339544 kernel: Memory: 2441096K/2565800K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15356K init, 2688K bss, 118764K reserved, 0K cma-reserved)
Nov 1 10:03:47.339553 kernel: devtmpfs: initialized
Nov 1 10:03:47.339561 kernel: x86/mm: Memory block size: 128MB
Nov 1 10:03:47.339570 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 1 10:03:47.339578 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 1 10:03:47.339589 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Nov 1 10:03:47.339598 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 1 10:03:47.339607 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Nov 1 10:03:47.339615 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 1 10:03:47.339624 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 10:03:47.339633 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 1 10:03:47.339641 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 10:03:47.339652 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 10:03:47.339660 kernel: audit: initializing netlink subsys (disabled)
Nov 1 10:03:47.339668 kernel: audit: type=2000 audit(1761991424.760:1): state=initialized audit_enabled=0 res=1
Nov 1 10:03:47.339677 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 10:03:47.339685 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 10:03:47.339693 kernel: cpuidle: using governor menu
Nov 1 10:03:47.339702 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 10:03:47.339713 kernel: dca service started, version 1.12.1
Nov 1 10:03:47.339730 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Nov 1 10:03:47.339846 kernel: PCI: Using configuration type 1 for base access
Nov 1 10:03:47.339855 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 10:03:47.339864 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 10:03:47.339873 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 10:03:47.339881 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 10:03:47.339893 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 10:03:47.339913 kernel: ACPI: Added _OSI(Module Device)
Nov 1 10:03:47.339923 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 10:03:47.339931 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 10:03:47.339949 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 10:03:47.339978 kernel: ACPI: Interpreter enabled
Nov 1 10:03:47.339989 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 1 10:03:47.340000 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 10:03:47.340009 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 10:03:47.340017 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 10:03:47.340026 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 10:03:47.340035 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 10:03:47.340374 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 10:03:47.340603 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 1 10:03:47.340863 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 1 10:03:47.340888 kernel: PCI host bridge to bus 0000:00
Nov 1 10:03:47.341188 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 10:03:47.341566 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 10:03:47.341939 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 10:03:47.342257 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Nov 1 10:03:47.342554 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Nov 1 10:03:47.342777 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Nov 1 10:03:47.343177 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 10:03:47.343449 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 1 10:03:47.343653 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 1 10:03:47.343837 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Nov 1 10:03:47.344132 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Nov 1 10:03:47.344327 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Nov 1 10:03:47.344567 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 10:03:47.344816 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 1 10:03:47.345179 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Nov 1 10:03:47.345443 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Nov 1 10:03:47.345703 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Nov 1 10:03:47.346110 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 1 10:03:47.346341 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Nov 1 10:03:47.346533 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Nov 1 10:03:47.346855 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Nov 1 10:03:47.347136 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 1 10:03:47.347401 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Nov 1 10:03:47.347657 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Nov 1 10:03:47.347935 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Nov 1 10:03:47.348130 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Nov 1 10:03:47.348310 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 1 10:03:47.348490 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 10:03:47.348688 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 1 10:03:47.348892 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Nov 1 10:03:47.349189 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Nov 1 10:03:47.349471 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 1 10:03:47.350382 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Nov 1 10:03:47.350404 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 10:03:47.350426 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 10:03:47.350441 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 10:03:47.350460 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 10:03:47.350491 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 10:03:47.350515 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 10:03:47.350538 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 10:03:47.350554 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 10:03:47.350573 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 10:03:47.350586 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 10:03:47.350600 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 10:03:47.350625 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 10:03:47.350638 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 10:03:47.350649 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 10:03:47.350669 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 10:03:47.350694 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 10:03:47.350710 kernel: iommu: Default domain type: Translated
Nov 1 10:03:47.350723 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 10:03:47.350734 kernel: efivars: Registered efivars operations
Nov 1 10:03:47.350753 kernel: PCI: Using ACPI for IRQ routing
Nov 1 10:03:47.350765 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 10:03:47.350777 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 1 10:03:47.350792 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Nov 1 10:03:47.350813 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Nov 1 10:03:47.350847 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Nov 1 10:03:47.350859 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Nov 1 10:03:47.350874 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Nov 1 10:03:47.350886 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Nov 1 10:03:47.350897 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Nov 1 10:03:47.351120 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 10:03:47.351342 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 10:03:47.351813 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 10:03:47.351847 kernel: vgaarb: loaded
Nov 1 10:03:47.351867 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 10:03:47.351897 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 10:03:47.351921 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 10:03:47.351940 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 10:03:47.351982 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 10:03:47.352127 kernel: pnp: PnP ACPI init
Nov 1 10:03:47.352444 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Nov 1 10:03:47.352488 kernel: pnp: PnP ACPI: found 6 devices
Nov 1 10:03:47.352498 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 10:03:47.352511 kernel: NET: Registered PF_INET protocol family
Nov 1 10:03:47.352531 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 10:03:47.352541 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 1 10:03:47.352557 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 10:03:47.352585 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 10:03:47.352601 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 1 10:03:47.352610 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 1 10:03:47.352619 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 10:03:47.352628 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 10:03:47.352637 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 10:03:47.352646 kernel: NET: Registered PF_XDP protocol family
Nov 1 10:03:47.352946 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Nov 1 10:03:47.353207 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Nov 1 10:03:47.353391 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 10:03:47.353680 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 10:03:47.354024 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 10:03:47.354241 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Nov 1 10:03:47.354405 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Nov 1 10:03:47.354643 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Nov 1 10:03:47.354659 kernel: PCI: CLS 0 bytes, default 64
Nov 1 10:03:47.354672 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 1 10:03:47.354693 kernel: Initialise system trusted keyrings
Nov 1 10:03:47.354708 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 1 10:03:47.354720 kernel: Key type asymmetric registered
Nov 1 10:03:47.354732 kernel: Asymmetric key parser 'x509' registered
Nov 1 10:03:47.354743 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 1 10:03:47.354752 kernel: io scheduler mq-deadline registered
Nov 1 10:03:47.354761 kernel: io scheduler kyber registered
Nov 1 10:03:47.354795 kernel: io scheduler bfq registered
Nov 1 10:03:47.354838 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 10:03:47.354870 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 1 10:03:47.354889 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 1 10:03:47.354914 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 1 10:03:47.354928 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 10:03:47.354941 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 10:03:47.354966 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 10:03:47.354986 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 10:03:47.355004 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 10:03:47.355254 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 1 10:03:47.355269 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 10:03:47.355489 kernel: rtc_cmos 00:04: registered as rtc0
Nov 1 10:03:47.355711 kernel: rtc_cmos 00:04: setting system clock to 2025-11-01T10:03:45 UTC (1761991425)
Nov 1 10:03:47.355998 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 1 10:03:47.356018 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 1 10:03:47.356027 kernel: efifb: probing for efifb
Nov 1 10:03:47.356039 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Nov 1 10:03:47.356048 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Nov 1 10:03:47.356057 kernel: efifb: scrolling: redraw
Nov 1 10:03:47.356066 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 1 10:03:47.356077 kernel: Console: switching to colour frame buffer device 160x50
Nov 1 10:03:47.356086 kernel: fb0: EFI VGA frame buffer device
Nov 1 10:03:47.356097 kernel: pstore: Using crash dump compression: deflate
Nov 1 10:03:47.356106 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 1 10:03:47.356115 kernel: NET: Registered PF_INET6 protocol family
Nov 1 10:03:47.356124 kernel: Segment Routing with IPv6
Nov 1 10:03:47.356133 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 10:03:47.356143 kernel: NET: Registered PF_PACKET protocol family
Nov 1 10:03:47.356152 kernel: Key type dns_resolver registered
Nov 1 10:03:47.356161 kernel: IPI shorthand broadcast: enabled
Nov 1 10:03:47.356172 kernel: sched_clock: Marking stable (2369003490, 287671259)->(2715859736, -59184987)
Nov 1 10:03:47.356184 kernel: registered taskstats version 1
Nov 1 10:03:47.356196 kernel: Loading compiled-in X.509 certificates
Nov 1 10:03:47.356209 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: d8ad6d63e9d0f6e32055e659cacaf9092255a45e'
Nov 1 10:03:47.356224 kernel: Demotion targets for Node 0: null
Nov 1 10:03:47.356235 kernel: Key type .fscrypt registered
Nov 1 10:03:47.356247 kernel: Key type fscrypt-provisioning registered
Nov 1 10:03:47.356258 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 10:03:47.356267 kernel: ima: Allocated hash algorithm: sha1
Nov 1 10:03:47.356276 kernel: ima: No architecture policies found
Nov 1 10:03:47.356285 kernel: clk: Disabling unused clocks
Nov 1 10:03:47.356296 kernel: Freeing unused kernel image (initmem) memory: 15356K
Nov 1 10:03:47.356305 kernel: Write protecting the kernel read-only data: 45056k
Nov 1 10:03:47.356314 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Nov 1 10:03:47.356323 kernel: Run /init as init process
Nov 1 10:03:47.356331 kernel: with arguments:
Nov 1 10:03:47.356340 kernel: /init
Nov 1 10:03:47.356349 kernel: with environment:
Nov 1 10:03:47.356357 kernel: HOME=/
Nov 1 10:03:47.356368 kernel: TERM=linux
Nov 1 10:03:47.356376 kernel: SCSI subsystem initialized
Nov 1 10:03:47.356385 kernel: libata version 3.00 loaded.
Nov 1 10:03:47.356581 kernel: ahci 0000:00:1f.2: version 3.0
Nov 1 10:03:47.356595 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 1 10:03:47.356899 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 1 10:03:47.357193 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 1 10:03:47.357431 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 1 10:03:47.357748 kernel: scsi host0: ahci
Nov 1 10:03:47.358006 kernel: scsi host1: ahci
Nov 1 10:03:47.358243 kernel: scsi host2: ahci
Nov 1 10:03:47.358516 kernel: scsi host3: ahci
Nov 1 10:03:47.358789 kernel: scsi host4: ahci
Nov 1 10:03:47.359271 kernel: scsi host5: ahci
Nov 1 10:03:47.359386 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Nov 1 10:03:47.359396 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Nov 1 10:03:47.359405 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Nov 1 10:03:47.359414 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Nov 1 10:03:47.359427 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Nov 1 10:03:47.359436 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Nov 1 10:03:47.359445 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 1 10:03:47.359454 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 1 10:03:47.359463 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 1 10:03:47.359472 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 1 10:03:47.359481 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 1 10:03:47.359492 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 1 10:03:47.359503 kernel: ata3.00: LPM support broken, forcing max_power
Nov 1 10:03:47.359532 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 1 10:03:47.359545 kernel: ata3.00: applying bridge limits
Nov 1 10:03:47.359556 kernel: ata3.00: LPM support broken, forcing max_power
Nov 1 10:03:47.359571 kernel: ata3.00: configured for UDMA/100
Nov 1 10:03:47.359914 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 1 10:03:47.360263 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 1 10:03:47.360484 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 1 10:03:47.360500 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 10:03:47.360509 kernel: GPT:16515071 != 27000831
Nov 1 10:03:47.360517 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 10:03:47.360528 kernel: GPT:16515071 != 27000831
Nov 1 10:03:47.360541 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 10:03:47.360550 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 10:03:47.360807 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 1 10:03:47.360831 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 1 10:03:47.361087 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 1 10:03:47.361102 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 10:03:47.361111 kernel: device-mapper: uevent: version 1.0.3
Nov 1 10:03:47.361124 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 1 10:03:47.361134 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 1 10:03:47.361143 kernel: raid6: avx2x4 gen() 29458 MB/s
Nov 1 10:03:47.361168 kernel: raid6: avx2x2 gen() 28790 MB/s
Nov 1 10:03:47.361182 kernel: raid6: avx2x1 gen() 25160 MB/s
Nov 1 10:03:47.361194 kernel: raid6: using algorithm avx2x4 gen() 29458 MB/s
Nov 1 10:03:47.361216 kernel: raid6: .... xor() 7407 MB/s, rmw enabled
Nov 1 10:03:47.361231 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 10:03:47.361240 kernel: xor: automatically using best checksumming function avx
Nov 1 10:03:47.361249 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 1 10:03:47.361258 kernel: BTRFS: device fsid 8763e8a0-bf7f-4ffe-acc8-da149b03dd0b devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (181)
Nov 1 10:03:47.361267 kernel: BTRFS info (device dm-0): first mount of filesystem 8763e8a0-bf7f-4ffe-acc8-da149b03dd0b
Nov 1 10:03:47.361276 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 1 10:03:47.361298 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 1 10:03:47.361319 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 1 10:03:47.361328 kernel: loop: module loaded
Nov 1 10:03:47.361337 kernel: loop0: detected capacity change from 0 to 100136
Nov 1 10:03:47.361346 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 1 10:03:47.361356 systemd[1]: Successfully made /usr/ read-only.
Nov 1 10:03:47.361372 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 1 10:03:47.361386 systemd[1]: Detected virtualization kvm.
Nov 1 10:03:47.361395 systemd[1]: Detected architecture x86-64.
Nov 1 10:03:47.361404 systemd[1]: Running in initrd.
Nov 1 10:03:47.361419 systemd[1]: No hostname configured, using default hostname.
Nov 1 10:03:47.361429 systemd[1]: Hostname set to .
Nov 1 10:03:47.361438 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 1 10:03:47.361452 systemd[1]: Queued start job for default target initrd.target.
Nov 1 10:03:47.361462 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 10:03:47.361475 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 10:03:47.361501 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 10:03:47.361527 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 1 10:03:47.361542 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 10:03:47.361555 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 1 10:03:47.361570 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 1 10:03:47.361590 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 10:03:47.361605 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 10:03:47.361620 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 1 10:03:47.361629 systemd[1]: Reached target paths.target - Path Units.
Nov 1 10:03:47.361646 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 10:03:47.361661 systemd[1]: Reached target swap.target - Swaps.
Nov 1 10:03:47.361670 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 10:03:47.361680 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 10:03:47.361689 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 10:03:47.361703 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 10:03:47.361722 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 1 10:03:47.361740 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 10:03:47.361751 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 10:03:47.361760 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 10:03:47.361769 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 10:03:47.361779 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 1 10:03:47.361789 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 1 10:03:47.361802 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 10:03:47.361868 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 1 10:03:47.361879 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 1 10:03:47.361907 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 10:03:47.361921 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 10:03:47.361932 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 10:03:47.361942 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 10:03:47.361973 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 1 10:03:47.361983 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 10:03:47.361992 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 10:03:47.362002 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 10:03:47.362050 systemd-journald[316]: Collecting audit messages is disabled.
Nov 1 10:03:47.362229 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 10:03:47.362252 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 10:03:47.362281 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 10:03:47.362292 kernel: Bridge firewalling registered
Nov 1 10:03:47.362729 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 10:03:47.362755 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 10:03:47.362784 systemd-journald[316]: Journal started
Nov 1 10:03:47.362832 systemd-journald[316]: Runtime Journal (/run/log/journal/1b2ac240d48244d1a3e4d87d40e54efe) is 6M, max 48.1M, 42M free.
Nov 1 10:03:47.367158 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 10:03:47.350194 systemd-modules-load[318]: Inserted module 'br_netfilter'
Nov 1 10:03:47.371992 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 10:03:47.377041 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 10:03:47.384406 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 10:03:47.400386 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 10:03:47.403188 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 10:03:47.406004 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 1 10:03:47.419829 systemd-tmpfiles[341]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 1 10:03:47.424576 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 10:03:47.427978 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 10:03:47.432149 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 10:03:47.442494 dracut-cmdline[352]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=91cbcb3658f876d239d31cc29b206c4e950f20e536a8e14bd58a23c6f0ecf128
Nov 1 10:03:47.499109 systemd-resolved[358]: Positive Trust Anchors:
Nov 1 10:03:47.499125 systemd-resolved[358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 10:03:47.499129 systemd-resolved[358]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 1 10:03:47.499160 systemd-resolved[358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 10:03:47.535758 systemd-resolved[358]: Defaulting to hostname 'linux'.
Nov 1 10:03:47.537331 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 10:03:47.540856 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 10:03:47.582002 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 10:03:47.599032 kernel: iscsi: registered transport (tcp)
Nov 1 10:03:47.632102 kernel: iscsi: registered transport (qla4xxx)
Nov 1 10:03:47.632238 kernel: QLogic iSCSI HBA Driver
Nov 1 10:03:47.662845 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 1 10:03:47.682540 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 10:03:47.684451 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 1 10:03:47.760740 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 1 10:03:47.764684 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 1 10:03:47.766460 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 1 10:03:47.806487 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 10:03:47.835939 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 10:03:47.876689 systemd-udevd[625]: Using default interface naming scheme 'v257'.
Nov 1 10:03:47.890892 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 10:03:47.907555 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 1 10:03:47.920092 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 10:03:47.924366 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 10:03:47.938353 dracut-pre-trigger[698]: rd.md=0: removing MD RAID activation
Nov 1 10:03:47.971495 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 10:03:47.974350 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 10:03:47.988117 systemd-networkd[703]: lo: Link UP
Nov 1 10:03:47.988127 systemd-networkd[703]: lo: Gained carrier
Nov 1 10:03:47.988774 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 10:03:47.989694 systemd[1]: Reached target network.target - Network.
Nov 1 10:03:48.077573 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 10:03:48.084544 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 1 10:03:48.156164 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 1 10:03:48.165582 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 1 10:03:48.176564 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 1 10:03:48.179835 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 1 10:03:48.240084 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 1 10:03:48.282247 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 10:03:48.289352 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 10:03:48.290602 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 10:03:48.295346 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 10:03:48.301887 kernel: AES CTR mode by8 optimization enabled
Nov 1 10:03:48.301755 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 10:03:48.405214 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 10:03:48.405353 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 10:03:48.412535 disk-uuid[778]: Primary Header is updated.
Nov 1 10:03:48.412535 disk-uuid[778]: Secondary Entries is updated.
Nov 1 10:03:48.412535 disk-uuid[778]: Secondary Header is updated.
Nov 1 10:03:48.406450 systemd-networkd[703]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 1 10:03:48.406458 systemd-networkd[703]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 10:03:48.406923 systemd-networkd[703]: eth0: Link UP
Nov 1 10:03:48.409455 systemd-networkd[703]: eth0: Gained carrier
Nov 1 10:03:48.409472 systemd-networkd[703]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 1 10:03:48.452233 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 1 10:03:48.419081 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 10:03:48.446883 systemd-networkd[703]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 1 10:03:48.480538 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 1 10:03:48.483322 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 10:03:48.494425 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 10:03:48.495588 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 10:03:48.498006 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 1 10:03:48.500331 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 10:03:48.527895 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 10:03:49.477235 disk-uuid[838]: Warning: The kernel is still using the old partition table.
Nov 1 10:03:49.477235 disk-uuid[838]: The new table will be used at the next reboot or after you
Nov 1 10:03:49.477235 disk-uuid[838]: run partprobe(8) or kpartx(8)
Nov 1 10:03:49.477235 disk-uuid[838]: The operation has completed successfully.
Nov 1 10:03:49.490047 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 10:03:49.491184 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 1 10:03:49.495681 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 1 10:03:49.527994 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (869)
Nov 1 10:03:49.531745 kernel: BTRFS info (device vda6): first mount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517
Nov 1 10:03:49.531780 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 10:03:49.535780 kernel: BTRFS info (device vda6): turning on async discard
Nov 1 10:03:49.535806 kernel: BTRFS info (device vda6): enabling free space tree
Nov 1 10:03:49.543976 kernel: BTRFS info (device vda6): last unmount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517
Nov 1 10:03:49.544467 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 1 10:03:49.546507 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 1 10:03:49.830164 ignition[888]: Ignition 2.22.0
Nov 1 10:03:49.830180 ignition[888]: Stage: fetch-offline
Nov 1 10:03:49.830242 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Nov 1 10:03:49.830258 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 10:03:49.830392 ignition[888]: parsed url from cmdline: ""
Nov 1 10:03:49.830396 ignition[888]: no config URL provided
Nov 1 10:03:49.830402 ignition[888]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 10:03:49.830414 ignition[888]: no config at "/usr/lib/ignition/user.ign"
Nov 1 10:03:49.830464 ignition[888]: op(1): [started] loading QEMU firmware config module
Nov 1 10:03:49.830468 ignition[888]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 1 10:03:49.843527 ignition[888]: op(1): [finished] loading QEMU firmware config module
Nov 1 10:03:49.924854 ignition[888]: parsing config with SHA512: 99e8ab0a7887f173755acb9225b69180c694fde237c8b5ff6478b495a6965710eb0b7b25a14198c6d7114b7d54c4bdbcf63ea8409cd735d4499a44d184e2ae05
Nov 1 10:03:49.929728 unknown[888]: fetched base config from "system"
Nov 1 10:03:49.929773 unknown[888]: fetched user config from "qemu"
Nov 1 10:03:49.930240 ignition[888]: fetch-offline: fetch-offline passed
Nov 1 10:03:49.930369 ignition[888]: Ignition finished successfully
Nov 1 10:03:49.937868 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 10:03:49.940152 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 1 10:03:49.941372 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 10:03:50.011508 ignition[898]: Ignition 2.22.0
Nov 1 10:03:50.011521 ignition[898]: Stage: kargs
Nov 1 10:03:50.011674 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Nov 1 10:03:50.011684 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 10:03:50.012406 ignition[898]: kargs: kargs passed
Nov 1 10:03:50.017814 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 10:03:50.012448 ignition[898]: Ignition finished successfully
Nov 1 10:03:50.020696 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 10:03:50.069318 ignition[906]: Ignition 2.22.0
Nov 1 10:03:50.069333 ignition[906]: Stage: disks
Nov 1 10:03:50.069517 ignition[906]: no configs at "/usr/lib/ignition/base.d"
Nov 1 10:03:50.069529 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 10:03:50.073658 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 1 10:03:50.070649 ignition[906]: disks: disks passed
Nov 1 10:03:50.076592 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 10:03:50.070696 ignition[906]: Ignition finished successfully
Nov 1 10:03:50.079666 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 10:03:50.081706 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 10:03:50.083380 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 10:03:50.086008 systemd[1]: Reached target basic.target - Basic System.
Nov 1 10:03:50.087802 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 10:03:50.111165 systemd-networkd[703]: eth0: Gained IPv6LL
Nov 1 10:03:50.147768 systemd-fsck[916]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 1 10:03:50.158143 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 10:03:50.160622 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 10:03:50.278992 kernel: EXT4-fs (vda9): mounted filesystem 9a0b584a-8c68-48a6-a0f9-92613ad0f15d r/w with ordered data mode. Quota mode: none.
Nov 1 10:03:50.279516 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 10:03:50.281024 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 10:03:50.285684 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 10:03:50.287141 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 10:03:50.289180 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 1 10:03:50.289215 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 10:03:50.289237 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 10:03:50.312503 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 10:03:50.316745 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 10:03:50.322505 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (924)
Nov 1 10:03:50.325698 kernel: BTRFS info (device vda6): first mount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517
Nov 1 10:03:50.325800 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 10:03:50.329536 kernel: BTRFS info (device vda6): turning on async discard
Nov 1 10:03:50.329576 kernel: BTRFS info (device vda6): enabling free space tree
Nov 1 10:03:50.332269 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 10:03:50.451546 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 10:03:50.457145 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory
Nov 1 10:03:50.463090 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 10:03:50.496994 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 10:03:50.595159 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 10:03:50.598536 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 10:03:50.601087 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 10:03:50.622533 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 10:03:50.625138 kernel: BTRFS info (device vda6): last unmount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517
Nov 1 10:03:50.641122 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 1 10:03:50.740702 ignition[1038]: INFO : Ignition 2.22.0
Nov 1 10:03:50.740702 ignition[1038]: INFO : Stage: mount
Nov 1 10:03:50.743523 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 10:03:50.743523 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 10:03:50.743523 ignition[1038]: INFO : mount: mount passed
Nov 1 10:03:50.743523 ignition[1038]: INFO : Ignition finished successfully
Nov 1 10:03:50.752031 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 1 10:03:50.756279 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 1 10:03:50.792909 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 10:03:50.829704 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1050)
Nov 1 10:03:50.829774 kernel: BTRFS info (device vda6): first mount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517
Nov 1 10:03:50.829787 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 10:03:50.835884 kernel: BTRFS info (device vda6): turning on async discard
Nov 1 10:03:50.835931 kernel: BTRFS info (device vda6): enabling free space tree
Nov 1 10:03:50.837938 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 10:03:50.891220 ignition[1067]: INFO : Ignition 2.22.0
Nov 1 10:03:50.891220 ignition[1067]: INFO : Stage: files
Nov 1 10:03:50.894152 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 10:03:50.894152 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 10:03:50.894152 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 10:03:50.894152 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 10:03:50.894152 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 10:03:50.904841 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 10:03:50.904841 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 10:03:50.904841 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 10:03:50.904841 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 10:03:50.904841 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 1 10:03:50.898767 unknown[1067]: wrote ssh authorized keys file for user: core
Nov 1 10:03:50.949246 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 10:03:51.039113 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 10:03:51.039113 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 10:03:51.046089 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 10:03:51.046089 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 10:03:51.046089 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 10:03:51.046089 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 10:03:51.046089 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 10:03:51.046089 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 10:03:51.046089 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 10:03:51.065834 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 10:03:51.065834 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 10:03:51.065834 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 10:03:51.065834 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 10:03:51.065834 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 10:03:51.065834 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 1 10:03:51.336741 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 1 10:03:51.792539 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 10:03:51.792539 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 1 10:03:51.799264 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 10:03:51.807572 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 10:03:51.807572 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 1 10:03:51.807572 ignition[1067]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 1 10:03:51.815382 ignition[1067]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 1 10:03:51.815382 ignition[1067]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 1 10:03:51.815382 ignition[1067]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 1 10:03:51.815382 ignition[1067]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 1 10:03:51.844083 ignition[1067]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 1 10:03:51.852944 ignition[1067]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 1 10:03:51.855584 ignition[1067]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 1 10:03:51.855584 ignition[1067]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 10:03:51.855584 ignition[1067]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 10:03:51.855584 ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 10:03:51.855584 ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 10:03:51.855584 ignition[1067]: INFO : files: files passed
Nov 1 10:03:51.855584 ignition[1067]: INFO : Ignition finished successfully
Nov 1 10:03:51.867313 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 1 10:03:51.871118 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 1 10:03:51.875670 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 1 10:03:51.890593 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 10:03:51.890749 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
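The "files" stage above writes a Helm tarball and several manifests, symlinks a Kubernetes sysext image into /etc/extensions, enables prepare-helm.service, and disables coreos-metadata.service. A Butane config along the following lines could produce such a stage. This is a hypothetical reconstruction from the log, not the actual provisioning config: only the paths, URLs, and unit names appear in the log, while the ssh key and unit body below are placeholders.

```yaml
# Hypothetical Butane config sketch reconstructed from the Ignition log above.
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... placeholder-key   # placeholder; actual key unknown
storage:
  files:
    - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
      contents:
        source: https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
      hard: false
systemd:
  units:
    - name: prepare-helm.service
      enabled: true
      contents: |
        # placeholder unit body; the real unit contents are not in the log
        [Unit]
        Description=Unpack helm to /opt/bin
    - name: coreos-metadata.service
      enabled: false
```

The other files written in the log (install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml, update.conf) would be additional `storage.files` entries of the same shape; their contents are likewise not recoverable from the log.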
Nov 1 10:03:51.899302 initrd-setup-root-after-ignition[1099]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 1 10:03:51.903996 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 10:03:51.903996 initrd-setup-root-after-ignition[1101]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 10:03:51.909077 initrd-setup-root-after-ignition[1105]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 10:03:51.913269 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 10:03:51.914110 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 1 10:03:51.920707 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 1 10:03:51.992157 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 10:03:51.992303 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 1 10:03:51.993649 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 1 10:03:51.998434 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 1 10:03:52.003405 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 1 10:03:52.006314 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 1 10:03:52.055618 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 10:03:52.058164 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 1 10:03:52.088183 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 10:03:52.088406 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 1 10:03:52.089927 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 10:03:52.095823 systemd[1]: Stopped target timers.target - Timer Units.
Nov 1 10:03:52.096641 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 10:03:52.096779 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 10:03:52.101779 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 1 10:03:52.106028 systemd[1]: Stopped target basic.target - Basic System.
Nov 1 10:03:52.109103 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 1 10:03:52.111849 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 10:03:52.115449 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 1 10:03:52.118677 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 1 10:03:52.122395 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 1 10:03:52.125473 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 10:03:52.126251 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 1 10:03:52.134808 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 1 10:03:52.135602 systemd[1]: Stopped target swap.target - Swaps.
Nov 1 10:03:52.138694 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 10:03:52.138895 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 10:03:52.143804 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 1 10:03:52.144665 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 10:03:52.149573 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 1 10:03:52.149771 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 10:03:52.153052 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 10:03:52.153217 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 1 10:03:52.159631 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 10:03:52.159804 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 10:03:52.162991 systemd[1]: Stopped target paths.target - Path Units.
Nov 1 10:03:52.163707 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 10:03:52.167054 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 10:03:52.167995 systemd[1]: Stopped target slices.target - Slice Units.
Nov 1 10:03:52.171681 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 1 10:03:52.174435 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 10:03:52.174569 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 10:03:52.177623 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 10:03:52.177759 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 10:03:52.181487 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 10:03:52.181658 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 10:03:52.184633 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 10:03:52.184796 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 1 10:03:52.191441 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 1 10:03:52.193245 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 1 10:03:52.206176 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 1 10:03:52.206488 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 10:03:52.207415 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 10:03:52.207518 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 10:03:52.210664 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 10:03:52.210784 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 10:03:52.221493 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 10:03:52.221620 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 1 10:03:52.249318 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 10:03:52.291294 ignition[1125]: INFO : Ignition 2.22.0
Nov 1 10:03:52.291294 ignition[1125]: INFO : Stage: umount
Nov 1 10:03:52.294179 ignition[1125]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 10:03:52.294179 ignition[1125]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 10:03:52.294179 ignition[1125]: INFO : umount: umount passed
Nov 1 10:03:52.294179 ignition[1125]: INFO : Ignition finished successfully
Nov 1 10:03:52.299633 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 10:03:52.299912 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 1 10:03:52.301646 systemd[1]: Stopped target network.target - Network.
Nov 1 10:03:52.304406 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 10:03:52.304494 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 1 10:03:52.304990 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 10:03:52.305043 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 1 10:03:52.311005 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 10:03:52.311089 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 1 10:03:52.314780 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 1 10:03:52.314837 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 1 10:03:52.315842 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 1 10:03:52.325045 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 1 10:03:52.336085 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 10:03:52.336295 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 1 10:03:52.343391 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 10:03:52.343541 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 1 10:03:52.349306 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 10:03:52.349420 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 1 10:03:52.351324 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 1 10:03:52.357104 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 10:03:52.357164 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 10:03:52.360368 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 10:03:52.360434 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 1 10:03:52.362373 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 1 10:03:52.368005 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 10:03:52.368069 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 10:03:52.368922 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 10:03:52.368982 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 1 10:03:52.369450 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 10:03:52.369494 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 1 10:03:52.377017 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 10:03:52.401880 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 10:03:52.402119 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 10:03:52.404521 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 10:03:52.404614 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 1 10:03:52.408572 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 10:03:52.408642 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 10:03:52.412011 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 10:03:52.412130 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 10:03:52.413364 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 10:03:52.413464 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 1 10:03:52.414787 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 10:03:52.414847 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 10:03:52.438861 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 1 10:03:52.439564 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 1 10:03:52.439632 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 10:03:52.440515 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 10:03:52.440615 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 10:03:52.441330 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 1 10:03:52.441376 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 10:03:52.441876 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 10:03:52.441921 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 10:03:52.454008 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 10:03:52.454067 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 10:03:52.460600 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 10:03:52.460723 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 1 10:03:52.487584 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 10:03:52.487753 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 1 10:03:52.489000 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 1 10:03:52.496309 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 1 10:03:52.528547 systemd[1]: Switching root.
Nov 1 10:03:52.567004 systemd-journald[316]: Journal stopped
Nov 1 10:03:54.339220 systemd-journald[316]: Received SIGTERM from PID 1 (systemd).
Nov 1 10:03:54.339330 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 10:03:54.339359 kernel: SELinux: policy capability open_perms=1
Nov 1 10:03:54.339376 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 10:03:54.339394 kernel: SELinux: policy capability always_check_network=0
Nov 1 10:03:54.339428 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 10:03:54.339445 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 10:03:54.339462 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 10:03:54.339478 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 10:03:54.339500 kernel: SELinux: policy capability userspace_initial_context=0
Nov 1 10:03:54.339525 kernel: audit: type=1403 audit(1761991433.299:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 10:03:54.339543 systemd[1]: Successfully loaded SELinux policy in 79.456ms.
Nov 1 10:03:54.339592 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.955ms.
Nov 1 10:03:54.339612 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 1 10:03:54.339648 systemd[1]: Detected virtualization kvm.
Nov 1 10:03:54.339665 systemd[1]: Detected architecture x86-64.
Nov 1 10:03:54.339683 systemd[1]: Detected first boot.
Nov 1 10:03:54.339700 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 1 10:03:54.339721 zram_generator::config[1170]: No configuration found.
Nov 1 10:03:54.339752 kernel: Guest personality initialized and is inactive
Nov 1 10:03:54.339770 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 1 10:03:54.339787 kernel: Initialized host personality
Nov 1 10:03:54.339804 kernel: NET: Registered PF_VSOCK protocol family
Nov 1 10:03:54.339825 systemd[1]: Populated /etc with preset unit settings.
Nov 1 10:03:54.339842 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 1 10:03:54.339859 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 1 10:03:54.339887 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 1 10:03:54.339909 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 1 10:03:54.339927 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 1 10:03:54.339945 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 1 10:03:54.339982 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 1 10:03:54.340000 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 1 10:03:54.340029 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 1 10:03:54.340047 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 1 10:03:54.340064 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 1 10:03:54.340085 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 10:03:54.340103 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 10:03:54.340126 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 1 10:03:54.340145 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 1 10:03:54.340182 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 1 10:03:54.340202 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 10:03:54.340220 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 1 10:03:54.340243 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 10:03:54.340261 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 10:03:54.340280 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 1 10:03:54.340297 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 1 10:03:54.340326 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 1 10:03:54.340343 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 1 10:03:54.340361 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 10:03:54.340382 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 10:03:54.340402 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 10:03:54.340421 systemd[1]: Reached target swap.target - Swaps.
Nov 1 10:03:54.340439 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 1 10:03:54.340467 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 1 10:03:54.340485 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 1 10:03:54.340502 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 10:03:54.340519 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 10:03:54.340536 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 10:03:54.340554 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 1 10:03:54.340573 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 1 10:03:54.340602 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 1 10:03:54.340621 systemd[1]: Mounting media.mount - External Media Directory...
Nov 1 10:03:54.340647 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:03:54.340664 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 1 10:03:54.340681 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 1 10:03:54.340698 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 1 10:03:54.340716 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 10:03:54.340746 systemd[1]: Reached target machines.target - Containers.
Nov 1 10:03:54.340765 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 1 10:03:54.340783 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 10:03:54.340801 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 10:03:54.340819 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 1 10:03:54.340835 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 10:03:54.340852 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 10:03:54.340880 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 10:03:54.340898 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 1 10:03:54.340916 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 10:03:54.340935 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 10:03:54.340954 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 1 10:03:54.340990 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 1 10:03:54.341019 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 1 10:03:54.341036 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 1 10:03:54.341054 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 1 10:03:54.341072 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 10:03:54.341105 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 10:03:54.341124 kernel: fuse: init (API version 7.41)
Nov 1 10:03:54.341142 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 1 10:03:54.341159 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 1 10:03:54.341175 kernel: ACPI: bus type drm_connector registered
Nov 1 10:03:54.341191 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 1 10:03:54.341209 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 10:03:54.341239 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:03:54.341283 systemd-journald[1234]: Collecting audit messages is disabled.
Nov 1 10:03:54.341321 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 1 10:03:54.341339 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 1 10:03:54.341356 systemd-journald[1234]: Journal started
Nov 1 10:03:54.341395 systemd-journald[1234]: Runtime Journal (/run/log/journal/1b2ac240d48244d1a3e4d87d40e54efe) is 6M, max 48.1M, 42M free.
Nov 1 10:03:53.931757 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 10:03:53.954949 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 1 10:03:53.955502 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 1 10:03:54.347076 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 10:03:54.348485 systemd[1]: Mounted media.mount - External Media Directory.
Nov 1 10:03:54.350329 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 1 10:03:54.352294 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 1 10:03:54.354338 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 1 10:03:54.356573 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 10:03:54.359028 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 10:03:54.359323 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 1 10:03:54.361681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 10:03:54.361907 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 10:03:54.364428 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 10:03:54.364712 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 10:03:54.367230 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 10:03:54.367511 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 10:03:54.370307 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 10:03:54.370566 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 1 10:03:54.372860 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 10:03:54.373242 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 10:03:54.375725 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 10:03:54.378572 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 10:03:54.387846 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 1 10:03:54.390702 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 1 10:03:54.407883 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 1 10:03:54.411639 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 1 10:03:54.415230 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 1 10:03:54.422045 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 1 10:03:54.428869 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 10:03:54.429020 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 10:03:54.432775 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 1 10:03:54.436214 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 10:03:54.438779 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 1 10:03:54.462879 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 1 10:03:54.464763 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 10:03:54.465888 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 1 10:03:54.467688 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 10:03:54.478070 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 10:03:54.484708 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 1 10:03:54.488716 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 10:03:54.491615 systemd-journald[1234]: Time spent on flushing to /var/log/journal/1b2ac240d48244d1a3e4d87d40e54efe is 16.334ms for 1053 entries.
Nov 1 10:03:54.491615 systemd-journald[1234]: System Journal (/var/log/journal/1b2ac240d48244d1a3e4d87d40e54efe) is 8M, max 163.5M, 155.5M free.
Nov 1 10:03:55.014048 systemd-journald[1234]: Received client request to flush runtime journal.
Nov 1 10:03:55.014151 kernel: loop1: detected capacity change from 0 to 111544 Nov 1 10:03:55.014183 kernel: loop2: detected capacity change from 0 to 224512 Nov 1 10:03:55.014213 kernel: loop3: detected capacity change from 0 to 119080 Nov 1 10:03:55.014244 kernel: loop4: detected capacity change from 0 to 111544 Nov 1 10:03:55.014273 kernel: loop5: detected capacity change from 0 to 224512 Nov 1 10:03:55.014302 kernel: loop6: detected capacity change from 0 to 119080 Nov 1 10:03:55.014330 zram_generator::config[1330]: No configuration found. Nov 1 10:03:54.492742 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 10:03:54.500488 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 1 10:03:54.513814 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 1 10:03:54.518992 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 1 10:03:54.531781 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Nov 1 10:03:54.531794 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Nov 1 10:03:54.533134 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 10:03:54.537420 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 10:03:54.541017 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 1 10:03:54.737096 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 1 10:03:54.739813 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 1 10:03:54.743415 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 1 10:03:54.762348 (sd-merge)[1303]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 1 10:03:54.767665 (sd-merge)[1303]: Merged extensions into '/usr'. 
Nov 1 10:03:54.775016 systemd[1]: Reload requested from client PID 1288 ('systemd-sysext') (unit systemd-sysext.service)... Nov 1 10:03:54.775032 systemd[1]: Reloading... Nov 1 10:03:55.057732 systemd[1]: Reloading finished in 282 ms. Nov 1 10:03:55.090501 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 10:03:55.111705 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 1 10:03:55.115124 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 10:03:55.160915 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 1 10:03:55.165286 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 1 10:03:55.176366 systemd[1]: Starting ensure-sysext.service... Nov 1 10:03:55.181978 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 10:03:55.186730 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 10:03:55.194314 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 10:03:55.215149 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Nov 1 10:03:55.215174 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Nov 1 10:03:55.221208 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 10:03:55.230633 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 1 10:03:55.230682 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 1 10:03:55.231065 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 10:03:55.231763 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Nov 1 10:03:55.233031 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 10:03:55.233415 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Nov 1 10:03:55.233505 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Nov 1 10:03:55.238678 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 1 10:03:55.243035 systemd[1]: Reload requested from client PID 1372 ('systemctl') (unit ensure-sysext.service)... Nov 1 10:03:55.243136 systemd[1]: Reloading... Nov 1 10:03:55.249781 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 10:03:55.249800 systemd-tmpfiles[1375]: Skipping /boot Nov 1 10:03:55.272270 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 10:03:55.272284 systemd-tmpfiles[1375]: Skipping /boot Nov 1 10:03:55.307987 zram_generator::config[1406]: No configuration found. Nov 1 10:03:55.385890 systemd-resolved[1373]: Positive Trust Anchors: Nov 1 10:03:55.385920 systemd-resolved[1373]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 10:03:55.385926 systemd-resolved[1373]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 1 10:03:55.385990 systemd-resolved[1373]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 10:03:55.390668 systemd-resolved[1373]: Defaulting to hostname 'linux'. 
Nov 1 10:03:55.498497 systemd[1]: Reloading finished in 254 ms. Nov 1 10:03:55.523716 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 1 10:03:55.526055 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 10:03:55.528621 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 1 10:03:55.549304 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 10:03:55.557679 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 10:03:55.561387 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 1 10:03:55.564421 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 1 10:03:55.584176 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 1 10:03:55.588106 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 1 10:03:55.594260 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 10:03:55.602392 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 1 10:03:55.609306 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 10:03:55.613657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 10:03:55.618676 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 10:03:55.632387 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 10:03:55.634729 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 1 10:03:55.635077 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 1 10:03:55.637622 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 10:03:55.638651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 10:03:55.651048 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 10:03:55.654088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 10:03:55.656138 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 10:03:55.663774 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 10:03:55.664260 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 10:03:55.676333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 10:03:55.677505 systemd-udevd[1456]: Using default interface naming scheme 'v257'. Nov 1 10:03:55.678366 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 10:03:55.681992 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 10:03:55.684502 augenrules[1485]: No rules Nov 1 10:03:55.693056 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 10:03:55.695092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 10:03:55.695269 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 1 10:03:55.697076 systemd[1]: audit-rules.service: Deactivated successfully. 
Nov 1 10:03:55.697417 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 1 10:03:55.700225 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 1 10:03:55.703114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 10:03:55.703509 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 10:03:55.706564 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 10:03:55.706843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 10:03:55.709529 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 10:03:55.709744 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 10:03:55.719353 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 10:03:55.722142 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 10:03:55.735235 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 1 10:03:55.737160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 10:03:55.739468 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 10:03:55.743047 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 10:03:55.751061 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 10:03:55.754776 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 10:03:55.757174 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 1 10:03:55.757340 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 1 10:03:55.762290 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 10:03:55.764133 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 10:03:55.766250 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 10:03:55.766532 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 10:03:55.782501 systemd[1]: Finished ensure-sysext.service. Nov 1 10:03:55.803407 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 1 10:03:55.806531 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 10:03:55.807016 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 10:03:55.809923 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 10:03:55.810843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 10:03:55.813522 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 10:03:55.814214 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 10:03:55.816035 augenrules[1515]: /sbin/augenrules: No change Nov 1 10:03:55.826814 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 10:03:55.826937 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Nov 1 10:03:55.845102 augenrules[1545]: No rules Nov 1 10:03:55.841027 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 10:03:55.841329 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 1 10:03:55.843376 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 1 10:03:55.909313 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 1 10:03:55.907798 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 10:03:55.916774 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 1 10:03:55.925009 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 10:03:55.929990 kernel: ACPI: button: Power Button [PWRF] Nov 1 10:03:55.944423 systemd-networkd[1526]: lo: Link UP Nov 1 10:03:55.944434 systemd-networkd[1526]: lo: Gained carrier Nov 1 10:03:55.945616 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 10:03:55.946350 systemd[1]: Reached target network.target - Network. Nov 1 10:03:55.951707 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 1 10:03:55.960378 systemd-networkd[1526]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 1 10:03:55.960392 systemd-networkd[1526]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 10:03:55.960965 systemd-networkd[1526]: eth0: Link UP Nov 1 10:03:55.961238 systemd-networkd[1526]: eth0: Gained carrier Nov 1 10:03:55.961260 systemd-networkd[1526]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 1 10:03:55.962577 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Nov 1 10:03:55.975017 systemd-networkd[1526]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 10:03:55.976399 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 1 10:03:55.979141 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 1 10:03:57.627785 systemd-resolved[1373]: Clock change detected. Flushing caches. Nov 1 10:03:57.628119 systemd[1]: Reached target time-set.target - System Time Set. Nov 1 10:03:57.628366 systemd-timesyncd[1533]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 1 10:03:57.630318 systemd-timesyncd[1533]: Initial clock synchronization to Sat 2025-11-01 10:03:57.627735 UTC. Nov 1 10:03:57.649491 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 1 10:03:57.850554 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 10:03:57.850582 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 10:03:57.856306 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 1 10:03:57.856633 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 1 10:03:57.859195 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 1 10:03:57.866266 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 10:03:58.044554 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 1 10:03:58.052119 kernel: kvm_amd: TSC scaling supported Nov 1 10:03:58.052160 kernel: kvm_amd: Nested Virtualization enabled Nov 1 10:03:58.052174 kernel: kvm_amd: Nested Paging enabled Nov 1 10:03:58.053751 kernel: kvm_amd: LBR virtualization supported Nov 1 10:03:58.053774 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 1 10:03:58.054725 kernel: kvm_amd: Virtual GIF supported Nov 1 10:03:58.078132 kernel: EDAC MC: Ver: 3.0.0 Nov 1 10:03:58.084503 ldconfig[1453]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 10:03:58.127483 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 10:03:58.131318 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 1 10:03:58.169900 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 1 10:03:58.172316 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 10:03:58.174515 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 1 10:03:58.176579 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 1 10:03:58.178682 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 1 10:03:58.181453 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 1 10:03:58.183422 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 10:03:58.185625 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 1 10:03:58.187616 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 10:03:58.187649 systemd[1]: Reached target paths.target - Path Units. Nov 1 10:03:58.189063 systemd[1]: Reached target timers.target - Timer Units. 
Nov 1 10:03:58.191451 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 10:03:58.194588 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 10:03:58.198289 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 1 10:03:58.200484 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 1 10:03:58.202453 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 1 10:03:58.211279 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 1 10:03:58.213171 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 1 10:03:58.215656 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 10:03:58.217984 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 10:03:58.219495 systemd[1]: Reached target basic.target - Basic System. Nov 1 10:03:58.220981 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 10:03:58.221012 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 10:03:58.222157 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 10:03:58.224948 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 10:03:58.227337 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 1 10:03:58.232466 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 10:03:58.236081 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 1 10:03:58.237840 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Nov 1 10:03:58.239789 jq[1603]: false Nov 1 10:03:58.239072 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 1 10:03:58.241730 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 1 10:03:58.250415 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 1 10:03:58.327237 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 1 10:03:58.332395 oslogin_cache_refresh[1605]: Refreshing passwd entry cache Nov 1 10:03:58.333579 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Refreshing passwd entry cache Nov 1 10:03:58.332779 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 1 10:03:58.339298 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 1 10:03:58.341011 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 10:03:58.341179 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Failure getting users, quitting Nov 1 10:03:58.341171 oslogin_cache_refresh[1605]: Failure getting users, quitting Nov 1 10:03:58.341381 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 1 10:03:58.341381 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Refreshing group entry cache Nov 1 10:03:58.341197 oslogin_cache_refresh[1605]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 1 10:03:58.341257 oslogin_cache_refresh[1605]: Refreshing group entry cache Nov 1 10:03:58.341509 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 10:03:58.343336 extend-filesystems[1604]: Found /dev/vda6 Nov 1 10:03:58.344227 systemd[1]: Starting update-engine.service - Update Engine... 
Nov 1 10:03:58.347362 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 1 10:03:58.350019 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Failure getting groups, quitting Nov 1 10:03:58.350019 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 1 10:03:58.350008 oslogin_cache_refresh[1605]: Failure getting groups, quitting Nov 1 10:03:58.350022 oslogin_cache_refresh[1605]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 1 10:03:58.351770 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 1 10:03:58.353499 extend-filesystems[1604]: Found /dev/vda9 Nov 1 10:03:58.354238 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 10:03:58.354470 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 1 10:03:58.354813 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 1 10:03:58.355051 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 1 10:03:58.358070 extend-filesystems[1604]: Checking size of /dev/vda9 Nov 1 10:03:58.358635 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 10:03:58.358878 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 1 10:03:58.362591 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 10:03:58.363212 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 1 10:03:58.365276 jq[1622]: true Nov 1 10:03:58.376412 update_engine[1619]: I20251101 10:03:58.376350 1619 main.cc:92] Flatcar Update Engine starting Nov 1 10:03:58.387038 jq[1635]: true Nov 1 10:03:58.392819 tar[1631]: linux-amd64/LICENSE Nov 1 10:03:58.407477 tar[1631]: linux-amd64/helm Nov 1 10:03:58.427852 dbus-daemon[1601]: [system] SELinux support is enabled Nov 1 10:03:58.429808 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 1 10:03:58.431205 update_engine[1619]: I20251101 10:03:58.430773 1619 update_check_scheduler.cc:74] Next update check in 2m21s Nov 1 10:03:58.453639 systemd-logind[1618]: Watching system buttons on /dev/input/event2 (Power Button) Nov 1 10:03:58.453667 systemd-logind[1618]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 10:03:58.456743 systemd-logind[1618]: New seat seat0. Nov 1 10:03:58.460775 systemd[1]: Started systemd-logind.service - User Login Management. Nov 1 10:03:58.462756 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 10:03:58.462786 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 10:03:58.465439 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 10:03:58.465454 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 10:03:58.468894 dbus-daemon[1601]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 1 10:03:58.469265 systemd[1]: Started update-engine.service - Update Engine. Nov 1 10:03:58.473159 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Nov 1 10:03:58.484198 extend-filesystems[1604]: Resized partition /dev/vda9 Nov 1 10:03:58.566366 extend-filesystems[1669]: resize2fs 1.47.3 (8-Jul-2025) Nov 1 10:03:58.739503 locksmithd[1664]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 10:03:58.749140 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 1 10:03:58.850147 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 1 10:03:58.893288 extend-filesystems[1669]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 10:03:58.893288 extend-filesystems[1669]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 1 10:03:58.893288 extend-filesystems[1669]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 1 10:03:58.902358 extend-filesystems[1604]: Resized filesystem in /dev/vda9 Nov 1 10:03:58.896997 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 10:03:58.897359 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 1 10:03:58.910394 bash[1663]: Updated "/home/core/.ssh/authorized_keys" Nov 1 10:03:58.911522 sshd_keygen[1637]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 10:03:58.912962 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 1 10:03:58.917642 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 1 10:03:58.925314 systemd-networkd[1526]: eth0: Gained IPv6LL Nov 1 10:03:58.928548 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 10:03:58.933168 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 10:03:58.941353 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 1 10:03:58.948825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 1 10:03:58.965401 tar[1631]: linux-amd64/README.md Nov 1 10:03:58.965456 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 10:03:58.986504 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 1 10:03:58.994121 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 10:03:59.002012 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 10:03:59.015525 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 10:03:59.020046 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 1 10:03:59.020487 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 1 10:03:59.024299 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 10:03:59.027037 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 10:03:59.027391 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 10:03:59.033565 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 10:03:59.049561 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Nov 1 10:03:59.056594 containerd[1640]: time="2025-11-01T10:03:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 1 10:03:59.057358 containerd[1640]: time="2025-11-01T10:03:59.057304896Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4
Nov 1 10:03:59.067603 containerd[1640]: time="2025-11-01T10:03:59.067552694Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.462µs"
Nov 1 10:03:59.067767 containerd[1640]: time="2025-11-01T10:03:59.067749774Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 1 10:03:59.067883 containerd[1640]: time="2025-11-01T10:03:59.067862565Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 1 10:03:59.067947 containerd[1640]: time="2025-11-01T10:03:59.067930322Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 1 10:03:59.068223 containerd[1640]: time="2025-11-01T10:03:59.068197013Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 1 10:03:59.068286 containerd[1640]: time="2025-11-01T10:03:59.068273145Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 1 10:03:59.068408 containerd[1640]: time="2025-11-01T10:03:59.068390335Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 1 10:03:59.068490 containerd[1640]: time="2025-11-01T10:03:59.068469373Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 1 10:03:59.069399 containerd[1640]: time="2025-11-01T10:03:59.069335608Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 1 10:03:59.069399 containerd[1640]: time="2025-11-01T10:03:59.069380662Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 1 10:03:59.069399 containerd[1640]: time="2025-11-01T10:03:59.069401932Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 1 10:03:59.069399 containerd[1640]: time="2025-11-01T10:03:59.069412312Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 1 10:03:59.069702 containerd[1640]: time="2025-11-01T10:03:59.069669093Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 1 10:03:59.069702 containerd[1640]: time="2025-11-01T10:03:59.069697266Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 1 10:03:59.069858 containerd[1640]: time="2025-11-01T10:03:59.069829214Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 1 10:03:59.070079 containerd[1640]: time="2025-11-01T10:03:59.070051651Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 1 10:03:59.070140 containerd[1640]: time="2025-11-01T10:03:59.070120770Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 1 10:03:59.070140 containerd[1640]: time="2025-11-01T10:03:59.070135368Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 1 10:03:59.070179 containerd[1640]: time="2025-11-01T10:03:59.070171435Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 1 10:03:59.070455 containerd[1640]: time="2025-11-01T10:03:59.070418248Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 1 10:03:59.070546 containerd[1640]: time="2025-11-01T10:03:59.070527523Z" level=info msg="metadata content store policy set" policy=shared
Nov 1 10:03:59.091842 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 1 10:03:59.094567 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 1 10:03:59.096560 systemd[1]: Reached target getty.target - Login Prompts.
Nov 1 10:03:59.328300 containerd[1640]: time="2025-11-01T10:03:59.328218846Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 1 10:03:59.328300 containerd[1640]: time="2025-11-01T10:03:59.328329043Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 1 10:03:59.328551 containerd[1640]: time="2025-11-01T10:03:59.328510103Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 1 10:03:59.328551 containerd[1640]: time="2025-11-01T10:03:59.328529188Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 1 10:03:59.328609 containerd[1640]: time="2025-11-01T10:03:59.328553193Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 1 10:03:59.328609 containerd[1640]: time="2025-11-01T10:03:59.328579903Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 1 10:03:59.328609 containerd[1640]: time="2025-11-01T10:03:59.328595813Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 1 10:03:59.328747 containerd[1640]: time="2025-11-01T10:03:59.328626892Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 1 10:03:59.328747 containerd[1640]: time="2025-11-01T10:03:59.328649113Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 1 10:03:59.328747 containerd[1640]: time="2025-11-01T10:03:59.328666075Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 1 10:03:59.328747 containerd[1640]: time="2025-11-01T10:03:59.328689770Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 1 10:03:59.328747 containerd[1640]: time="2025-11-01T10:03:59.328704427Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 1 10:03:59.328747 containerd[1640]: time="2025-11-01T10:03:59.328716600Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 1 10:03:59.328747 containerd[1640]: time="2025-11-01T10:03:59.328742528Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 1 10:03:59.329001 containerd[1640]: time="2025-11-01T10:03:59.328955768Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 1 10:03:59.329001 containerd[1640]: time="2025-11-01T10:03:59.328990814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 1 10:03:59.329058 containerd[1640]: time="2025-11-01T10:03:59.329007686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 1 10:03:59.329058 containerd[1640]: time="2025-11-01T10:03:59.329027904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 1 10:03:59.329058 containerd[1640]: time="2025-11-01T10:03:59.329038704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 1 10:03:59.329058 containerd[1640]: time="2025-11-01T10:03:59.329049855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 1 10:03:59.329058 containerd[1640]: time="2025-11-01T10:03:59.329060855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 1 10:03:59.329235 containerd[1640]: time="2025-11-01T10:03:59.329074611Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 1 10:03:59.329235 containerd[1640]: time="2025-11-01T10:03:59.329087245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 1 10:03:59.329235 containerd[1640]: time="2025-11-01T10:03:59.329098767Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 1 10:03:59.329235 containerd[1640]: time="2025-11-01T10:03:59.329133802Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 1 10:03:59.329235 containerd[1640]: time="2025-11-01T10:03:59.329158448Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 1 10:03:59.329235 containerd[1640]: time="2025-11-01T10:03:59.329222218Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 1 10:03:59.329235 containerd[1640]: time="2025-11-01T10:03:59.329243077Z" level=info msg="Start snapshots syncer"
Nov 1 10:03:59.329416 containerd[1640]: time="2025-11-01T10:03:59.329267703Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 1 10:03:59.329614 containerd[1640]: time="2025-11-01T10:03:59.329544472Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 1 10:03:59.329614 containerd[1640]: time="2025-11-01T10:03:59.329614514Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 1 10:03:59.329843 containerd[1640]: time="2025-11-01T10:03:59.329735982Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 1 10:03:59.329875 containerd[1640]: time="2025-11-01T10:03:59.329841820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 1 10:03:59.329875 containerd[1640]: time="2025-11-01T10:03:59.329863330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 1 10:03:59.329875 containerd[1640]: time="2025-11-01T10:03:59.329874431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 1 10:03:59.329967 containerd[1640]: time="2025-11-01T10:03:59.329885101Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 1 10:03:59.329967 containerd[1640]: time="2025-11-01T10:03:59.329898015Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 1 10:03:59.329967 containerd[1640]: time="2025-11-01T10:03:59.329935856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 1 10:03:59.329967 containerd[1640]: time="2025-11-01T10:03:59.329947107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 1 10:03:59.329967 containerd[1640]: time="2025-11-01T10:03:59.329958819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 1 10:03:59.329967 containerd[1640]: time="2025-11-01T10:03:59.329969409Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 1 10:03:59.330143 containerd[1640]: time="2025-11-01T10:03:59.330002051Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 1 10:03:59.330143 containerd[1640]: time="2025-11-01T10:03:59.330015736Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 1 10:03:59.330143 containerd[1640]: time="2025-11-01T10:03:59.330024142Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 1 10:03:59.330143 containerd[1640]: time="2025-11-01T10:03:59.330032898Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 1 10:03:59.330143 containerd[1640]: time="2025-11-01T10:03:59.330041685Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 1 10:03:59.330143 containerd[1640]: time="2025-11-01T10:03:59.330051503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 1 10:03:59.330143 containerd[1640]: time="2025-11-01T10:03:59.330062334Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 1 10:03:59.330143 containerd[1640]: time="2025-11-01T10:03:59.330081019Z" level=info msg="runtime interface created"
Nov 1 10:03:59.330143 containerd[1640]: time="2025-11-01T10:03:59.330086218Z" level=info msg="created NRI interface"
Nov 1 10:03:59.330143 containerd[1640]: time="2025-11-01T10:03:59.330095496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 1 10:03:59.330143 containerd[1640]: time="2025-11-01T10:03:59.330124149Z" level=info msg="Connect containerd service"
Nov 1 10:03:59.330143 containerd[1640]: time="2025-11-01T10:03:59.330143526Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 1 10:03:59.331013 containerd[1640]: time="2025-11-01T10:03:59.330983772Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 10:03:59.529537 containerd[1640]: time="2025-11-01T10:03:59.529190109Z" level=info msg="Start subscribing containerd event"
Nov 1 10:03:59.529537 containerd[1640]: time="2025-11-01T10:03:59.529251254Z" level=info msg="Start recovering state"
Nov 1 10:03:59.529537 containerd[1640]: time="2025-11-01T10:03:59.529363354Z" level=info msg="Start event monitor"
Nov 1 10:03:59.529537 containerd[1640]: time="2025-11-01T10:03:59.529377611Z" level=info msg="Start cni network conf syncer for default"
Nov 1 10:03:59.529537 containerd[1640]: time="2025-11-01T10:03:59.529390054Z" level=info msg="Start streaming server"
Nov 1 10:03:59.529537 containerd[1640]: time="2025-11-01T10:03:59.529413518Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 1 10:03:59.529537 containerd[1640]: time="2025-11-01T10:03:59.529423086Z" level=info msg="runtime interface starting up..."
Nov 1 10:03:59.529537 containerd[1640]: time="2025-11-01T10:03:59.529429548Z" level=info msg="starting plugins..."
Nov 1 10:03:59.529537 containerd[1640]: time="2025-11-01T10:03:59.529448644Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 1 10:03:59.529823 containerd[1640]: time="2025-11-01T10:03:59.529603164Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 1 10:03:59.529823 containerd[1640]: time="2025-11-01T10:03:59.529697571Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 1 10:03:59.529927 containerd[1640]: time="2025-11-01T10:03:59.529889150Z" level=info msg="containerd successfully booted in 0.473899s"
Nov 1 10:03:59.530148 systemd[1]: Started containerd.service - containerd container runtime.
Nov 1 10:04:00.811963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 10:04:00.814402 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 1 10:04:00.816336 systemd[1]: Startup finished in 3.673s (kernel) + 6.382s (initrd) + 5.950s (userspace) = 16.006s.
Nov 1 10:04:00.826411 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 10:04:01.026699 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 1 10:04:01.028162 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:34356.service - OpenSSH per-connection server daemon (10.0.0.1:34356).
Nov 1 10:04:01.122423 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 34356 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:04:01.124351 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:04:01.132349 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 1 10:04:01.133885 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 1 10:04:01.141142 systemd-logind[1618]: New session 1 of user core.
Nov 1 10:04:01.181179 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 1 10:04:01.184503 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 1 10:04:01.209691 (systemd)[1758]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 1 10:04:01.212602 systemd-logind[1618]: New session c1 of user core.
Nov 1 10:04:01.420980 systemd[1758]: Queued start job for default target default.target.
Nov 1 10:04:01.440400 systemd[1758]: Created slice app.slice - User Application Slice.
Nov 1 10:04:01.440427 systemd[1758]: Reached target paths.target - Paths.
Nov 1 10:04:01.440466 systemd[1758]: Reached target timers.target - Timers.
Nov 1 10:04:01.442058 systemd[1758]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 1 10:04:01.455355 systemd[1758]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 1 10:04:01.455477 systemd[1758]: Reached target sockets.target - Sockets.
Nov 1 10:04:01.455514 systemd[1758]: Reached target basic.target - Basic System.
Nov 1 10:04:01.455553 systemd[1758]: Reached target default.target - Main User Target.
Nov 1 10:04:01.455585 systemd[1758]: Startup finished in 232ms.
Nov 1 10:04:01.455967 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 1 10:04:01.457981 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 1 10:04:01.461981 kubelet[1741]: E1101 10:04:01.461937 1741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 10:04:01.465807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 10:04:01.465995 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 10:04:01.466402 systemd[1]: kubelet.service: Consumed 2.135s CPU time, 267.7M memory peak.
Nov 1 10:04:01.484385 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:34370.service - OpenSSH per-connection server daemon (10.0.0.1:34370).
Nov 1 10:04:01.552117 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 34370 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:04:01.553567 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:04:01.558445 systemd-logind[1618]: New session 2 of user core.
Nov 1 10:04:01.572297 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 1 10:04:01.585451 sshd[1773]: Connection closed by 10.0.0.1 port 34370
Nov 1 10:04:01.585882 sshd-session[1770]: pam_unix(sshd:session): session closed for user core
Nov 1 10:04:01.597928 systemd[1]: sshd@1-10.0.0.55:22-10.0.0.1:34370.service: Deactivated successfully.
Nov 1 10:04:01.599858 systemd[1]: session-2.scope: Deactivated successfully.
Nov 1 10:04:01.600705 systemd-logind[1618]: Session 2 logged out. Waiting for processes to exit.
Nov 1 10:04:01.604396 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:34382.service - OpenSSH per-connection server daemon (10.0.0.1:34382).
Nov 1 10:04:01.605207 systemd-logind[1618]: Removed session 2.
Nov 1 10:04:01.666879 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 34382 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:04:01.668146 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:04:01.672840 systemd-logind[1618]: New session 3 of user core.
Nov 1 10:04:01.682428 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 1 10:04:01.692412 sshd[1783]: Connection closed by 10.0.0.1 port 34382
Nov 1 10:04:01.692857 sshd-session[1779]: pam_unix(sshd:session): session closed for user core
Nov 1 10:04:01.706976 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:34382.service: Deactivated successfully.
Nov 1 10:04:01.708949 systemd[1]: session-3.scope: Deactivated successfully.
Nov 1 10:04:01.709675 systemd-logind[1618]: Session 3 logged out. Waiting for processes to exit.
Nov 1 10:04:01.712332 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:34398.service - OpenSSH per-connection server daemon (10.0.0.1:34398).
Nov 1 10:04:01.713172 systemd-logind[1618]: Removed session 3.
Nov 1 10:04:01.782072 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 34398 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:04:01.783709 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:04:01.790073 systemd-logind[1618]: New session 4 of user core.
Nov 1 10:04:01.796325 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 1 10:04:01.812729 sshd[1792]: Connection closed by 10.0.0.1 port 34398
Nov 1 10:04:01.813140 sshd-session[1789]: pam_unix(sshd:session): session closed for user core
Nov 1 10:04:01.824777 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:34398.service: Deactivated successfully.
Nov 1 10:04:01.827015 systemd[1]: session-4.scope: Deactivated successfully.
Nov 1 10:04:01.828272 systemd-logind[1618]: Session 4 logged out. Waiting for processes to exit.
Nov 1 10:04:01.831623 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:34402.service - OpenSSH per-connection server daemon (10.0.0.1:34402).
Nov 1 10:04:01.832592 systemd-logind[1618]: Removed session 4.
Nov 1 10:04:01.902150 sshd[1798]: Accepted publickey for core from 10.0.0.1 port 34402 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:04:01.903762 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:04:01.908523 systemd-logind[1618]: New session 5 of user core.
Nov 1 10:04:01.922267 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 1 10:04:01.943763 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 1 10:04:01.944051 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 10:04:01.959713 sudo[1802]: pam_unix(sudo:session): session closed for user root
Nov 1 10:04:01.961344 sshd[1801]: Connection closed by 10.0.0.1 port 34402
Nov 1 10:04:01.961673 sshd-session[1798]: pam_unix(sshd:session): session closed for user core
Nov 1 10:04:01.972825 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:34402.service: Deactivated successfully.
Nov 1 10:04:01.974681 systemd[1]: session-5.scope: Deactivated successfully.
Nov 1 10:04:01.975442 systemd-logind[1618]: Session 5 logged out. Waiting for processes to exit.
Nov 1 10:04:01.978294 systemd[1]: Started sshd@5-10.0.0.55:22-10.0.0.1:34408.service - OpenSSH per-connection server daemon (10.0.0.1:34408).
Nov 1 10:04:01.978885 systemd-logind[1618]: Removed session 5.
Nov 1 10:04:02.036622 sshd[1808]: Accepted publickey for core from 10.0.0.1 port 34408 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:04:02.037932 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:04:02.042328 systemd-logind[1618]: New session 6 of user core.
Nov 1 10:04:02.052245 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 1 10:04:02.066170 sudo[1814]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 1 10:04:02.066536 sudo[1814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 10:04:02.074055 sudo[1814]: pam_unix(sudo:session): session closed for user root
Nov 1 10:04:02.082024 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 1 10:04:02.082349 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 10:04:02.093529 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 1 10:04:02.141803 augenrules[1836]: No rules
Nov 1 10:04:02.143445 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 1 10:04:02.143764 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 1 10:04:02.145052 sudo[1813]: pam_unix(sudo:session): session closed for user root
Nov 1 10:04:02.146987 sshd[1812]: Connection closed by 10.0.0.1 port 34408
Nov 1 10:04:02.147322 sshd-session[1808]: pam_unix(sshd:session): session closed for user core
Nov 1 10:04:02.157354 systemd[1]: sshd@5-10.0.0.55:22-10.0.0.1:34408.service: Deactivated successfully.
Nov 1 10:04:02.159883 systemd[1]: session-6.scope: Deactivated successfully.
Nov 1 10:04:02.161042 systemd-logind[1618]: Session 6 logged out. Waiting for processes to exit.
Nov 1 10:04:02.164516 systemd[1]: Started sshd@6-10.0.0.55:22-10.0.0.1:34416.service - OpenSSH per-connection server daemon (10.0.0.1:34416).
Nov 1 10:04:02.165411 systemd-logind[1618]: Removed session 6.
Nov 1 10:04:02.221424 sshd[1845]: Accepted publickey for core from 10.0.0.1 port 34416 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:04:02.223018 sshd-session[1845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:04:02.227900 systemd-logind[1618]: New session 7 of user core.
Nov 1 10:04:02.241275 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 1 10:04:02.255022 sudo[1849]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 1 10:04:02.255403 sudo[1849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 10:04:02.788860 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 1 10:04:02.817727 (dockerd)[1869]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 1 10:04:03.272835 dockerd[1869]: time="2025-11-01T10:04:03.272639553Z" level=info msg="Starting up"
Nov 1 10:04:03.273749 dockerd[1869]: time="2025-11-01T10:04:03.273690724Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 1 10:04:03.295503 dockerd[1869]: time="2025-11-01T10:04:03.295446645Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 1 10:04:03.942666 dockerd[1869]: time="2025-11-01T10:04:03.942578488Z" level=info msg="Loading containers: start."
Nov 1 10:04:03.954414 kernel: Initializing XFRM netlink socket
Nov 1 10:04:04.346004 systemd-networkd[1526]: docker0: Link UP
Nov 1 10:04:04.396507 dockerd[1869]: time="2025-11-01T10:04:04.396388083Z" level=info msg="Loading containers: done."
Nov 1 10:04:04.423404 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck57534304-merged.mount: Deactivated successfully.
Nov 1 10:04:04.423960 dockerd[1869]: time="2025-11-01T10:04:04.423899787Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 1 10:04:04.424088 dockerd[1869]: time="2025-11-01T10:04:04.424053756Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 1 10:04:04.424279 dockerd[1869]: time="2025-11-01T10:04:04.424241047Z" level=info msg="Initializing buildkit"
Nov 1 10:04:04.471483 dockerd[1869]: time="2025-11-01T10:04:04.471406960Z" level=info msg="Completed buildkit initialization"
Nov 1 10:04:04.480003 dockerd[1869]: time="2025-11-01T10:04:04.479897443Z" level=info msg="Daemon has completed initialization"
Nov 1 10:04:04.480207 dockerd[1869]: time="2025-11-01T10:04:04.480025914Z" level=info msg="API listen on /run/docker.sock"
Nov 1 10:04:04.480463 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 1 10:04:05.566431 containerd[1640]: time="2025-11-01T10:04:05.566365228Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 1 10:04:06.329544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3528876381.mount: Deactivated successfully.
Nov 1 10:04:07.317939 containerd[1640]: time="2025-11-01T10:04:07.317867409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 10:04:07.319088 containerd[1640]: time="2025-11-01T10:04:07.318600454Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=27191854"
Nov 1 10:04:07.319810 containerd[1640]: time="2025-11-01T10:04:07.319780908Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 10:04:07.322143 containerd[1640]: time="2025-11-01T10:04:07.322118121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 10:04:07.323081 containerd[1640]: time="2025-11-01T10:04:07.323031414Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.756605352s"
Nov 1 10:04:07.323151 containerd[1640]: time="2025-11-01T10:04:07.323087469Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Nov 1 10:04:07.323804 containerd[1640]: time="2025-11-01T10:04:07.323779617Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 1 10:04:08.608828 containerd[1640]: time="2025-11-01T10:04:08.608742100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 10:04:08.609689 containerd[1640]: time="2025-11-01T10:04:08.609605800Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24778872"
Nov 1 10:04:08.610978 containerd[1640]: time="2025-11-01T10:04:08.610906950Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 10:04:08.613866 containerd[1640]: time="2025-11-01T10:04:08.613835793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 10:04:08.614746 containerd[1640]: time="2025-11-01T10:04:08.614690265Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.290880642s"
Nov 1 10:04:08.614746 containerd[1640]: time="2025-11-01T10:04:08.614735059Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Nov 1 10:04:08.615441 containerd[1640]: time="2025-11-01T10:04:08.615404965Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 1 10:04:10.607146 containerd[1640]: time="2025-11-01T10:04:10.606678785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 10:04:10.607907 containerd[1640]: time="2025-11-01T10:04:10.607662980Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19168154"
Nov 1 10:04:10.608917 containerd[1640]: time="2025-11-01T10:04:10.608879331Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 10:04:10.611670 containerd[1640]: time="2025-11-01T10:04:10.611609501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 10:04:10.612522 containerd[1640]: time="2025-11-01T10:04:10.612466008Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.997020206s"
Nov 1 10:04:10.612522 containerd[1640]: time="2025-11-01T10:04:10.612516923Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Nov 1 10:04:10.613575 containerd[1640]: time="2025-11-01T10:04:10.613519624Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 1 10:04:11.539608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 1 10:04:11.541803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 10:04:11.806866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 10:04:11.818394 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 10:04:11.829868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1629441725.mount: Deactivated successfully. Nov 1 10:04:12.547481 kubelet[2167]: E1101 10:04:12.547411 2167 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 10:04:12.553729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 10:04:12.553919 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 10:04:12.554507 systemd[1]: kubelet.service: Consumed 321ms CPU time, 111.3M memory peak. Nov 1 10:04:13.169236 containerd[1640]: time="2025-11-01T10:04:13.169156124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:13.170278 containerd[1640]: time="2025-11-01T10:04:13.170245457Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=19362742" Nov 1 10:04:13.171593 containerd[1640]: time="2025-11-01T10:04:13.171554252Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:13.173922 containerd[1640]: time="2025-11-01T10:04:13.173885303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:13.174693 containerd[1640]: time="2025-11-01T10:04:13.174659766Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.561086692s" Nov 1 10:04:13.174693 containerd[1640]: time="2025-11-01T10:04:13.174691084Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 10:04:13.175263 containerd[1640]: time="2025-11-01T10:04:13.175222721Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 10:04:14.163480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount786790146.mount: Deactivated successfully. Nov 1 10:04:14.872398 containerd[1640]: time="2025-11-01T10:04:14.872326236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:14.873279 containerd[1640]: time="2025-11-01T10:04:14.873210805Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17590866" Nov 1 10:04:14.874704 containerd[1640]: time="2025-11-01T10:04:14.874630588Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:14.876985 containerd[1640]: time="2025-11-01T10:04:14.876951721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:14.878023 containerd[1640]: time="2025-11-01T10:04:14.877963608Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.702707364s" Nov 1 10:04:14.878023 containerd[1640]: time="2025-11-01T10:04:14.878015175Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 10:04:14.878761 containerd[1640]: time="2025-11-01T10:04:14.878734023Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 10:04:15.440236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4221951638.mount: Deactivated successfully. Nov 1 10:04:15.446842 containerd[1640]: time="2025-11-01T10:04:15.446786564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 10:04:15.447574 containerd[1640]: time="2025-11-01T10:04:15.447543684Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 1 10:04:15.448740 containerd[1640]: time="2025-11-01T10:04:15.448693871Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 10:04:15.450841 containerd[1640]: time="2025-11-01T10:04:15.450786957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 10:04:15.451398 containerd[1640]: time="2025-11-01T10:04:15.451331498Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 572.569322ms" Nov 1 10:04:15.451398 containerd[1640]: time="2025-11-01T10:04:15.451376703Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 10:04:15.451870 containerd[1640]: time="2025-11-01T10:04:15.451826075Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 10:04:16.423674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2202837663.mount: Deactivated successfully. Nov 1 10:04:18.074651 containerd[1640]: time="2025-11-01T10:04:18.074585906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:18.075440 containerd[1640]: time="2025-11-01T10:04:18.075392719Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57133525" Nov 1 10:04:18.076786 containerd[1640]: time="2025-11-01T10:04:18.076744815Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:18.079196 containerd[1640]: time="2025-11-01T10:04:18.079161407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:18.080073 containerd[1640]: time="2025-11-01T10:04:18.080043331Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag 
\"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.628190495s" Nov 1 10:04:18.080131 containerd[1640]: time="2025-11-01T10:04:18.080071824Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 10:04:20.301865 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:04:20.302218 systemd[1]: kubelet.service: Consumed 321ms CPU time, 111.3M memory peak. Nov 1 10:04:20.304896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:04:20.332877 systemd[1]: Reload requested from client PID 2323 ('systemctl') (unit session-7.scope)... Nov 1 10:04:20.332913 systemd[1]: Reloading... Nov 1 10:04:20.421136 zram_generator::config[2373]: No configuration found. Nov 1 10:04:21.011479 systemd[1]: Reloading finished in 678 ms. Nov 1 10:04:21.074801 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 10:04:21.074915 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 10:04:21.075300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:04:21.075350 systemd[1]: kubelet.service: Consumed 173ms CPU time, 98.4M memory peak. Nov 1 10:04:21.077024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:04:21.253860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:04:21.271405 (kubelet)[2415]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 10:04:21.327161 kubelet[2415]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 10:04:21.327161 kubelet[2415]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 10:04:21.327161 kubelet[2415]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 10:04:21.327566 kubelet[2415]: I1101 10:04:21.327267 2415 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 10:04:21.438527 kubelet[2415]: I1101 10:04:21.438451 2415 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 10:04:21.438527 kubelet[2415]: I1101 10:04:21.438505 2415 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 10:04:21.438861 kubelet[2415]: I1101 10:04:21.438835 2415 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 10:04:21.468233 kubelet[2415]: E1101 10:04:21.468173 2415 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Nov 1 10:04:21.475775 kubelet[2415]: I1101 10:04:21.475719 2415 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 10:04:21.486096 kubelet[2415]: I1101 10:04:21.486053 2415 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 1 10:04:21.491486 kubelet[2415]: I1101 10:04:21.491453 2415 
server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 10:04:21.491771 kubelet[2415]: I1101 10:04:21.491727 2415 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 10:04:21.491966 kubelet[2415]: I1101 10:04:21.491757 2415 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 10:04:21.492682 kubelet[2415]: I1101 
10:04:21.492653 2415 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 10:04:21.492682 kubelet[2415]: I1101 10:04:21.492670 2415 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 10:04:21.492848 kubelet[2415]: I1101 10:04:21.492823 2415 state_mem.go:36] "Initialized new in-memory state store" Nov 1 10:04:21.495795 kubelet[2415]: I1101 10:04:21.495768 2415 kubelet.go:446] "Attempting to sync node with API server" Nov 1 10:04:21.495833 kubelet[2415]: I1101 10:04:21.495801 2415 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 10:04:21.495858 kubelet[2415]: I1101 10:04:21.495838 2415 kubelet.go:352] "Adding apiserver pod source" Nov 1 10:04:21.495858 kubelet[2415]: I1101 10:04:21.495856 2415 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 10:04:21.498699 kubelet[2415]: W1101 10:04:21.498647 2415 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Nov 1 10:04:21.498743 kubelet[2415]: W1101 10:04:21.498686 2415 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Nov 1 10:04:21.498743 kubelet[2415]: E1101 10:04:21.498715 2415 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Nov 1 10:04:21.498800 kubelet[2415]: E1101 10:04:21.498740 2415 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Nov 1 10:04:21.499606 kubelet[2415]: I1101 10:04:21.499566 2415 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 1 10:04:21.500095 kubelet[2415]: I1101 10:04:21.500065 2415 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 10:04:21.500185 kubelet[2415]: W1101 10:04:21.500167 2415 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 10:04:21.502175 kubelet[2415]: I1101 10:04:21.502140 2415 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 10:04:21.505088 kubelet[2415]: I1101 10:04:21.503524 2415 server.go:1287] "Started kubelet" Nov 1 10:04:21.505088 kubelet[2415]: I1101 10:04:21.504181 2415 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 10:04:21.506388 kubelet[2415]: I1101 10:04:21.506358 2415 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 10:04:21.506478 kubelet[2415]: I1101 10:04:21.506465 2415 server.go:479] "Adding debug handlers to kubelet server" Nov 1 10:04:21.509357 kubelet[2415]: I1101 10:04:21.509333 2415 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 10:04:21.514257 kubelet[2415]: I1101 10:04:21.514240 2415 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 10:04:21.514720 kubelet[2415]: E1101 10:04:21.514703 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:04:21.515183 kubelet[2415]: I1101 10:04:21.515166 
2415 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 10:04:21.515449 kubelet[2415]: E1101 10:04:21.515424 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="200ms" Nov 1 10:04:21.522982 kubelet[2415]: I1101 10:04:21.522898 2415 reconciler.go:26] "Reconciler: start to sync state" Nov 1 10:04:21.523840 kubelet[2415]: I1101 10:04:21.523769 2415 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 10:04:21.523840 kubelet[2415]: W1101 10:04:21.523798 2415 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Nov 1 10:04:21.523932 kubelet[2415]: E1101 10:04:21.523856 2415 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Nov 1 10:04:21.524145 kubelet[2415]: I1101 10:04:21.524120 2415 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 10:04:21.525340 kubelet[2415]: I1101 10:04:21.524394 2415 factory.go:221] Registration of the systemd container factory successfully Nov 1 10:04:21.525340 kubelet[2415]: I1101 10:04:21.524509 2415 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 10:04:21.526897 kubelet[2415]: E1101 10:04:21.526867 
2415 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 10:04:21.527313 kubelet[2415]: I1101 10:04:21.527286 2415 factory.go:221] Registration of the containerd container factory successfully Nov 1 10:04:21.529293 kubelet[2415]: E1101 10:04:21.526798 2415 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873d9e6181b7bfd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 10:04:21.502155773 +0000 UTC m=+0.221481428,LastTimestamp:2025-11-01 10:04:21.502155773 +0000 UTC m=+0.221481428,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 10:04:21.536373 kubelet[2415]: I1101 10:04:21.536291 2415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 10:04:21.538759 kubelet[2415]: I1101 10:04:21.538727 2415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 10:04:21.538819 kubelet[2415]: I1101 10:04:21.538765 2415 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 10:04:21.538819 kubelet[2415]: I1101 10:04:21.538796 2415 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 10:04:21.538819 kubelet[2415]: I1101 10:04:21.538807 2415 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 10:04:21.538901 kubelet[2415]: E1101 10:04:21.538867 2415 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 10:04:21.540856 kubelet[2415]: W1101 10:04:21.540826 2415 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Nov 1 10:04:21.540900 kubelet[2415]: E1101 10:04:21.540860 2415 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Nov 1 10:04:21.541349 kubelet[2415]: I1101 10:04:21.541326 2415 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 10:04:21.541444 kubelet[2415]: I1101 10:04:21.541404 2415 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 10:04:21.541444 kubelet[2415]: I1101 10:04:21.541431 2415 state_mem.go:36] "Initialized new in-memory state store" Nov 1 10:04:21.615076 kubelet[2415]: E1101 10:04:21.615003 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:04:21.639677 kubelet[2415]: E1101 10:04:21.639624 2415 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 10:04:21.716041 kubelet[2415]: E1101 10:04:21.715973 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:04:21.716572 kubelet[2415]: E1101 10:04:21.716536 2415 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="400ms" Nov 1 10:04:21.776406 kubelet[2415]: I1101 10:04:21.776265 2415 policy_none.go:49] "None policy: Start" Nov 1 10:04:21.776406 kubelet[2415]: I1101 10:04:21.776332 2415 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 10:04:21.776406 kubelet[2415]: I1101 10:04:21.776361 2415 state_mem.go:35] "Initializing new in-memory state store" Nov 1 10:04:21.783923 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 10:04:21.798213 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 1 10:04:21.801506 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 1 10:04:21.816293 kubelet[2415]: E1101 10:04:21.816266 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:04:21.816371 kubelet[2415]: I1101 10:04:21.816318 2415 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 10:04:21.816724 kubelet[2415]: I1101 10:04:21.816669 2415 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 10:04:21.816724 kubelet[2415]: I1101 10:04:21.816698 2415 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 10:04:21.817164 kubelet[2415]: I1101 10:04:21.817099 2415 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 10:04:21.818585 kubelet[2415]: E1101 10:04:21.818529 2415 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 10:04:21.818676 kubelet[2415]: E1101 10:04:21.818631 2415 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 10:04:21.869815 systemd[1]: Created slice kubepods-burstable-podea034468e0eb5a4b454509ed219c5775.slice - libcontainer container kubepods-burstable-podea034468e0eb5a4b454509ed219c5775.slice. Nov 1 10:04:21.898201 kubelet[2415]: E1101 10:04:21.898137 2415 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:04:21.901690 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Nov 1 10:04:21.918624 kubelet[2415]: I1101 10:04:21.918582 2415 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 10:04:21.919280 kubelet[2415]: E1101 10:04:21.919231 2415 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Nov 1 10:04:21.920670 kubelet[2415]: E1101 10:04:21.920623 2415 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:04:21.924201 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. 
Nov 1 10:04:21.925031 kubelet[2415]: I1101 10:04:21.924852 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:04:21.925031 kubelet[2415]: I1101 10:04:21.924886 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 10:04:21.925031 kubelet[2415]: I1101 10:04:21.924936 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea034468e0eb5a4b454509ed219c5775-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ea034468e0eb5a4b454509ed219c5775\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:04:21.925031 kubelet[2415]: I1101 10:04:21.924956 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea034468e0eb5a4b454509ed219c5775-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ea034468e0eb5a4b454509ed219c5775\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:04:21.925031 kubelet[2415]: I1101 10:04:21.924972 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:04:21.925221 kubelet[2415]: I1101 10:04:21.924987 2415 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:04:21.925221 kubelet[2415]: I1101 10:04:21.925010 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea034468e0eb5a4b454509ed219c5775-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ea034468e0eb5a4b454509ed219c5775\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:04:21.925221 kubelet[2415]: I1101 10:04:21.925026 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:04:21.925221 kubelet[2415]: I1101 10:04:21.925051 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:04:21.926666 kubelet[2415]: E1101 10:04:21.926624 2415 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:04:22.117674 kubelet[2415]: E1101 10:04:22.117607 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="800ms" Nov 1 10:04:22.120703 kubelet[2415]: I1101 10:04:22.120678 2415 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 10:04:22.121014 kubelet[2415]: E1101 10:04:22.120975 2415 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Nov 1 10:04:22.199723 kubelet[2415]: E1101 10:04:22.199648 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:22.200518 containerd[1640]: time="2025-11-01T10:04:22.200466765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ea034468e0eb5a4b454509ed219c5775,Namespace:kube-system,Attempt:0,}" Nov 1 10:04:22.221635 kubelet[2415]: E1101 10:04:22.221590 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:22.222456 containerd[1640]: time="2025-11-01T10:04:22.222167622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 1 10:04:22.227675 kubelet[2415]: E1101 10:04:22.227633 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:22.228095 containerd[1640]: time="2025-11-01T10:04:22.228059071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 
1 10:04:22.230782 containerd[1640]: time="2025-11-01T10:04:22.230759265Z" level=info msg="connecting to shim f89cde77c83a4e850ec2681b17eb8801ac11ee1a4364a983faa9ff4a0fefd374" address="unix:///run/containerd/s/1cc163ad8da40c5707ca72c06ea5b0f10932ff3f085bc0aced47f508f4318329" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:04:22.263372 systemd[1]: Started cri-containerd-f89cde77c83a4e850ec2681b17eb8801ac11ee1a4364a983faa9ff4a0fefd374.scope - libcontainer container f89cde77c83a4e850ec2681b17eb8801ac11ee1a4364a983faa9ff4a0fefd374. Nov 1 10:04:22.322284 containerd[1640]: time="2025-11-01T10:04:22.322164522Z" level=info msg="connecting to shim 0c0de981bd6d11788ab5b543dee59428bc90366f530c2e5f0da7a71ad6aba31f" address="unix:///run/containerd/s/2789cd98deb78e957cb5ce7891a9f8b5402ed46f93b1c1a2b0cc58d2008d923f" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:04:22.324982 containerd[1640]: time="2025-11-01T10:04:22.324934287Z" level=info msg="connecting to shim e072ffd964d526ea4ca2e6e8b9ddca36ded47bf08e6cf0cb48cdc15a61df0a4e" address="unix:///run/containerd/s/7e6b08deec27008bba52663505da49d0d511b39e09ca50843d23bdcc1abc7fec" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:04:22.364445 systemd[1]: Started cri-containerd-e072ffd964d526ea4ca2e6e8b9ddca36ded47bf08e6cf0cb48cdc15a61df0a4e.scope - libcontainer container e072ffd964d526ea4ca2e6e8b9ddca36ded47bf08e6cf0cb48cdc15a61df0a4e. 
Nov 1 10:04:22.367024 containerd[1640]: time="2025-11-01T10:04:22.366881192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ea034468e0eb5a4b454509ed219c5775,Namespace:kube-system,Attempt:0,} returns sandbox id \"f89cde77c83a4e850ec2681b17eb8801ac11ee1a4364a983faa9ff4a0fefd374\"" Nov 1 10:04:22.369717 kubelet[2415]: E1101 10:04:22.368817 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:22.370126 systemd[1]: Started cri-containerd-0c0de981bd6d11788ab5b543dee59428bc90366f530c2e5f0da7a71ad6aba31f.scope - libcontainer container 0c0de981bd6d11788ab5b543dee59428bc90366f530c2e5f0da7a71ad6aba31f. Nov 1 10:04:22.375205 containerd[1640]: time="2025-11-01T10:04:22.375090517Z" level=info msg="CreateContainer within sandbox \"f89cde77c83a4e850ec2681b17eb8801ac11ee1a4364a983faa9ff4a0fefd374\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 10:04:22.378697 kubelet[2415]: W1101 10:04:22.378665 2415 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Nov 1 10:04:22.379005 kubelet[2415]: E1101 10:04:22.378971 2415 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Nov 1 10:04:22.397696 kubelet[2415]: W1101 10:04:22.397647 2415 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: 
connect: connection refused Nov 1 10:04:22.402375 kubelet[2415]: E1101 10:04:22.397819 2415 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Nov 1 10:04:22.523356 kubelet[2415]: I1101 10:04:22.523319 2415 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 10:04:22.523835 kubelet[2415]: E1101 10:04:22.523777 2415 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Nov 1 10:04:22.732286 containerd[1640]: time="2025-11-01T10:04:22.732085420Z" level=info msg="Container 02a2be8d70c577ebf59a247835214e5aa6361d138a7bd9db60872ac04790cf88: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:04:22.733011 containerd[1640]: time="2025-11-01T10:04:22.732927339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"e072ffd964d526ea4ca2e6e8b9ddca36ded47bf08e6cf0cb48cdc15a61df0a4e\"" Nov 1 10:04:22.734490 kubelet[2415]: E1101 10:04:22.734439 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:22.737833 containerd[1640]: time="2025-11-01T10:04:22.737806229Z" level=info msg="CreateContainer within sandbox \"e072ffd964d526ea4ca2e6e8b9ddca36ded47bf08e6cf0cb48cdc15a61df0a4e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 10:04:22.738590 containerd[1640]: time="2025-11-01T10:04:22.738566254Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c0de981bd6d11788ab5b543dee59428bc90366f530c2e5f0da7a71ad6aba31f\"" Nov 1 10:04:22.739051 kubelet[2415]: E1101 10:04:22.739017 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:22.740531 containerd[1640]: time="2025-11-01T10:04:22.740493869Z" level=info msg="CreateContainer within sandbox \"0c0de981bd6d11788ab5b543dee59428bc90366f530c2e5f0da7a71ad6aba31f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 10:04:22.744988 containerd[1640]: time="2025-11-01T10:04:22.744957640Z" level=info msg="CreateContainer within sandbox \"f89cde77c83a4e850ec2681b17eb8801ac11ee1a4364a983faa9ff4a0fefd374\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"02a2be8d70c577ebf59a247835214e5aa6361d138a7bd9db60872ac04790cf88\"" Nov 1 10:04:22.745640 containerd[1640]: time="2025-11-01T10:04:22.745615394Z" level=info msg="StartContainer for \"02a2be8d70c577ebf59a247835214e5aa6361d138a7bd9db60872ac04790cf88\"" Nov 1 10:04:22.746686 containerd[1640]: time="2025-11-01T10:04:22.746661976Z" level=info msg="connecting to shim 02a2be8d70c577ebf59a247835214e5aa6361d138a7bd9db60872ac04790cf88" address="unix:///run/containerd/s/1cc163ad8da40c5707ca72c06ea5b0f10932ff3f085bc0aced47f508f4318329" protocol=ttrpc version=3 Nov 1 10:04:22.757354 containerd[1640]: time="2025-11-01T10:04:22.757284888Z" level=info msg="Container 5976b1c37a7ac13ee36f952f75be43d28383bc68369e7932567c013d38051647: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:04:22.772395 systemd[1]: Started cri-containerd-02a2be8d70c577ebf59a247835214e5aa6361d138a7bd9db60872ac04790cf88.scope - libcontainer container 02a2be8d70c577ebf59a247835214e5aa6361d138a7bd9db60872ac04790cf88. 
Nov 1 10:04:22.842623 kubelet[2415]: W1101 10:04:22.842513 2415 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Nov 1 10:04:22.842623 kubelet[2415]: E1101 10:04:22.842623 2415 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Nov 1 10:04:23.092162 containerd[1640]: time="2025-11-01T10:04:23.092085038Z" level=info msg="Container 443d863dff29742c44b4bf75d3d48b3a526334d076ba89350dfedb83c26886c6: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:04:23.180666 containerd[1640]: time="2025-11-01T10:04:23.180597971Z" level=info msg="StartContainer for \"02a2be8d70c577ebf59a247835214e5aa6361d138a7bd9db60872ac04790cf88\" returns successfully" Nov 1 10:04:23.325346 kubelet[2415]: I1101 10:04:23.325286 2415 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 10:04:23.394069 containerd[1640]: time="2025-11-01T10:04:23.393907773Z" level=info msg="CreateContainer within sandbox \"e072ffd964d526ea4ca2e6e8b9ddca36ded47bf08e6cf0cb48cdc15a61df0a4e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5976b1c37a7ac13ee36f952f75be43d28383bc68369e7932567c013d38051647\"" Nov 1 10:04:23.394671 containerd[1640]: time="2025-11-01T10:04:23.394632472Z" level=info msg="StartContainer for \"5976b1c37a7ac13ee36f952f75be43d28383bc68369e7932567c013d38051647\"" Nov 1 10:04:23.396090 containerd[1640]: time="2025-11-01T10:04:23.396048869Z" level=info msg="connecting to shim 5976b1c37a7ac13ee36f952f75be43d28383bc68369e7932567c013d38051647" 
address="unix:///run/containerd/s/7e6b08deec27008bba52663505da49d0d511b39e09ca50843d23bdcc1abc7fec" protocol=ttrpc version=3 Nov 1 10:04:23.397548 containerd[1640]: time="2025-11-01T10:04:23.397520078Z" level=info msg="CreateContainer within sandbox \"0c0de981bd6d11788ab5b543dee59428bc90366f530c2e5f0da7a71ad6aba31f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"443d863dff29742c44b4bf75d3d48b3a526334d076ba89350dfedb83c26886c6\"" Nov 1 10:04:23.400431 containerd[1640]: time="2025-11-01T10:04:23.398299309Z" level=info msg="StartContainer for \"443d863dff29742c44b4bf75d3d48b3a526334d076ba89350dfedb83c26886c6\"" Nov 1 10:04:23.400431 containerd[1640]: time="2025-11-01T10:04:23.399560765Z" level=info msg="connecting to shim 443d863dff29742c44b4bf75d3d48b3a526334d076ba89350dfedb83c26886c6" address="unix:///run/containerd/s/2789cd98deb78e957cb5ce7891a9f8b5402ed46f93b1c1a2b0cc58d2008d923f" protocol=ttrpc version=3 Nov 1 10:04:23.440299 systemd[1]: Started cri-containerd-5976b1c37a7ac13ee36f952f75be43d28383bc68369e7932567c013d38051647.scope - libcontainer container 5976b1c37a7ac13ee36f952f75be43d28383bc68369e7932567c013d38051647. Nov 1 10:04:23.444597 systemd[1]: Started cri-containerd-443d863dff29742c44b4bf75d3d48b3a526334d076ba89350dfedb83c26886c6.scope - libcontainer container 443d863dff29742c44b4bf75d3d48b3a526334d076ba89350dfedb83c26886c6. 
Nov 1 10:04:23.526502 containerd[1640]: time="2025-11-01T10:04:23.526446491Z" level=info msg="StartContainer for \"443d863dff29742c44b4bf75d3d48b3a526334d076ba89350dfedb83c26886c6\" returns successfully" Nov 1 10:04:23.537430 containerd[1640]: time="2025-11-01T10:04:23.537359917Z" level=info msg="StartContainer for \"5976b1c37a7ac13ee36f952f75be43d28383bc68369e7932567c013d38051647\" returns successfully" Nov 1 10:04:23.553092 kubelet[2415]: E1101 10:04:23.553008 2415 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:04:23.555728 kubelet[2415]: E1101 10:04:23.553383 2415 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:04:23.555728 kubelet[2415]: E1101 10:04:23.553735 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:23.555728 kubelet[2415]: E1101 10:04:23.553851 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:23.557669 kubelet[2415]: E1101 10:04:23.557630 2415 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:04:23.557877 kubelet[2415]: E1101 10:04:23.557760 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:24.529657 kubelet[2415]: E1101 10:04:24.529481 2415 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 1 10:04:24.559435 kubelet[2415]: 
E1101 10:04:24.559383 2415 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:04:24.559903 kubelet[2415]: E1101 10:04:24.559510 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:24.559903 kubelet[2415]: E1101 10:04:24.559785 2415 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:04:24.559960 kubelet[2415]: E1101 10:04:24.559934 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:24.560076 kubelet[2415]: E1101 10:04:24.560050 2415 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:04:24.560257 kubelet[2415]: E1101 10:04:24.560237 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:24.605854 kubelet[2415]: I1101 10:04:24.605782 2415 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 10:04:24.615139 kubelet[2415]: I1101 10:04:24.615054 2415 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 10:04:24.671682 kubelet[2415]: E1101 10:04:24.671632 2415 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 10:04:24.671682 kubelet[2415]: I1101 10:04:24.671663 2415 kubelet.go:3194] "Creating a mirror pod for static 
pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 10:04:24.673417 kubelet[2415]: E1101 10:04:24.673336 2415 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 10:04:24.673417 kubelet[2415]: I1101 10:04:24.673372 2415 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 10:04:24.674539 kubelet[2415]: E1101 10:04:24.674514 2415 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 10:04:25.498861 kubelet[2415]: I1101 10:04:25.498797 2415 apiserver.go:52] "Watching apiserver" Nov 1 10:04:25.515983 kubelet[2415]: I1101 10:04:25.515906 2415 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 10:04:25.559962 kubelet[2415]: I1101 10:04:25.559910 2415 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 10:04:25.566747 kubelet[2415]: E1101 10:04:25.566715 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:26.561078 kubelet[2415]: E1101 10:04:26.561033 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:26.716236 systemd[1]: Reload requested from client PID 2686 ('systemctl') (unit session-7.scope)... Nov 1 10:04:26.716253 systemd[1]: Reloading... Nov 1 10:04:26.791142 zram_generator::config[2730]: No configuration found. Nov 1 10:04:27.026408 systemd[1]: Reloading finished in 309 ms. 
Nov 1 10:04:27.053299 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:04:27.068953 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 10:04:27.069299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:04:27.069348 systemd[1]: kubelet.service: Consumed 781ms CPU time, 132.1M memory peak. Nov 1 10:04:27.071268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:04:27.276791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:04:27.297550 (kubelet)[2775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 10:04:27.337237 kubelet[2775]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 10:04:27.337237 kubelet[2775]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 10:04:27.337237 kubelet[2775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 10:04:27.337684 kubelet[2775]: I1101 10:04:27.337271 2775 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 10:04:27.343753 kubelet[2775]: I1101 10:04:27.343718 2775 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 10:04:27.343753 kubelet[2775]: I1101 10:04:27.343740 2775 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 10:04:27.343990 kubelet[2775]: I1101 10:04:27.343961 2775 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 10:04:27.345196 kubelet[2775]: I1101 10:04:27.345169 2775 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 10:04:27.347208 kubelet[2775]: I1101 10:04:27.347151 2775 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 10:04:27.351356 kubelet[2775]: I1101 10:04:27.351329 2775 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 1 10:04:27.356787 kubelet[2775]: I1101 10:04:27.356751 2775 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 10:04:27.357005 kubelet[2775]: I1101 10:04:27.356984 2775 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 10:04:27.357203 kubelet[2775]: I1101 10:04:27.357006 2775 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 10:04:27.357286 kubelet[2775]: I1101 10:04:27.357217 2775 topology_manager.go:138] "Creating topology manager with none policy" Nov 
1 10:04:27.357286 kubelet[2775]: I1101 10:04:27.357226 2775 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 10:04:27.357286 kubelet[2775]: I1101 10:04:27.357276 2775 state_mem.go:36] "Initialized new in-memory state store" Nov 1 10:04:27.357453 kubelet[2775]: I1101 10:04:27.357442 2775 kubelet.go:446] "Attempting to sync node with API server" Nov 1 10:04:27.357477 kubelet[2775]: I1101 10:04:27.357466 2775 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 10:04:27.357503 kubelet[2775]: I1101 10:04:27.357493 2775 kubelet.go:352] "Adding apiserver pod source" Nov 1 10:04:27.357525 kubelet[2775]: I1101 10:04:27.357504 2775 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 10:04:27.358776 kubelet[2775]: I1101 10:04:27.358753 2775 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 1 10:04:27.359156 kubelet[2775]: I1101 10:04:27.359139 2775 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 10:04:27.359612 kubelet[2775]: I1101 10:04:27.359590 2775 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 10:04:27.359648 kubelet[2775]: I1101 10:04:27.359620 2775 server.go:1287] "Started kubelet" Nov 1 10:04:27.360917 kubelet[2775]: I1101 10:04:27.360833 2775 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 10:04:27.361036 kubelet[2775]: I1101 10:04:27.361002 2775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 10:04:27.361069 kubelet[2775]: I1101 10:04:27.361042 2775 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 10:04:27.361208 kubelet[2775]: I1101 10:04:27.361073 2775 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 10:04:27.361609 kubelet[2775]: I1101 10:04:27.361585 2775 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 10:04:27.365690 kubelet[2775]: I1101 10:04:27.361001 2775 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 10:04:27.366239 kubelet[2775]: I1101 10:04:27.366219 2775 server.go:479] "Adding debug handlers to kubelet server" Nov 1 10:04:27.366388 kubelet[2775]: I1101 10:04:27.366363 2775 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 10:04:27.366566 kubelet[2775]: I1101 10:04:27.366551 2775 reconciler.go:26] "Reconciler: start to sync state" Nov 1 10:04:27.368928 kubelet[2775]: E1101 10:04:27.368885 2775 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:04:27.370000 kubelet[2775]: I1101 10:04:27.369934 2775 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 10:04:27.375086 kubelet[2775]: I1101 10:04:27.375045 2775 factory.go:221] Registration of the containerd container factory successfully Nov 1 10:04:27.375086 kubelet[2775]: I1101 10:04:27.375063 2775 factory.go:221] Registration of the systemd container factory successfully Nov 1 10:04:27.378016 kubelet[2775]: I1101 10:04:27.377961 2775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 10:04:27.378171 kubelet[2775]: E1101 10:04:27.378144 2775 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 10:04:27.379668 kubelet[2775]: I1101 10:04:27.379642 2775 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 1 10:04:27.380099 kubelet[2775]: I1101 10:04:27.380066 2775 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 10:04:27.380099 kubelet[2775]: I1101 10:04:27.380098 2775 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 10:04:27.380099 kubelet[2775]: I1101 10:04:27.380126 2775 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 10:04:27.380320 kubelet[2775]: E1101 10:04:27.380172 2775 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 10:04:27.416575 kubelet[2775]: I1101 10:04:27.416523 2775 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 10:04:27.416575 kubelet[2775]: I1101 10:04:27.416554 2775 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 10:04:27.416575 kubelet[2775]: I1101 10:04:27.416588 2775 state_mem.go:36] "Initialized new in-memory state store" Nov 1 10:04:27.416833 kubelet[2775]: I1101 10:04:27.416808 2775 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 10:04:27.416833 kubelet[2775]: I1101 10:04:27.416819 2775 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 10:04:27.416967 kubelet[2775]: I1101 10:04:27.416837 2775 policy_none.go:49] "None policy: Start" Nov 1 10:04:27.416967 kubelet[2775]: I1101 10:04:27.416860 2775 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 10:04:27.416967 kubelet[2775]: I1101 10:04:27.416897 2775 state_mem.go:35] "Initializing new in-memory state store" Nov 1 10:04:27.417316 kubelet[2775]: I1101 10:04:27.417273 2775 state_mem.go:75] "Updated machine memory state" Nov 1 10:04:27.473662 kubelet[2775]: I1101 10:04:27.473565 2775 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 10:04:27.473847 kubelet[2775]: I1101 10:04:27.473825 
2775 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 10:04:27.473903 kubelet[2775]: I1101 10:04:27.473849 2775 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 10:04:27.474160 kubelet[2775]: I1101 10:04:27.474142 2775 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 10:04:27.475625 kubelet[2775]: E1101 10:04:27.475596 2775 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 10:04:27.482123 kubelet[2775]: I1101 10:04:27.480898 2775 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 10:04:27.482123 kubelet[2775]: I1101 10:04:27.481301 2775 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 10:04:27.482123 kubelet[2775]: I1101 10:04:27.481521 2775 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 10:04:27.490259 kubelet[2775]: E1101 10:04:27.490222 2775 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 10:04:27.582435 kubelet[2775]: I1101 10:04:27.582397 2775 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 10:04:27.589254 kubelet[2775]: I1101 10:04:27.589215 2775 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 1 10:04:27.589421 kubelet[2775]: I1101 10:04:27.589293 2775 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 10:04:27.667873 kubelet[2775]: I1101 10:04:27.667839 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea034468e0eb5a4b454509ed219c5775-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"ea034468e0eb5a4b454509ed219c5775\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:04:27.667954 kubelet[2775]: I1101 10:04:27.667871 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea034468e0eb5a4b454509ed219c5775-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ea034468e0eb5a4b454509ed219c5775\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:04:27.667954 kubelet[2775]: I1101 10:04:27.667909 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:04:27.667954 kubelet[2775]: I1101 10:04:27.667928 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:04:27.667954 kubelet[2775]: I1101 10:04:27.667942 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea034468e0eb5a4b454509ed219c5775-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ea034468e0eb5a4b454509ed219c5775\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:04:27.668046 kubelet[2775]: I1101 10:04:27.667958 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:04:27.668046 kubelet[2775]: I1101 10:04:27.667972 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:04:27.668046 kubelet[2775]: I1101 10:04:27.667989 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:04:27.668046 kubelet[2775]: I1101 10:04:27.668004 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 10:04:27.788130 kubelet[2775]: E1101 10:04:27.788043 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:27.789049 kubelet[2775]: E1101 10:04:27.789007 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:27.791502 kubelet[2775]: E1101 10:04:27.791473 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:28.358999 kubelet[2775]: I1101 10:04:28.358955 2775 apiserver.go:52] "Watching apiserver" Nov 1 10:04:28.366973 kubelet[2775]: I1101 10:04:28.366947 2775 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 10:04:28.400800 kubelet[2775]: E1101 10:04:28.400098 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:28.400800 kubelet[2775]: I1101 10:04:28.400586 2775 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 10:04:28.400800 kubelet[2775]: E1101 10:04:28.400672 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:28.488352 kubelet[2775]: E1101 10:04:28.488299 2775 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 10:04:28.488971 kubelet[2775]: E1101 10:04:28.488505 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:28.498902 kubelet[2775]: I1101 10:04:28.498570 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.4985300320000001 podStartE2EDuration="1.498530032s" podCreationTimestamp="2025-11-01 10:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:04:28.488600351 +0000 UTC m=+1.184866399" watchObservedRunningTime="2025-11-01 10:04:28.498530032 +0000 UTC m=+1.194796080" Nov 1 10:04:28.511202 kubelet[2775]: 
I1101 10:04:28.510264 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.510236847 podStartE2EDuration="3.510236847s" podCreationTimestamp="2025-11-01 10:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:04:28.499358616 +0000 UTC m=+1.195624664" watchObservedRunningTime="2025-11-01 10:04:28.510236847 +0000 UTC m=+1.206502895" Nov 1 10:04:28.520133 kubelet[2775]: I1101 10:04:28.519419 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.519080351 podStartE2EDuration="1.519080351s" podCreationTimestamp="2025-11-01 10:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:04:28.510418187 +0000 UTC m=+1.206684235" watchObservedRunningTime="2025-11-01 10:04:28.519080351 +0000 UTC m=+1.215346399" Nov 1 10:04:29.402752 kubelet[2775]: E1101 10:04:29.402702 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:29.402752 kubelet[2775]: E1101 10:04:29.402714 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:30.425021 kubelet[2775]: E1101 10:04:30.424954 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:31.315808 kubelet[2775]: E1101 10:04:31.315748 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:32.282571 kubelet[2775]: E1101 10:04:32.282511 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:33.025394 kubelet[2775]: I1101 10:04:33.025335 2775 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 10:04:33.025721 containerd[1640]: time="2025-11-01T10:04:33.025675859Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 10:04:33.026161 kubelet[2775]: I1101 10:04:33.025908 2775 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 10:04:33.819228 systemd[1]: Created slice kubepods-besteffort-podb37d6b61_9256_47ff_951a_127e364d3e7e.slice - libcontainer container kubepods-besteffort-podb37d6b61_9256_47ff_951a_127e364d3e7e.slice. Nov 1 10:04:33.904058 kubelet[2775]: I1101 10:04:33.903983 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b37d6b61-9256-47ff-951a-127e364d3e7e-kube-proxy\") pod \"kube-proxy-tt7j2\" (UID: \"b37d6b61-9256-47ff-951a-127e364d3e7e\") " pod="kube-system/kube-proxy-tt7j2" Nov 1 10:04:33.904058 kubelet[2775]: I1101 10:04:33.904029 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b37d6b61-9256-47ff-951a-127e364d3e7e-lib-modules\") pod \"kube-proxy-tt7j2\" (UID: \"b37d6b61-9256-47ff-951a-127e364d3e7e\") " pod="kube-system/kube-proxy-tt7j2" Nov 1 10:04:33.904058 kubelet[2775]: I1101 10:04:33.904044 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq44l\" (UniqueName: 
\"kubernetes.io/projected/b37d6b61-9256-47ff-951a-127e364d3e7e-kube-api-access-mq44l\") pod \"kube-proxy-tt7j2\" (UID: \"b37d6b61-9256-47ff-951a-127e364d3e7e\") " pod="kube-system/kube-proxy-tt7j2" Nov 1 10:04:33.904058 kubelet[2775]: I1101 10:04:33.904063 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b37d6b61-9256-47ff-951a-127e364d3e7e-xtables-lock\") pod \"kube-proxy-tt7j2\" (UID: \"b37d6b61-9256-47ff-951a-127e364d3e7e\") " pod="kube-system/kube-proxy-tt7j2" Nov 1 10:04:34.135810 kubelet[2775]: E1101 10:04:34.135517 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:34.138124 containerd[1640]: time="2025-11-01T10:04:34.137489644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tt7j2,Uid:b37d6b61-9256-47ff-951a-127e364d3e7e,Namespace:kube-system,Attempt:0,}" Nov 1 10:04:34.154170 systemd[1]: Created slice kubepods-besteffort-podc0639737_538a_4249_b592_d52ae2feb70c.slice - libcontainer container kubepods-besteffort-podc0639737_538a_4249_b592_d52ae2feb70c.slice. 
Nov 1 10:04:34.174265 containerd[1640]: time="2025-11-01T10:04:34.174160665Z" level=info msg="connecting to shim 504005f99d71ce9b33d6864b705264a2a4e5aedd8a6e57dae54618026ce1e8bd" address="unix:///run/containerd/s/9c220233f5741796c4fcedc816d8a17cce92482d753d9b2326a51365004d1b55" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:04:34.207293 kubelet[2775]: I1101 10:04:34.207243 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c0639737-538a-4249-b592-d52ae2feb70c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-gsk2n\" (UID: \"c0639737-538a-4249-b592-d52ae2feb70c\") " pod="tigera-operator/tigera-operator-7dcd859c48-gsk2n" Nov 1 10:04:34.207293 kubelet[2775]: I1101 10:04:34.207288 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9hwz\" (UniqueName: \"kubernetes.io/projected/c0639737-538a-4249-b592-d52ae2feb70c-kube-api-access-n9hwz\") pod \"tigera-operator-7dcd859c48-gsk2n\" (UID: \"c0639737-538a-4249-b592-d52ae2feb70c\") " pod="tigera-operator/tigera-operator-7dcd859c48-gsk2n" Nov 1 10:04:34.207312 systemd[1]: Started cri-containerd-504005f99d71ce9b33d6864b705264a2a4e5aedd8a6e57dae54618026ce1e8bd.scope - libcontainer container 504005f99d71ce9b33d6864b705264a2a4e5aedd8a6e57dae54618026ce1e8bd. 
Nov 1 10:04:34.233748 containerd[1640]: time="2025-11-01T10:04:34.233678668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tt7j2,Uid:b37d6b61-9256-47ff-951a-127e364d3e7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"504005f99d71ce9b33d6864b705264a2a4e5aedd8a6e57dae54618026ce1e8bd\"" Nov 1 10:04:34.234600 kubelet[2775]: E1101 10:04:34.234559 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:34.236979 containerd[1640]: time="2025-11-01T10:04:34.236946770Z" level=info msg="CreateContainer within sandbox \"504005f99d71ce9b33d6864b705264a2a4e5aedd8a6e57dae54618026ce1e8bd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 10:04:34.248612 containerd[1640]: time="2025-11-01T10:04:34.248551431Z" level=info msg="Container e6852789a63e81b3c72227a3ceb69d5febf4bd0f94021214c3cfb02043e3e8ab: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:04:34.252903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount112053854.mount: Deactivated successfully. 
Nov 1 10:04:34.261186 containerd[1640]: time="2025-11-01T10:04:34.261140646Z" level=info msg="CreateContainer within sandbox \"504005f99d71ce9b33d6864b705264a2a4e5aedd8a6e57dae54618026ce1e8bd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e6852789a63e81b3c72227a3ceb69d5febf4bd0f94021214c3cfb02043e3e8ab\"" Nov 1 10:04:34.261703 containerd[1640]: time="2025-11-01T10:04:34.261665103Z" level=info msg="StartContainer for \"e6852789a63e81b3c72227a3ceb69d5febf4bd0f94021214c3cfb02043e3e8ab\"" Nov 1 10:04:34.263028 containerd[1640]: time="2025-11-01T10:04:34.263000907Z" level=info msg="connecting to shim e6852789a63e81b3c72227a3ceb69d5febf4bd0f94021214c3cfb02043e3e8ab" address="unix:///run/containerd/s/9c220233f5741796c4fcedc816d8a17cce92482d753d9b2326a51365004d1b55" protocol=ttrpc version=3 Nov 1 10:04:34.292258 systemd[1]: Started cri-containerd-e6852789a63e81b3c72227a3ceb69d5febf4bd0f94021214c3cfb02043e3e8ab.scope - libcontainer container e6852789a63e81b3c72227a3ceb69d5febf4bd0f94021214c3cfb02043e3e8ab. 
Nov 1 10:04:34.371121 containerd[1640]: time="2025-11-01T10:04:34.371056356Z" level=info msg="StartContainer for \"e6852789a63e81b3c72227a3ceb69d5febf4bd0f94021214c3cfb02043e3e8ab\" returns successfully" Nov 1 10:04:34.416216 kubelet[2775]: E1101 10:04:34.416000 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:34.425073 kubelet[2775]: I1101 10:04:34.425009 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tt7j2" podStartSLOduration=1.424988325 podStartE2EDuration="1.424988325s" podCreationTimestamp="2025-11-01 10:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:04:34.424812474 +0000 UTC m=+7.121078523" watchObservedRunningTime="2025-11-01 10:04:34.424988325 +0000 UTC m=+7.121254373" Nov 1 10:04:34.468629 containerd[1640]: time="2025-11-01T10:04:34.468574162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gsk2n,Uid:c0639737-538a-4249-b592-d52ae2feb70c,Namespace:tigera-operator,Attempt:0,}" Nov 1 10:04:34.514863 containerd[1640]: time="2025-11-01T10:04:34.514347705Z" level=info msg="connecting to shim dfe88e652491f2911645867fba3947e7ddcded930444ff05d35771f768c5e82f" address="unix:///run/containerd/s/67c8ff0c786dbe082b3ac82ea5ec245bba2ffe95053e92b8972739bf0cac4d2a" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:04:34.540384 systemd[1]: Started cri-containerd-dfe88e652491f2911645867fba3947e7ddcded930444ff05d35771f768c5e82f.scope - libcontainer container dfe88e652491f2911645867fba3947e7ddcded930444ff05d35771f768c5e82f. 
Nov 1 10:04:34.591316 containerd[1640]: time="2025-11-01T10:04:34.591272611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gsk2n,Uid:c0639737-538a-4249-b592-d52ae2feb70c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dfe88e652491f2911645867fba3947e7ddcded930444ff05d35771f768c5e82f\"" Nov 1 10:04:34.593037 containerd[1640]: time="2025-11-01T10:04:34.592995884Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 10:04:35.912429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount57772089.mount: Deactivated successfully. Nov 1 10:04:36.258951 containerd[1640]: time="2025-11-01T10:04:36.258809610Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:36.259604 containerd[1640]: time="2025-11-01T10:04:36.259580752Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Nov 1 10:04:36.260725 containerd[1640]: time="2025-11-01T10:04:36.260662788Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:36.263216 containerd[1640]: time="2025-11-01T10:04:36.263166909Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:36.263751 containerd[1640]: time="2025-11-01T10:04:36.263717464Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.670671797s" Nov 1 10:04:36.263797 containerd[1640]: 
time="2025-11-01T10:04:36.263749955Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 10:04:36.265778 containerd[1640]: time="2025-11-01T10:04:36.265726234Z" level=info msg="CreateContainer within sandbox \"dfe88e652491f2911645867fba3947e7ddcded930444ff05d35771f768c5e82f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 10:04:36.274386 containerd[1640]: time="2025-11-01T10:04:36.274341332Z" level=info msg="Container 682315fbf5857b231e4da92b98b03e2ea4a3861018e9f36f1bbf52f8169a542b: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:04:36.277786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2003989410.mount: Deactivated successfully. Nov 1 10:04:36.282499 containerd[1640]: time="2025-11-01T10:04:36.282455597Z" level=info msg="CreateContainer within sandbox \"dfe88e652491f2911645867fba3947e7ddcded930444ff05d35771f768c5e82f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"682315fbf5857b231e4da92b98b03e2ea4a3861018e9f36f1bbf52f8169a542b\"" Nov 1 10:04:36.283070 containerd[1640]: time="2025-11-01T10:04:36.283031010Z" level=info msg="StartContainer for \"682315fbf5857b231e4da92b98b03e2ea4a3861018e9f36f1bbf52f8169a542b\"" Nov 1 10:04:36.283962 containerd[1640]: time="2025-11-01T10:04:36.283924791Z" level=info msg="connecting to shim 682315fbf5857b231e4da92b98b03e2ea4a3861018e9f36f1bbf52f8169a542b" address="unix:///run/containerd/s/67c8ff0c786dbe082b3ac82ea5ec245bba2ffe95053e92b8972739bf0cac4d2a" protocol=ttrpc version=3 Nov 1 10:04:36.306240 systemd[1]: Started cri-containerd-682315fbf5857b231e4da92b98b03e2ea4a3861018e9f36f1bbf52f8169a542b.scope - libcontainer container 682315fbf5857b231e4da92b98b03e2ea4a3861018e9f36f1bbf52f8169a542b. 
Nov 1 10:04:36.340833 containerd[1640]: time="2025-11-01T10:04:36.340770808Z" level=info msg="StartContainer for \"682315fbf5857b231e4da92b98b03e2ea4a3861018e9f36f1bbf52f8169a542b\" returns successfully" Nov 1 10:04:40.431828 kubelet[2775]: E1101 10:04:40.431776 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:40.445417 kubelet[2775]: I1101 10:04:40.445352 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-gsk2n" podStartSLOduration=4.773206635 podStartE2EDuration="6.445329862s" podCreationTimestamp="2025-11-01 10:04:34 +0000 UTC" firstStartedPulling="2025-11-01 10:04:34.592363063 +0000 UTC m=+7.288629112" lastFinishedPulling="2025-11-01 10:04:36.264486291 +0000 UTC m=+8.960752339" observedRunningTime="2025-11-01 10:04:36.428227955 +0000 UTC m=+9.124494003" watchObservedRunningTime="2025-11-01 10:04:40.445329862 +0000 UTC m=+13.141595910" Nov 1 10:04:41.321048 kubelet[2775]: E1101 10:04:41.320998 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:41.431412 kubelet[2775]: E1101 10:04:41.431372 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:41.431611 kubelet[2775]: E1101 10:04:41.431515 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:42.290679 kubelet[2775]: E1101 10:04:42.290604 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Nov 1 10:04:42.365207 sudo[1849]: pam_unix(sudo:session): session closed for user root Nov 1 10:04:42.367066 sshd[1848]: Connection closed by 10.0.0.1 port 34416 Nov 1 10:04:42.367678 sshd-session[1845]: pam_unix(sshd:session): session closed for user core Nov 1 10:04:42.372968 systemd-logind[1618]: Session 7 logged out. Waiting for processes to exit. Nov 1 10:04:42.373280 systemd[1]: sshd@6-10.0.0.55:22-10.0.0.1:34416.service: Deactivated successfully. Nov 1 10:04:42.375616 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 10:04:42.375832 systemd[1]: session-7.scope: Consumed 4.739s CPU time, 220.6M memory peak. Nov 1 10:04:42.378310 systemd-logind[1618]: Removed session 7. Nov 1 10:04:43.811279 update_engine[1619]: I20251101 10:04:43.811163 1619 update_attempter.cc:509] Updating boot flags... Nov 1 10:04:47.866810 systemd[1]: Created slice kubepods-besteffort-pod19e662d6_2275_479a_9723_828a349305c8.slice - libcontainer container kubepods-besteffort-pod19e662d6_2275_479a_9723_828a349305c8.slice. 
Nov 1 10:04:47.897342 kubelet[2775]: I1101 10:04:47.897292 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19e662d6-2275-479a-9723-828a349305c8-tigera-ca-bundle\") pod \"calico-typha-589b668bd7-p4s79\" (UID: \"19e662d6-2275-479a-9723-828a349305c8\") " pod="calico-system/calico-typha-589b668bd7-p4s79" Nov 1 10:04:47.897342 kubelet[2775]: I1101 10:04:47.897343 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrtpk\" (UniqueName: \"kubernetes.io/projected/19e662d6-2275-479a-9723-828a349305c8-kube-api-access-zrtpk\") pod \"calico-typha-589b668bd7-p4s79\" (UID: \"19e662d6-2275-479a-9723-828a349305c8\") " pod="calico-system/calico-typha-589b668bd7-p4s79" Nov 1 10:04:47.897832 kubelet[2775]: I1101 10:04:47.897363 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/19e662d6-2275-479a-9723-828a349305c8-typha-certs\") pod \"calico-typha-589b668bd7-p4s79\" (UID: \"19e662d6-2275-479a-9723-828a349305c8\") " pod="calico-system/calico-typha-589b668bd7-p4s79" Nov 1 10:04:48.047982 systemd[1]: Created slice kubepods-besteffort-poda0c76b83_da06_4ff6_8131_348d826cb37a.slice - libcontainer container kubepods-besteffort-poda0c76b83_da06_4ff6_8131_348d826cb37a.slice. 
Nov 1 10:04:48.099242 kubelet[2775]: I1101 10:04:48.099181 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a0c76b83-da06-4ff6-8131-348d826cb37a-flexvol-driver-host\") pod \"calico-node-bzv5g\" (UID: \"a0c76b83-da06-4ff6-8131-348d826cb37a\") " pod="calico-system/calico-node-bzv5g" Nov 1 10:04:48.099242 kubelet[2775]: I1101 10:04:48.099240 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a0c76b83-da06-4ff6-8131-348d826cb37a-node-certs\") pod \"calico-node-bzv5g\" (UID: \"a0c76b83-da06-4ff6-8131-348d826cb37a\") " pod="calico-system/calico-node-bzv5g" Nov 1 10:04:48.099416 kubelet[2775]: I1101 10:04:48.099266 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a0c76b83-da06-4ff6-8131-348d826cb37a-policysync\") pod \"calico-node-bzv5g\" (UID: \"a0c76b83-da06-4ff6-8131-348d826cb37a\") " pod="calico-system/calico-node-bzv5g" Nov 1 10:04:48.099416 kubelet[2775]: I1101 10:04:48.099338 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a0c76b83-da06-4ff6-8131-348d826cb37a-cni-bin-dir\") pod \"calico-node-bzv5g\" (UID: \"a0c76b83-da06-4ff6-8131-348d826cb37a\") " pod="calico-system/calico-node-bzv5g" Nov 1 10:04:48.099479 kubelet[2775]: I1101 10:04:48.099411 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a0c76b83-da06-4ff6-8131-348d826cb37a-var-run-calico\") pod \"calico-node-bzv5g\" (UID: \"a0c76b83-da06-4ff6-8131-348d826cb37a\") " pod="calico-system/calico-node-bzv5g" Nov 1 10:04:48.099479 kubelet[2775]: I1101 10:04:48.099457 2775 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a0c76b83-da06-4ff6-8131-348d826cb37a-cni-log-dir\") pod \"calico-node-bzv5g\" (UID: \"a0c76b83-da06-4ff6-8131-348d826cb37a\") " pod="calico-system/calico-node-bzv5g" Nov 1 10:04:48.099479 kubelet[2775]: I1101 10:04:48.099474 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a0c76b83-da06-4ff6-8131-348d826cb37a-cni-net-dir\") pod \"calico-node-bzv5g\" (UID: \"a0c76b83-da06-4ff6-8131-348d826cb37a\") " pod="calico-system/calico-node-bzv5g" Nov 1 10:04:48.099555 kubelet[2775]: I1101 10:04:48.099500 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mp9n\" (UniqueName: \"kubernetes.io/projected/a0c76b83-da06-4ff6-8131-348d826cb37a-kube-api-access-6mp9n\") pod \"calico-node-bzv5g\" (UID: \"a0c76b83-da06-4ff6-8131-348d826cb37a\") " pod="calico-system/calico-node-bzv5g" Nov 1 10:04:48.099585 kubelet[2775]: I1101 10:04:48.099527 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0c76b83-da06-4ff6-8131-348d826cb37a-xtables-lock\") pod \"calico-node-bzv5g\" (UID: \"a0c76b83-da06-4ff6-8131-348d826cb37a\") " pod="calico-system/calico-node-bzv5g" Nov 1 10:04:48.099609 kubelet[2775]: I1101 10:04:48.099582 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0c76b83-da06-4ff6-8131-348d826cb37a-lib-modules\") pod \"calico-node-bzv5g\" (UID: \"a0c76b83-da06-4ff6-8131-348d826cb37a\") " pod="calico-system/calico-node-bzv5g" Nov 1 10:04:48.099609 kubelet[2775]: I1101 10:04:48.099603 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0c76b83-da06-4ff6-8131-348d826cb37a-tigera-ca-bundle\") pod \"calico-node-bzv5g\" (UID: \"a0c76b83-da06-4ff6-8131-348d826cb37a\") " pod="calico-system/calico-node-bzv5g" Nov 1 10:04:48.099692 kubelet[2775]: I1101 10:04:48.099652 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a0c76b83-da06-4ff6-8131-348d826cb37a-var-lib-calico\") pod \"calico-node-bzv5g\" (UID: \"a0c76b83-da06-4ff6-8131-348d826cb37a\") " pod="calico-system/calico-node-bzv5g" Nov 1 10:04:48.177289 kubelet[2775]: E1101 10:04:48.177182 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:48.177773 containerd[1640]: time="2025-11-01T10:04:48.177643040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-589b668bd7-p4s79,Uid:19e662d6-2275-479a-9723-828a349305c8,Namespace:calico-system,Attempt:0,}" Nov 1 10:04:48.427088 kubelet[2775]: E1101 10:04:48.426965 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:48.427088 kubelet[2775]: W1101 10:04:48.426992 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:48.427088 kubelet[2775]: E1101 10:04:48.427040 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:48.651499 kubelet[2775]: E1101 10:04:48.651447 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:48.652159 containerd[1640]: time="2025-11-01T10:04:48.652080459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bzv5g,Uid:a0c76b83-da06-4ff6-8131-348d826cb37a,Namespace:calico-system,Attempt:0,}" Nov 1 10:04:49.051285 kubelet[2775]: E1101 10:04:49.051080 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zlp4v" podUID="5e9801c6-fe95-4f67-a365-4280796e7e3e" Nov 1 10:04:49.053634 containerd[1640]: time="2025-11-01T10:04:49.053571628Z" level=info msg="connecting to shim ced5e2949ce7ce7d9108cda242ce898be9ed98f016e629e41ccaf9b94b1eb1a2" address="unix:///run/containerd/s/2eccb8aaa3e10da40fc2d211c8dc5acd53aadcafa5b98db3c99177d503097133" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:04:49.083139 containerd[1640]: time="2025-11-01T10:04:49.083006284Z" level=info msg="connecting to shim 3200e60e559ac1de3ec9bee7e1f131a5e81413d818a785208a118fb049174832" address="unix:///run/containerd/s/8c5eeaff5ddc6888ef3571cc5ad475ef225a273837b5ca86624de4acb9895295" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:04:49.096471 kubelet[2775]: E1101 10:04:49.096398 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.096715 kubelet[2775]: W1101 10:04:49.096524 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.096715 
kubelet[2775]: E1101 10:04:49.096610 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.097232 kubelet[2775]: E1101 10:04:49.097212 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.097232 kubelet[2775]: W1101 10:04:49.097228 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.097318 kubelet[2775]: E1101 10:04:49.097239 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.097635 kubelet[2775]: E1101 10:04:49.097596 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.097635 kubelet[2775]: W1101 10:04:49.097610 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.097635 kubelet[2775]: E1101 10:04:49.097620 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.098255 kubelet[2775]: E1101 10:04:49.098055 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.098255 kubelet[2775]: W1101 10:04:49.098083 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.098255 kubelet[2775]: E1101 10:04:49.098140 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.098537 kubelet[2775]: E1101 10:04:49.098525 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.098652 kubelet[2775]: W1101 10:04:49.098600 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.098652 kubelet[2775]: E1101 10:04:49.098613 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.098896 kubelet[2775]: E1101 10:04:49.098884 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.098956 kubelet[2775]: W1101 10:04:49.098946 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.099025 kubelet[2775]: E1101 10:04:49.099011 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.099260 kubelet[2775]: E1101 10:04:49.099248 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.099325 kubelet[2775]: W1101 10:04:49.099314 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.099397 kubelet[2775]: E1101 10:04:49.099376 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.099669 kubelet[2775]: E1101 10:04:49.099647 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.099669 kubelet[2775]: W1101 10:04:49.099665 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.099734 kubelet[2775]: E1101 10:04:49.099677 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.099906 kubelet[2775]: E1101 10:04:49.099887 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.099906 kubelet[2775]: W1101 10:04:49.099901 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.099963 kubelet[2775]: E1101 10:04:49.099910 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.100087 kubelet[2775]: E1101 10:04:49.100066 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.100087 kubelet[2775]: W1101 10:04:49.100081 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.100087 kubelet[2775]: E1101 10:04:49.100088 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.100308 kubelet[2775]: E1101 10:04:49.100274 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.100308 kubelet[2775]: W1101 10:04:49.100289 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.100308 kubelet[2775]: E1101 10:04:49.100296 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.100472 kubelet[2775]: E1101 10:04:49.100451 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.100472 kubelet[2775]: W1101 10:04:49.100465 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.100472 kubelet[2775]: E1101 10:04:49.100474 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.100645 kubelet[2775]: E1101 10:04:49.100625 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.100645 kubelet[2775]: W1101 10:04:49.100638 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.100645 kubelet[2775]: E1101 10:04:49.100646 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.100796 kubelet[2775]: E1101 10:04:49.100789 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.100822 kubelet[2775]: W1101 10:04:49.100796 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.100822 kubelet[2775]: E1101 10:04:49.100804 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.100980 kubelet[2775]: E1101 10:04:49.100959 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.100980 kubelet[2775]: W1101 10:04:49.100974 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.100980 kubelet[2775]: E1101 10:04:49.100982 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.101174 kubelet[2775]: E1101 10:04:49.101153 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.101174 kubelet[2775]: W1101 10:04:49.101167 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.101174 kubelet[2775]: E1101 10:04:49.101174 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.101347 kubelet[2775]: E1101 10:04:49.101327 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.101347 kubelet[2775]: W1101 10:04:49.101343 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.101412 kubelet[2775]: E1101 10:04:49.101351 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.101521 kubelet[2775]: E1101 10:04:49.101501 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.101521 kubelet[2775]: W1101 10:04:49.101515 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.101521 kubelet[2775]: E1101 10:04:49.101523 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.101909 kubelet[2775]: E1101 10:04:49.101841 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.101909 kubelet[2775]: W1101 10:04:49.101862 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.101909 kubelet[2775]: E1101 10:04:49.101874 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.102389 kubelet[2775]: E1101 10:04:49.102303 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.102389 kubelet[2775]: W1101 10:04:49.102318 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.102389 kubelet[2775]: E1101 10:04:49.102327 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.107892 kubelet[2775]: E1101 10:04:49.107350 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.107892 kubelet[2775]: W1101 10:04:49.107371 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.107892 kubelet[2775]: E1101 10:04:49.107395 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.107892 kubelet[2775]: I1101 10:04:49.107422 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5e9801c6-fe95-4f67-a365-4280796e7e3e-registration-dir\") pod \"csi-node-driver-zlp4v\" (UID: \"5e9801c6-fe95-4f67-a365-4280796e7e3e\") " pod="calico-system/csi-node-driver-zlp4v" Nov 1 10:04:49.107892 kubelet[2775]: E1101 10:04:49.107656 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.107892 kubelet[2775]: W1101 10:04:49.107665 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.107892 kubelet[2775]: E1101 10:04:49.107677 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.107892 kubelet[2775]: I1101 10:04:49.107690 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5e9801c6-fe95-4f67-a365-4280796e7e3e-socket-dir\") pod \"csi-node-driver-zlp4v\" (UID: \"5e9801c6-fe95-4f67-a365-4280796e7e3e\") " pod="calico-system/csi-node-driver-zlp4v" Nov 1 10:04:49.108187 kubelet[2775]: E1101 10:04:49.107912 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.108187 kubelet[2775]: W1101 10:04:49.107921 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.108187 kubelet[2775]: E1101 10:04:49.107939 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.108187 kubelet[2775]: I1101 10:04:49.107952 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh4zp\" (UniqueName: \"kubernetes.io/projected/5e9801c6-fe95-4f67-a365-4280796e7e3e-kube-api-access-dh4zp\") pod \"csi-node-driver-zlp4v\" (UID: \"5e9801c6-fe95-4f67-a365-4280796e7e3e\") " pod="calico-system/csi-node-driver-zlp4v" Nov 1 10:04:49.108271 kubelet[2775]: E1101 10:04:49.108221 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.108271 kubelet[2775]: W1101 10:04:49.108230 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.108271 kubelet[2775]: E1101 10:04:49.108249 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.108271 kubelet[2775]: I1101 10:04:49.108263 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e9801c6-fe95-4f67-a365-4280796e7e3e-kubelet-dir\") pod \"csi-node-driver-zlp4v\" (UID: \"5e9801c6-fe95-4f67-a365-4280796e7e3e\") " pod="calico-system/csi-node-driver-zlp4v" Nov 1 10:04:49.108314 systemd[1]: Started cri-containerd-ced5e2949ce7ce7d9108cda242ce898be9ed98f016e629e41ccaf9b94b1eb1a2.scope - libcontainer container ced5e2949ce7ce7d9108cda242ce898be9ed98f016e629e41ccaf9b94b1eb1a2. 
Nov 1 10:04:49.108627 kubelet[2775]: E1101 10:04:49.108480 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.108627 kubelet[2775]: W1101 10:04:49.108489 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.108627 kubelet[2775]: E1101 10:04:49.108508 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.108627 kubelet[2775]: I1101 10:04:49.108521 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5e9801c6-fe95-4f67-a365-4280796e7e3e-varrun\") pod \"csi-node-driver-zlp4v\" (UID: \"5e9801c6-fe95-4f67-a365-4280796e7e3e\") " pod="calico-system/csi-node-driver-zlp4v" Nov 1 10:04:49.108718 kubelet[2775]: E1101 10:04:49.108707 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.108718 kubelet[2775]: W1101 10:04:49.108716 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.108761 kubelet[2775]: E1101 10:04:49.108735 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.109988 kubelet[2775]: E1101 10:04:49.108970 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.109988 kubelet[2775]: W1101 10:04:49.108978 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.109988 kubelet[2775]: E1101 10:04:49.109884 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.110076 kubelet[2775]: E1101 10:04:49.109998 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.110076 kubelet[2775]: W1101 10:04:49.110006 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.110076 kubelet[2775]: E1101 10:04:49.110047 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.110288 kubelet[2775]: E1101 10:04:49.110224 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.110288 kubelet[2775]: W1101 10:04:49.110233 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.110288 kubelet[2775]: E1101 10:04:49.110274 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.110504 kubelet[2775]: E1101 10:04:49.110485 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.110504 kubelet[2775]: W1101 10:04:49.110498 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.110620 kubelet[2775]: E1101 10:04:49.110602 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.110861 kubelet[2775]: E1101 10:04:49.110833 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.110861 kubelet[2775]: W1101 10:04:49.110847 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.110988 kubelet[2775]: E1101 10:04:49.110961 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.111265 kubelet[2775]: E1101 10:04:49.111246 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.111265 kubelet[2775]: W1101 10:04:49.111259 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.111361 kubelet[2775]: E1101 10:04:49.111269 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.111635 kubelet[2775]: E1101 10:04:49.111619 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.111635 kubelet[2775]: W1101 10:04:49.111632 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.111690 kubelet[2775]: E1101 10:04:49.111642 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.112472 kubelet[2775]: E1101 10:04:49.112429 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.112472 kubelet[2775]: W1101 10:04:49.112448 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.112472 kubelet[2775]: E1101 10:04:49.112457 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.112899 kubelet[2775]: E1101 10:04:49.112880 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.112899 kubelet[2775]: W1101 10:04:49.112893 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.112899 kubelet[2775]: E1101 10:04:49.112903 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.138258 systemd[1]: Started cri-containerd-3200e60e559ac1de3ec9bee7e1f131a5e81413d818a785208a118fb049174832.scope - libcontainer container 3200e60e559ac1de3ec9bee7e1f131a5e81413d818a785208a118fb049174832. Nov 1 10:04:49.187399 containerd[1640]: time="2025-11-01T10:04:49.187305742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-589b668bd7-p4s79,Uid:19e662d6-2275-479a-9723-828a349305c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"ced5e2949ce7ce7d9108cda242ce898be9ed98f016e629e41ccaf9b94b1eb1a2\"" Nov 1 10:04:49.189696 kubelet[2775]: E1101 10:04:49.189044 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:49.191279 containerd[1640]: time="2025-11-01T10:04:49.191232446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 10:04:49.192299 containerd[1640]: time="2025-11-01T10:04:49.192264324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bzv5g,Uid:a0c76b83-da06-4ff6-8131-348d826cb37a,Namespace:calico-system,Attempt:0,} returns sandbox id \"3200e60e559ac1de3ec9bee7e1f131a5e81413d818a785208a118fb049174832\"" Nov 1 10:04:49.193658 
kubelet[2775]: E1101 10:04:49.193629 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:49.209907 kubelet[2775]: E1101 10:04:49.209865 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.209907 kubelet[2775]: W1101 10:04:49.209890 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.209907 kubelet[2775]: E1101 10:04:49.209915 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.210187 kubelet[2775]: E1101 10:04:49.210171 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.210187 kubelet[2775]: W1101 10:04:49.210182 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.210271 kubelet[2775]: E1101 10:04:49.210196 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.210550 kubelet[2775]: E1101 10:04:49.210523 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.210592 kubelet[2775]: W1101 10:04:49.210548 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.210592 kubelet[2775]: E1101 10:04:49.210577 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.210802 kubelet[2775]: E1101 10:04:49.210786 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.210802 kubelet[2775]: W1101 10:04:49.210796 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.210856 kubelet[2775]: E1101 10:04:49.210811 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.211045 kubelet[2775]: E1101 10:04:49.211021 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.211045 kubelet[2775]: W1101 10:04:49.211032 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.211045 kubelet[2775]: E1101 10:04:49.211046 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.211304 kubelet[2775]: E1101 10:04:49.211289 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.211304 kubelet[2775]: W1101 10:04:49.211299 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.211356 kubelet[2775]: E1101 10:04:49.211314 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.212097 kubelet[2775]: E1101 10:04:49.211595 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.212097 kubelet[2775]: W1101 10:04:49.211634 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.212097 kubelet[2775]: E1101 10:04:49.211672 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.212097 kubelet[2775]: E1101 10:04:49.211980 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.212097 kubelet[2775]: W1101 10:04:49.211990 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.212097 kubelet[2775]: E1101 10:04:49.212006 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.212419 kubelet[2775]: E1101 10:04:49.212388 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.212419 kubelet[2775]: W1101 10:04:49.212400 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.212419 kubelet[2775]: E1101 10:04:49.212413 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.212647 kubelet[2775]: E1101 10:04:49.212575 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.212647 kubelet[2775]: W1101 10:04:49.212583 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.212647 kubelet[2775]: E1101 10:04:49.212595 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.212910 kubelet[2775]: E1101 10:04:49.212891 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.212910 kubelet[2775]: W1101 10:04:49.212905 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.212910 kubelet[2775]: E1101 10:04:49.212922 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.213234 kubelet[2775]: E1101 10:04:49.213214 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.213234 kubelet[2775]: W1101 10:04:49.213228 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.213418 kubelet[2775]: E1101 10:04:49.213247 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.213604 kubelet[2775]: E1101 10:04:49.213587 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.213604 kubelet[2775]: W1101 10:04:49.213599 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.213910 kubelet[2775]: E1101 10:04:49.213653 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.213910 kubelet[2775]: E1101 10:04:49.213860 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.213910 kubelet[2775]: W1101 10:04:49.213870 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.214010 kubelet[2775]: E1101 10:04:49.213928 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.214093 kubelet[2775]: E1101 10:04:49.214076 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.214093 kubelet[2775]: W1101 10:04:49.214088 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.214189 kubelet[2775]: E1101 10:04:49.214125 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.214321 kubelet[2775]: E1101 10:04:49.214306 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.214321 kubelet[2775]: W1101 10:04:49.214317 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.214495 kubelet[2775]: E1101 10:04:49.214331 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.214529 kubelet[2775]: E1101 10:04:49.214513 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.214529 kubelet[2775]: W1101 10:04:49.214524 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.214575 kubelet[2775]: E1101 10:04:49.214532 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.214777 kubelet[2775]: E1101 10:04:49.214761 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.214777 kubelet[2775]: W1101 10:04:49.214772 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.214834 kubelet[2775]: E1101 10:04:49.214785 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.214992 kubelet[2775]: E1101 10:04:49.214978 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.214992 kubelet[2775]: W1101 10:04:49.214988 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.215070 kubelet[2775]: E1101 10:04:49.215001 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.215266 kubelet[2775]: E1101 10:04:49.215250 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.215266 kubelet[2775]: W1101 10:04:49.215261 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.215368 kubelet[2775]: E1101 10:04:49.215324 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.215597 kubelet[2775]: E1101 10:04:49.215575 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.215597 kubelet[2775]: W1101 10:04:49.215592 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.215696 kubelet[2775]: E1101 10:04:49.215609 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.215829 kubelet[2775]: E1101 10:04:49.215814 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.215829 kubelet[2775]: W1101 10:04:49.215825 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.215878 kubelet[2775]: E1101 10:04:49.215833 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.216047 kubelet[2775]: E1101 10:04:49.216032 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.216047 kubelet[2775]: W1101 10:04:49.216043 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.216214 kubelet[2775]: E1101 10:04:49.216144 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.216247 kubelet[2775]: E1101 10:04:49.216238 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.216247 kubelet[2775]: W1101 10:04:49.216246 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.216287 kubelet[2775]: E1101 10:04:49.216254 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:49.216713 kubelet[2775]: E1101 10:04:49.216697 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.216713 kubelet[2775]: W1101 10:04:49.216708 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.216781 kubelet[2775]: E1101 10:04:49.216717 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:49.220485 kubelet[2775]: E1101 10:04:49.220456 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:49.220485 kubelet[2775]: W1101 10:04:49.220476 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:49.220549 kubelet[2775]: E1101 10:04:49.220490 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:50.380783 kubelet[2775]: E1101 10:04:50.380710 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zlp4v" podUID="5e9801c6-fe95-4f67-a365-4280796e7e3e" Nov 1 10:04:50.442901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount890001171.mount: Deactivated successfully. 
Nov 1 10:04:50.985500 containerd[1640]: time="2025-11-01T10:04:50.985433026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:51.014827 containerd[1640]: time="2025-11-01T10:04:51.014744064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Nov 1 10:04:51.056310 containerd[1640]: time="2025-11-01T10:04:51.056246430Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:51.058483 containerd[1640]: time="2025-11-01T10:04:51.058456751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:51.059039 containerd[1640]: time="2025-11-01T10:04:51.058992597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.867715175s" Nov 1 10:04:51.059098 containerd[1640]: time="2025-11-01T10:04:51.059038463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 10:04:51.060481 containerd[1640]: time="2025-11-01T10:04:51.060423102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 10:04:51.069755 containerd[1640]: time="2025-11-01T10:04:51.069619580Z" level=info msg="CreateContainer within sandbox \"ced5e2949ce7ce7d9108cda242ce898be9ed98f016e629e41ccaf9b94b1eb1a2\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 10:04:51.077754 containerd[1640]: time="2025-11-01T10:04:51.077699851Z" level=info msg="Container 99e2bb8a1fc6a2e93e444883de5870dcd2244706d759ece39ca481ec3988ae61: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:04:51.085886 containerd[1640]: time="2025-11-01T10:04:51.085837120Z" level=info msg="CreateContainer within sandbox \"ced5e2949ce7ce7d9108cda242ce898be9ed98f016e629e41ccaf9b94b1eb1a2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"99e2bb8a1fc6a2e93e444883de5870dcd2244706d759ece39ca481ec3988ae61\"" Nov 1 10:04:51.086583 containerd[1640]: time="2025-11-01T10:04:51.086547164Z" level=info msg="StartContainer for \"99e2bb8a1fc6a2e93e444883de5870dcd2244706d759ece39ca481ec3988ae61\"" Nov 1 10:04:51.088133 containerd[1640]: time="2025-11-01T10:04:51.087755262Z" level=info msg="connecting to shim 99e2bb8a1fc6a2e93e444883de5870dcd2244706d759ece39ca481ec3988ae61" address="unix:///run/containerd/s/2eccb8aaa3e10da40fc2d211c8dc5acd53aadcafa5b98db3c99177d503097133" protocol=ttrpc version=3 Nov 1 10:04:51.108291 systemd[1]: Started cri-containerd-99e2bb8a1fc6a2e93e444883de5870dcd2244706d759ece39ca481ec3988ae61.scope - libcontainer container 99e2bb8a1fc6a2e93e444883de5870dcd2244706d759ece39ca481ec3988ae61. 
Nov 1 10:04:51.162037 containerd[1640]: time="2025-11-01T10:04:51.161991421Z" level=info msg="StartContainer for \"99e2bb8a1fc6a2e93e444883de5870dcd2244706d759ece39ca481ec3988ae61\" returns successfully" Nov 1 10:04:51.457753 kubelet[2775]: E1101 10:04:51.457713 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:51.466835 kubelet[2775]: I1101 10:04:51.466737 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-589b668bd7-p4s79" podStartSLOduration=2.59723585 podStartE2EDuration="4.466720038s" podCreationTimestamp="2025-11-01 10:04:47 +0000 UTC" firstStartedPulling="2025-11-01 10:04:49.19068054 +0000 UTC m=+21.886946588" lastFinishedPulling="2025-11-01 10:04:51.060164728 +0000 UTC m=+23.756430776" observedRunningTime="2025-11-01 10:04:51.466490206 +0000 UTC m=+24.162756254" watchObservedRunningTime="2025-11-01 10:04:51.466720038 +0000 UTC m=+24.162986086" Nov 1 10:04:51.518344 kubelet[2775]: E1101 10:04:51.518288 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.518344 kubelet[2775]: W1101 10:04:51.518324 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.518344 kubelet[2775]: E1101 10:04:51.518350 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:51.518550 kubelet[2775]: E1101 10:04:51.518532 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.518550 kubelet[2775]: W1101 10:04:51.518542 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.518550 kubelet[2775]: E1101 10:04:51.518551 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:51.518784 kubelet[2775]: E1101 10:04:51.518757 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.518784 kubelet[2775]: W1101 10:04:51.518769 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.518784 kubelet[2775]: E1101 10:04:51.518777 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:51.519050 kubelet[2775]: E1101 10:04:51.519030 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.519050 kubelet[2775]: W1101 10:04:51.519041 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.519050 kubelet[2775]: E1101 10:04:51.519049 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:51.519272 kubelet[2775]: E1101 10:04:51.519252 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.519272 kubelet[2775]: W1101 10:04:51.519263 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.519272 kubelet[2775]: E1101 10:04:51.519270 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:51.519452 kubelet[2775]: E1101 10:04:51.519435 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.519452 kubelet[2775]: W1101 10:04:51.519444 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.519452 kubelet[2775]: E1101 10:04:51.519451 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:51.519604 kubelet[2775]: E1101 10:04:51.519588 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.519604 kubelet[2775]: W1101 10:04:51.519598 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.519604 kubelet[2775]: E1101 10:04:51.519605 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:51.519752 kubelet[2775]: E1101 10:04:51.519736 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.519752 kubelet[2775]: W1101 10:04:51.519745 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.519752 kubelet[2775]: E1101 10:04:51.519752 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:51.519906 kubelet[2775]: E1101 10:04:51.519889 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.519906 kubelet[2775]: W1101 10:04:51.519898 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.519952 kubelet[2775]: E1101 10:04:51.519907 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:51.520057 kubelet[2775]: E1101 10:04:51.520040 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.520057 kubelet[2775]: W1101 10:04:51.520049 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.520128 kubelet[2775]: E1101 10:04:51.520056 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:51.520220 kubelet[2775]: E1101 10:04:51.520204 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.520220 kubelet[2775]: W1101 10:04:51.520213 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.520220 kubelet[2775]: E1101 10:04:51.520221 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:51.520383 kubelet[2775]: E1101 10:04:51.520367 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.520383 kubelet[2775]: W1101 10:04:51.520376 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.520441 kubelet[2775]: E1101 10:04:51.520385 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:51.520562 kubelet[2775]: E1101 10:04:51.520546 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.520562 kubelet[2775]: W1101 10:04:51.520555 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.520615 kubelet[2775]: E1101 10:04:51.520564 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:51.520741 kubelet[2775]: E1101 10:04:51.520725 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.520741 kubelet[2775]: W1101 10:04:51.520734 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.520787 kubelet[2775]: E1101 10:04:51.520742 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:51.520899 kubelet[2775]: E1101 10:04:51.520883 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.520899 kubelet[2775]: W1101 10:04:51.520892 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.520947 kubelet[2775]: E1101 10:04:51.520900 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:51.529644 kubelet[2775]: E1101 10:04:51.529576 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.529644 kubelet[2775]: W1101 10:04:51.529611 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.529880 kubelet[2775]: E1101 10:04:51.529664 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:51.530016 kubelet[2775]: E1101 10:04:51.529979 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.530016 kubelet[2775]: W1101 10:04:51.529997 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.530016 kubelet[2775]: E1101 10:04:51.530012 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:51.530248 kubelet[2775]: E1101 10:04:51.530228 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.530248 kubelet[2775]: W1101 10:04:51.530240 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.530352 kubelet[2775]: E1101 10:04:51.530253 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:51.530453 kubelet[2775]: E1101 10:04:51.530431 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.530453 kubelet[2775]: W1101 10:04:51.530448 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.530534 kubelet[2775]: E1101 10:04:51.530466 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:51.530723 kubelet[2775]: E1101 10:04:51.530697 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.530723 kubelet[2775]: W1101 10:04:51.530713 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.530818 kubelet[2775]: E1101 10:04:51.530735 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:51.530961 kubelet[2775]: E1101 10:04:51.530941 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.530961 kubelet[2775]: W1101 10:04:51.530953 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.531043 kubelet[2775]: E1101 10:04:51.530970 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:51.531308 kubelet[2775]: E1101 10:04:51.531279 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.531308 kubelet[2775]: W1101 10:04:51.531293 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.531423 kubelet[2775]: E1101 10:04:51.531311 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:51.531547 kubelet[2775]: E1101 10:04:51.531525 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.531547 kubelet[2775]: W1101 10:04:51.531538 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.531618 kubelet[2775]: E1101 10:04:51.531551 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:51.531762 kubelet[2775]: E1101 10:04:51.531739 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.531762 kubelet[2775]: W1101 10:04:51.531756 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.531839 kubelet[2775]: E1101 10:04:51.531776 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:51.532011 kubelet[2775]: E1101 10:04:51.531992 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.532011 kubelet[2775]: W1101 10:04:51.532004 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.532078 kubelet[2775]: E1101 10:04:51.532017 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:51.532273 kubelet[2775]: E1101 10:04:51.532247 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.532273 kubelet[2775]: W1101 10:04:51.532263 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.532362 kubelet[2775]: E1101 10:04:51.532279 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:51.532557 kubelet[2775]: E1101 10:04:51.532532 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.532557 kubelet[2775]: W1101 10:04:51.532546 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.532647 kubelet[2775]: E1101 10:04:51.532563 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:51.532883 kubelet[2775]: E1101 10:04:51.532859 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.532883 kubelet[2775]: W1101 10:04:51.532878 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.532968 kubelet[2775]: E1101 10:04:51.532901 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:51.533157 kubelet[2775]: E1101 10:04:51.533140 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.533157 kubelet[2775]: W1101 10:04:51.533152 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.533226 kubelet[2775]: E1101 10:04:51.533185 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:51.533399 kubelet[2775]: E1101 10:04:51.533381 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.533399 kubelet[2775]: W1101 10:04:51.533393 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.533485 kubelet[2775]: E1101 10:04:51.533429 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:51.533626 kubelet[2775]: E1101 10:04:51.533611 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.533626 kubelet[2775]: W1101 10:04:51.533622 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.533706 kubelet[2775]: E1101 10:04:51.533639 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:51.533891 kubelet[2775]: E1101 10:04:51.533869 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.533891 kubelet[2775]: W1101 10:04:51.533885 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.533962 kubelet[2775]: E1101 10:04:51.533895 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:04:51.534227 kubelet[2775]: E1101 10:04:51.534208 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:04:51.534227 kubelet[2775]: W1101 10:04:51.534219 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:04:51.534227 kubelet[2775]: E1101 10:04:51.534227 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:04:52.373733 containerd[1640]: time="2025-11-01T10:04:52.373668150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:52.374511 containerd[1640]: time="2025-11-01T10:04:52.374460557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:52.375666 containerd[1640]: time="2025-11-01T10:04:52.375636375Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:52.377493 containerd[1640]: time="2025-11-01T10:04:52.377440052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:52.378011 containerd[1640]: time="2025-11-01T10:04:52.377940110Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.317479127s" Nov 1 10:04:52.378011 containerd[1640]: time="2025-11-01T10:04:52.377984894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 10:04:52.380147 containerd[1640]: time="2025-11-01T10:04:52.380098222Z" level=info msg="CreateContainer within sandbox \"3200e60e559ac1de3ec9bee7e1f131a5e81413d818a785208a118fb049174832\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 10:04:52.381426 kubelet[2775]: E1101 10:04:52.381389 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zlp4v" podUID="5e9801c6-fe95-4f67-a365-4280796e7e3e" Nov 1 10:04:52.389319 containerd[1640]: time="2025-11-01T10:04:52.389276835Z" level=info msg="Container 6e5cac88125dcf37df77e3b600bdf4eaf353074a7ec3284f80b0f3936f01131c: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:04:52.397341 containerd[1640]: time="2025-11-01T10:04:52.397277626Z" level=info msg="CreateContainer within sandbox \"3200e60e559ac1de3ec9bee7e1f131a5e81413d818a785208a118fb049174832\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6e5cac88125dcf37df77e3b600bdf4eaf353074a7ec3284f80b0f3936f01131c\"" Nov 1 10:04:52.397832 containerd[1640]: time="2025-11-01T10:04:52.397798784Z" level=info msg="StartContainer for \"6e5cac88125dcf37df77e3b600bdf4eaf353074a7ec3284f80b0f3936f01131c\"" Nov 1 10:04:52.399258 containerd[1640]: time="2025-11-01T10:04:52.399230162Z" level=info msg="connecting to shim 6e5cac88125dcf37df77e3b600bdf4eaf353074a7ec3284f80b0f3936f01131c" address="unix:///run/containerd/s/8c5eeaff5ddc6888ef3571cc5ad475ef225a273837b5ca86624de4acb9895295" protocol=ttrpc version=3 Nov 1 10:04:52.424347 systemd[1]: Started cri-containerd-6e5cac88125dcf37df77e3b600bdf4eaf353074a7ec3284f80b0f3936f01131c.scope - libcontainer container 6e5cac88125dcf37df77e3b600bdf4eaf353074a7ec3284f80b0f3936f01131c. 
Nov 1 10:04:52.461512 kubelet[2775]: I1101 10:04:52.461449 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 10:04:52.462409 kubelet[2775]: E1101 10:04:52.461777 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:52.472896 containerd[1640]: time="2025-11-01T10:04:52.472725188Z" level=info msg="StartContainer for \"6e5cac88125dcf37df77e3b600bdf4eaf353074a7ec3284f80b0f3936f01131c\" returns successfully" Nov 1 10:04:52.485478 systemd[1]: cri-containerd-6e5cac88125dcf37df77e3b600bdf4eaf353074a7ec3284f80b0f3936f01131c.scope: Deactivated successfully. Nov 1 10:04:52.487390 containerd[1640]: time="2025-11-01T10:04:52.487332453Z" level=info msg="received exit event container_id:\"6e5cac88125dcf37df77e3b600bdf4eaf353074a7ec3284f80b0f3936f01131c\" id:\"6e5cac88125dcf37df77e3b600bdf4eaf353074a7ec3284f80b0f3936f01131c\" pid:3494 exited_at:{seconds:1761991492 nanos:486790786}" Nov 1 10:04:52.513331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e5cac88125dcf37df77e3b600bdf4eaf353074a7ec3284f80b0f3936f01131c-rootfs.mount: Deactivated successfully. 
Nov 1 10:04:53.465096 kubelet[2775]: E1101 10:04:53.465034 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:53.465759 containerd[1640]: time="2025-11-01T10:04:53.465587093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 10:04:54.380801 kubelet[2775]: E1101 10:04:54.380746 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zlp4v" podUID="5e9801c6-fe95-4f67-a365-4280796e7e3e" Nov 1 10:04:56.038635 containerd[1640]: time="2025-11-01T10:04:56.038558365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:56.039663 containerd[1640]: time="2025-11-01T10:04:56.039625368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Nov 1 10:04:56.040927 containerd[1640]: time="2025-11-01T10:04:56.040890042Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:56.043098 containerd[1640]: time="2025-11-01T10:04:56.043077228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:04:56.043693 containerd[1640]: time="2025-11-01T10:04:56.043654751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.578036879s" Nov 1 10:04:56.043693 containerd[1640]: time="2025-11-01T10:04:56.043684046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 10:04:56.045833 containerd[1640]: time="2025-11-01T10:04:56.045795880Z" level=info msg="CreateContainer within sandbox \"3200e60e559ac1de3ec9bee7e1f131a5e81413d818a785208a118fb049174832\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 10:04:56.058489 containerd[1640]: time="2025-11-01T10:04:56.058441205Z" level=info msg="Container 5d2f3570c24c7a248de656cf31bbb293490c90833d751e0dbca48203129a74fb: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:04:56.066474 containerd[1640]: time="2025-11-01T10:04:56.066432545Z" level=info msg="CreateContainer within sandbox \"3200e60e559ac1de3ec9bee7e1f131a5e81413d818a785208a118fb049174832\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5d2f3570c24c7a248de656cf31bbb293490c90833d751e0dbca48203129a74fb\"" Nov 1 10:04:56.069460 containerd[1640]: time="2025-11-01T10:04:56.069410715Z" level=info msg="StartContainer for \"5d2f3570c24c7a248de656cf31bbb293490c90833d751e0dbca48203129a74fb\"" Nov 1 10:04:56.071010 containerd[1640]: time="2025-11-01T10:04:56.070984039Z" level=info msg="connecting to shim 5d2f3570c24c7a248de656cf31bbb293490c90833d751e0dbca48203129a74fb" address="unix:///run/containerd/s/8c5eeaff5ddc6888ef3571cc5ad475ef225a273837b5ca86624de4acb9895295" protocol=ttrpc version=3 Nov 1 10:04:56.091302 systemd[1]: Started cri-containerd-5d2f3570c24c7a248de656cf31bbb293490c90833d751e0dbca48203129a74fb.scope - libcontainer container 5d2f3570c24c7a248de656cf31bbb293490c90833d751e0dbca48203129a74fb. 
Nov 1 10:04:56.183940 containerd[1640]: time="2025-11-01T10:04:56.183718812Z" level=info msg="StartContainer for \"5d2f3570c24c7a248de656cf31bbb293490c90833d751e0dbca48203129a74fb\" returns successfully" Nov 1 10:04:56.381447 kubelet[2775]: E1101 10:04:56.381380 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zlp4v" podUID="5e9801c6-fe95-4f67-a365-4280796e7e3e" Nov 1 10:04:56.474098 kubelet[2775]: E1101 10:04:56.474058 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:57.476962 kubelet[2775]: E1101 10:04:57.476897 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:57.756404 systemd[1]: cri-containerd-5d2f3570c24c7a248de656cf31bbb293490c90833d751e0dbca48203129a74fb.scope: Deactivated successfully. Nov 1 10:04:57.758218 systemd[1]: cri-containerd-5d2f3570c24c7a248de656cf31bbb293490c90833d751e0dbca48203129a74fb.scope: Consumed 631ms CPU time, 176.4M memory peak, 3.8M read from disk, 171.3M written to disk. Nov 1 10:04:57.800455 containerd[1640]: time="2025-11-01T10:04:57.800384538Z" level=info msg="received exit event container_id:\"5d2f3570c24c7a248de656cf31bbb293490c90833d751e0dbca48203129a74fb\" id:\"5d2f3570c24c7a248de656cf31bbb293490c90833d751e0dbca48203129a74fb\" pid:3554 exited_at:{seconds:1761991497 nanos:758796711}" Nov 1 10:04:57.825676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d2f3570c24c7a248de656cf31bbb293490c90833d751e0dbca48203129a74fb-rootfs.mount: Deactivated successfully. 
Nov 1 10:04:57.856217 kubelet[2775]: I1101 10:04:57.856170 2775 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 10:04:57.902197 systemd[1]: Created slice kubepods-burstable-pod90c5081d_f937_438e_bb8a_7ad343c3e65b.slice - libcontainer container kubepods-burstable-pod90c5081d_f937_438e_bb8a_7ad343c3e65b.slice. Nov 1 10:04:57.910753 systemd[1]: Created slice kubepods-besteffort-pod497638cc_4034_4ffe_9443_48cd7ad72cdc.slice - libcontainer container kubepods-besteffort-pod497638cc_4034_4ffe_9443_48cd7ad72cdc.slice. Nov 1 10:04:57.920858 systemd[1]: Created slice kubepods-besteffort-pod11cb09fe_0906_4aa9_80bd_422fc601a30c.slice - libcontainer container kubepods-besteffort-pod11cb09fe_0906_4aa9_80bd_422fc601a30c.slice. Nov 1 10:04:57.926250 systemd[1]: Created slice kubepods-besteffort-podfc376135_15c2_4563_9e6f_3663c5522932.slice - libcontainer container kubepods-besteffort-podfc376135_15c2_4563_9e6f_3663c5522932.slice. Nov 1 10:04:57.934630 systemd[1]: Created slice kubepods-besteffort-podcb93b629_1d38_403a_a17d_82160a57c839.slice - libcontainer container kubepods-besteffort-podcb93b629_1d38_403a_a17d_82160a57c839.slice. Nov 1 10:04:57.941759 systemd[1]: Created slice kubepods-burstable-podc66a3ca9_ff79_4bff_ac6d_52b470a1658e.slice - libcontainer container kubepods-burstable-podc66a3ca9_ff79_4bff_ac6d_52b470a1658e.slice. Nov 1 10:04:57.947040 systemd[1]: Created slice kubepods-besteffort-pod29af33ad_9abc_4ff3_b520_e3177a680c27.slice - libcontainer container kubepods-besteffort-pod29af33ad_9abc_4ff3_b520_e3177a680c27.slice. 
Nov 1 10:04:57.974406 kubelet[2775]: I1101 10:04:57.974339 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfvvb\" (UniqueName: \"kubernetes.io/projected/fc376135-15c2-4563-9e6f-3663c5522932-kube-api-access-xfvvb\") pod \"calico-apiserver-56f8446f94-kclf6\" (UID: \"fc376135-15c2-4563-9e6f-3663c5522932\") " pod="calico-apiserver/calico-apiserver-56f8446f94-kclf6" Nov 1 10:04:57.974406 kubelet[2775]: I1101 10:04:57.974381 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl2p9\" (UniqueName: \"kubernetes.io/projected/11cb09fe-0906-4aa9-80bd-422fc601a30c-kube-api-access-gl2p9\") pod \"calico-kube-controllers-64b6f54dbf-5vmmj\" (UID: \"11cb09fe-0906-4aa9-80bd-422fc601a30c\") " pod="calico-system/calico-kube-controllers-64b6f54dbf-5vmmj" Nov 1 10:04:57.974406 kubelet[2775]: I1101 10:04:57.974402 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg4sc\" (UniqueName: \"kubernetes.io/projected/497638cc-4034-4ffe-9443-48cd7ad72cdc-kube-api-access-hg4sc\") pod \"goldmane-666569f655-5mnnb\" (UID: \"497638cc-4034-4ffe-9443-48cd7ad72cdc\") " pod="calico-system/goldmane-666569f655-5mnnb" Nov 1 10:04:57.974406 kubelet[2775]: I1101 10:04:57.974420 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fc376135-15c2-4563-9e6f-3663c5522932-calico-apiserver-certs\") pod \"calico-apiserver-56f8446f94-kclf6\" (UID: \"fc376135-15c2-4563-9e6f-3663c5522932\") " pod="calico-apiserver/calico-apiserver-56f8446f94-kclf6" Nov 1 10:04:57.974716 kubelet[2775]: I1101 10:04:57.974439 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89b9k\" (UniqueName: 
\"kubernetes.io/projected/29af33ad-9abc-4ff3-b520-e3177a680c27-kube-api-access-89b9k\") pod \"calico-apiserver-56f8446f94-wbw7g\" (UID: \"29af33ad-9abc-4ff3-b520-e3177a680c27\") " pod="calico-apiserver/calico-apiserver-56f8446f94-wbw7g" Nov 1 10:04:57.974716 kubelet[2775]: I1101 10:04:57.974463 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/497638cc-4034-4ffe-9443-48cd7ad72cdc-config\") pod \"goldmane-666569f655-5mnnb\" (UID: \"497638cc-4034-4ffe-9443-48cd7ad72cdc\") " pod="calico-system/goldmane-666569f655-5mnnb" Nov 1 10:04:57.974716 kubelet[2775]: I1101 10:04:57.974542 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb93b629-1d38-403a-a17d-82160a57c839-whisker-ca-bundle\") pod \"whisker-54f854d67b-mmp4b\" (UID: \"cb93b629-1d38-403a-a17d-82160a57c839\") " pod="calico-system/whisker-54f854d67b-mmp4b" Nov 1 10:04:57.974716 kubelet[2775]: I1101 10:04:57.974584 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90c5081d-f937-438e-bb8a-7ad343c3e65b-config-volume\") pod \"coredns-668d6bf9bc-lbx5t\" (UID: \"90c5081d-f937-438e-bb8a-7ad343c3e65b\") " pod="kube-system/coredns-668d6bf9bc-lbx5t" Nov 1 10:04:57.974716 kubelet[2775]: I1101 10:04:57.974603 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plpvb\" (UniqueName: \"kubernetes.io/projected/c66a3ca9-ff79-4bff-ac6d-52b470a1658e-kube-api-access-plpvb\") pod \"coredns-668d6bf9bc-8599x\" (UID: \"c66a3ca9-ff79-4bff-ac6d-52b470a1658e\") " pod="kube-system/coredns-668d6bf9bc-8599x" Nov 1 10:04:57.974853 kubelet[2775]: I1101 10:04:57.974665 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb93b629-1d38-403a-a17d-82160a57c839-whisker-backend-key-pair\") pod \"whisker-54f854d67b-mmp4b\" (UID: \"cb93b629-1d38-403a-a17d-82160a57c839\") " pod="calico-system/whisker-54f854d67b-mmp4b" Nov 1 10:04:57.974853 kubelet[2775]: I1101 10:04:57.974721 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx8tb\" (UniqueName: \"kubernetes.io/projected/cb93b629-1d38-403a-a17d-82160a57c839-kube-api-access-rx8tb\") pod \"whisker-54f854d67b-mmp4b\" (UID: \"cb93b629-1d38-403a-a17d-82160a57c839\") " pod="calico-system/whisker-54f854d67b-mmp4b" Nov 1 10:04:57.974853 kubelet[2775]: I1101 10:04:57.974779 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/29af33ad-9abc-4ff3-b520-e3177a680c27-calico-apiserver-certs\") pod \"calico-apiserver-56f8446f94-wbw7g\" (UID: \"29af33ad-9abc-4ff3-b520-e3177a680c27\") " pod="calico-apiserver/calico-apiserver-56f8446f94-wbw7g" Nov 1 10:04:57.974972 kubelet[2775]: I1101 10:04:57.974888 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/497638cc-4034-4ffe-9443-48cd7ad72cdc-goldmane-key-pair\") pod \"goldmane-666569f655-5mnnb\" (UID: \"497638cc-4034-4ffe-9443-48cd7ad72cdc\") " pod="calico-system/goldmane-666569f655-5mnnb" Nov 1 10:04:57.975016 kubelet[2775]: I1101 10:04:57.974977 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/497638cc-4034-4ffe-9443-48cd7ad72cdc-goldmane-ca-bundle\") pod \"goldmane-666569f655-5mnnb\" (UID: \"497638cc-4034-4ffe-9443-48cd7ad72cdc\") " pod="calico-system/goldmane-666569f655-5mnnb" Nov 1 10:04:57.975043 kubelet[2775]: I1101 10:04:57.975020 2775 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11cb09fe-0906-4aa9-80bd-422fc601a30c-tigera-ca-bundle\") pod \"calico-kube-controllers-64b6f54dbf-5vmmj\" (UID: \"11cb09fe-0906-4aa9-80bd-422fc601a30c\") " pod="calico-system/calico-kube-controllers-64b6f54dbf-5vmmj" Nov 1 10:04:57.975078 kubelet[2775]: I1101 10:04:57.975040 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c66a3ca9-ff79-4bff-ac6d-52b470a1658e-config-volume\") pod \"coredns-668d6bf9bc-8599x\" (UID: \"c66a3ca9-ff79-4bff-ac6d-52b470a1658e\") " pod="kube-system/coredns-668d6bf9bc-8599x" Nov 1 10:04:57.975140 kubelet[2775]: I1101 10:04:57.975074 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwtgv\" (UniqueName: \"kubernetes.io/projected/90c5081d-f937-438e-bb8a-7ad343c3e65b-kube-api-access-mwtgv\") pod \"coredns-668d6bf9bc-lbx5t\" (UID: \"90c5081d-f937-438e-bb8a-7ad343c3e65b\") " pod="kube-system/coredns-668d6bf9bc-lbx5t" Nov 1 10:04:58.210896 kubelet[2775]: E1101 10:04:58.210837 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:58.212165 containerd[1640]: time="2025-11-01T10:04:58.211819570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lbx5t,Uid:90c5081d-f937-438e-bb8a-7ad343c3e65b,Namespace:kube-system,Attempt:0,}" Nov 1 10:04:58.218578 containerd[1640]: time="2025-11-01T10:04:58.218556795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5mnnb,Uid:497638cc-4034-4ffe-9443-48cd7ad72cdc,Namespace:calico-system,Attempt:0,}" Nov 1 10:04:58.224624 containerd[1640]: time="2025-11-01T10:04:58.224575561Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b6f54dbf-5vmmj,Uid:11cb09fe-0906-4aa9-80bd-422fc601a30c,Namespace:calico-system,Attempt:0,}" Nov 1 10:04:58.231213 containerd[1640]: time="2025-11-01T10:04:58.231170388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f8446f94-kclf6,Uid:fc376135-15c2-4563-9e6f-3663c5522932,Namespace:calico-apiserver,Attempt:0,}" Nov 1 10:04:58.240211 containerd[1640]: time="2025-11-01T10:04:58.240180267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54f854d67b-mmp4b,Uid:cb93b629-1d38-403a-a17d-82160a57c839,Namespace:calico-system,Attempt:0,}" Nov 1 10:04:58.244689 kubelet[2775]: E1101 10:04:58.244644 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:58.245281 containerd[1640]: time="2025-11-01T10:04:58.245235795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8599x,Uid:c66a3ca9-ff79-4bff-ac6d-52b470a1658e,Namespace:kube-system,Attempt:0,}" Nov 1 10:04:58.250053 containerd[1640]: time="2025-11-01T10:04:58.250021869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f8446f94-wbw7g,Uid:29af33ad-9abc-4ff3-b520-e3177a680c27,Namespace:calico-apiserver,Attempt:0,}" Nov 1 10:04:58.389879 systemd[1]: Created slice kubepods-besteffort-pod5e9801c6_fe95_4f67_a365_4280796e7e3e.slice - libcontainer container kubepods-besteffort-pod5e9801c6_fe95_4f67_a365_4280796e7e3e.slice. 
Nov 1 10:04:58.396220 containerd[1640]: time="2025-11-01T10:04:58.396010320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zlp4v,Uid:5e9801c6-fe95-4f67-a365-4280796e7e3e,Namespace:calico-system,Attempt:0,}" Nov 1 10:04:58.531278 containerd[1640]: time="2025-11-01T10:04:58.529187377Z" level=error msg="Failed to destroy network for sandbox \"9dcf71135b265a385baec6d43208d6ae242444f6e03398854911db3fb5486335\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.532861 containerd[1640]: time="2025-11-01T10:04:58.532818122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lbx5t,Uid:90c5081d-f937-438e-bb8a-7ad343c3e65b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dcf71135b265a385baec6d43208d6ae242444f6e03398854911db3fb5486335\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.541411 kubelet[2775]: E1101 10:04:58.540007 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:58.541411 kubelet[2775]: E1101 10:04:58.540771 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dcf71135b265a385baec6d43208d6ae242444f6e03398854911db3fb5486335\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.541411 kubelet[2775]: E1101 10:04:58.540858 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dcf71135b265a385baec6d43208d6ae242444f6e03398854911db3fb5486335\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lbx5t" Nov 1 10:04:58.541411 kubelet[2775]: E1101 10:04:58.540897 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dcf71135b265a385baec6d43208d6ae242444f6e03398854911db3fb5486335\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lbx5t" Nov 1 10:04:58.543982 kubelet[2775]: E1101 10:04:58.541261 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lbx5t_kube-system(90c5081d-f937-438e-bb8a-7ad343c3e65b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lbx5t_kube-system(90c5081d-f937-438e-bb8a-7ad343c3e65b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dcf71135b265a385baec6d43208d6ae242444f6e03398854911db3fb5486335\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lbx5t" podUID="90c5081d-f937-438e-bb8a-7ad343c3e65b" Nov 1 10:04:58.544058 containerd[1640]: time="2025-11-01T10:04:58.541786694Z" level=error msg="Failed to destroy network for sandbox \"08ece8f87312cfd1bd3ec0bf9645956a7cc83a4cd02e1426a1447f1e30affb9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.544058 containerd[1640]: time="2025-11-01T10:04:58.543980602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 10:04:58.553270 containerd[1640]: time="2025-11-01T10:04:58.550376906Z" level=error msg="Failed to destroy network for sandbox \"cc8c498f617f6c4bb6d960e46f038f2a98b507e8363ffbbf93fc5a33a97dff1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.558736 containerd[1640]: time="2025-11-01T10:04:58.558558431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zlp4v,Uid:5e9801c6-fe95-4f67-a365-4280796e7e3e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"08ece8f87312cfd1bd3ec0bf9645956a7cc83a4cd02e1426a1447f1e30affb9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.561352 kubelet[2775]: E1101 10:04:58.558841 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08ece8f87312cfd1bd3ec0bf9645956a7cc83a4cd02e1426a1447f1e30affb9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.561352 kubelet[2775]: E1101 10:04:58.558914 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08ece8f87312cfd1bd3ec0bf9645956a7cc83a4cd02e1426a1447f1e30affb9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zlp4v" Nov 1 10:04:58.561352 kubelet[2775]: E1101 10:04:58.558937 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08ece8f87312cfd1bd3ec0bf9645956a7cc83a4cd02e1426a1447f1e30affb9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zlp4v" Nov 1 10:04:58.561505 kubelet[2775]: E1101 10:04:58.559000 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zlp4v_calico-system(5e9801c6-fe95-4f67-a365-4280796e7e3e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zlp4v_calico-system(5e9801c6-fe95-4f67-a365-4280796e7e3e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08ece8f87312cfd1bd3ec0bf9645956a7cc83a4cd02e1426a1447f1e30affb9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zlp4v" podUID="5e9801c6-fe95-4f67-a365-4280796e7e3e" Nov 1 10:04:58.564474 containerd[1640]: time="2025-11-01T10:04:58.563158876Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5mnnb,Uid:497638cc-4034-4ffe-9443-48cd7ad72cdc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc8c498f617f6c4bb6d960e46f038f2a98b507e8363ffbbf93fc5a33a97dff1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.564657 kubelet[2775]: 
E1101 10:04:58.563299 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc8c498f617f6c4bb6d960e46f038f2a98b507e8363ffbbf93fc5a33a97dff1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.564657 kubelet[2775]: E1101 10:04:58.563333 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc8c498f617f6c4bb6d960e46f038f2a98b507e8363ffbbf93fc5a33a97dff1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5mnnb" Nov 1 10:04:58.564657 kubelet[2775]: E1101 10:04:58.563351 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc8c498f617f6c4bb6d960e46f038f2a98b507e8363ffbbf93fc5a33a97dff1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5mnnb" Nov 1 10:04:58.564785 kubelet[2775]: E1101 10:04:58.563380 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5mnnb_calico-system(497638cc-4034-4ffe-9443-48cd7ad72cdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5mnnb_calico-system(497638cc-4034-4ffe-9443-48cd7ad72cdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc8c498f617f6c4bb6d960e46f038f2a98b507e8363ffbbf93fc5a33a97dff1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5mnnb" podUID="497638cc-4034-4ffe-9443-48cd7ad72cdc" Nov 1 10:04:58.594673 containerd[1640]: time="2025-11-01T10:04:58.594608801Z" level=error msg="Failed to destroy network for sandbox \"481b4ef2cb658a305e1652a6c0e3febc2cf78dac60d9dc37a0ae64d1e8134198\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.604828 containerd[1640]: time="2025-11-01T10:04:58.604654084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8599x,Uid:c66a3ca9-ff79-4bff-ac6d-52b470a1658e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"481b4ef2cb658a305e1652a6c0e3febc2cf78dac60d9dc37a0ae64d1e8134198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.605011 kubelet[2775]: E1101 10:04:58.604967 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"481b4ef2cb658a305e1652a6c0e3febc2cf78dac60d9dc37a0ae64d1e8134198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.605071 kubelet[2775]: E1101 10:04:58.605055 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"481b4ef2cb658a305e1652a6c0e3febc2cf78dac60d9dc37a0ae64d1e8134198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8599x" Nov 1 10:04:58.605114 kubelet[2775]: E1101 10:04:58.605077 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"481b4ef2cb658a305e1652a6c0e3febc2cf78dac60d9dc37a0ae64d1e8134198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8599x" Nov 1 10:04:58.605186 kubelet[2775]: E1101 10:04:58.605147 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8599x_kube-system(c66a3ca9-ff79-4bff-ac6d-52b470a1658e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8599x_kube-system(c66a3ca9-ff79-4bff-ac6d-52b470a1658e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"481b4ef2cb658a305e1652a6c0e3febc2cf78dac60d9dc37a0ae64d1e8134198\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8599x" podUID="c66a3ca9-ff79-4bff-ac6d-52b470a1658e" Nov 1 10:04:58.605773 containerd[1640]: time="2025-11-01T10:04:58.605744841Z" level=error msg="Failed to destroy network for sandbox \"73dd531a72de16c54f9f38969895fb7544b031dc625768626897dedec5cb2db3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.608032 containerd[1640]: time="2025-11-01T10:04:58.607985426Z" level=error msg="Failed to destroy network for sandbox \"f24f197203dcae77bf67cbddad380e54e633e0010642b08da360b0271f05aff4\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.608865 containerd[1640]: time="2025-11-01T10:04:58.608811587Z" level=error msg="Failed to destroy network for sandbox \"7afd0570e8505027cbf48bda157afeab0ad280017748b7b63eceeb99a0cbc550\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.610355 containerd[1640]: time="2025-11-01T10:04:58.609844064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f8446f94-kclf6,Uid:fc376135-15c2-4563-9e6f-3663c5522932,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73dd531a72de16c54f9f38969895fb7544b031dc625768626897dedec5cb2db3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.610577 kubelet[2775]: E1101 10:04:58.610547 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73dd531a72de16c54f9f38969895fb7544b031dc625768626897dedec5cb2db3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.610655 kubelet[2775]: E1101 10:04:58.610617 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73dd531a72de16c54f9f38969895fb7544b031dc625768626897dedec5cb2db3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56f8446f94-kclf6" Nov 1 10:04:58.610655 kubelet[2775]: E1101 10:04:58.610643 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73dd531a72de16c54f9f38969895fb7544b031dc625768626897dedec5cb2db3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56f8446f94-kclf6" Nov 1 10:04:58.610750 kubelet[2775]: E1101 10:04:58.610677 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56f8446f94-kclf6_calico-apiserver(fc376135-15c2-4563-9e6f-3663c5522932)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56f8446f94-kclf6_calico-apiserver(fc376135-15c2-4563-9e6f-3663c5522932)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73dd531a72de16c54f9f38969895fb7544b031dc625768626897dedec5cb2db3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56f8446f94-kclf6" podUID="fc376135-15c2-4563-9e6f-3663c5522932" Nov 1 10:04:58.612453 containerd[1640]: time="2025-11-01T10:04:58.612421682Z" level=error msg="Failed to destroy network for sandbox \"7afef20dcfe2d5e31ae5091c78e18b1fc4ae51f6c07a3d97cc9886b2e9d35969\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.613710 containerd[1640]: time="2025-11-01T10:04:58.613601587Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-54f854d67b-mmp4b,Uid:cb93b629-1d38-403a-a17d-82160a57c839,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7afd0570e8505027cbf48bda157afeab0ad280017748b7b63eceeb99a0cbc550\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.613804 kubelet[2775]: E1101 10:04:58.613776 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7afd0570e8505027cbf48bda157afeab0ad280017748b7b63eceeb99a0cbc550\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.613847 kubelet[2775]: E1101 10:04:58.613812 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7afd0570e8505027cbf48bda157afeab0ad280017748b7b63eceeb99a0cbc550\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54f854d67b-mmp4b" Nov 1 10:04:58.613847 kubelet[2775]: E1101 10:04:58.613830 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7afd0570e8505027cbf48bda157afeab0ad280017748b7b63eceeb99a0cbc550\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54f854d67b-mmp4b" Nov 1 10:04:58.613897 kubelet[2775]: E1101 10:04:58.613883 2775 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54f854d67b-mmp4b_calico-system(cb93b629-1d38-403a-a17d-82160a57c839)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54f854d67b-mmp4b_calico-system(cb93b629-1d38-403a-a17d-82160a57c839)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7afd0570e8505027cbf48bda157afeab0ad280017748b7b63eceeb99a0cbc550\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54f854d67b-mmp4b" podUID="cb93b629-1d38-403a-a17d-82160a57c839" Nov 1 10:04:58.614811 containerd[1640]: time="2025-11-01T10:04:58.614633483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f8446f94-wbw7g,Uid:29af33ad-9abc-4ff3-b520-e3177a680c27,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f24f197203dcae77bf67cbddad380e54e633e0010642b08da360b0271f05aff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.614883 kubelet[2775]: E1101 10:04:58.614793 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f24f197203dcae77bf67cbddad380e54e633e0010642b08da360b0271f05aff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.614883 kubelet[2775]: E1101 10:04:58.614873 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f24f197203dcae77bf67cbddad380e54e633e0010642b08da360b0271f05aff4\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56f8446f94-wbw7g" Nov 1 10:04:58.614929 kubelet[2775]: E1101 10:04:58.614895 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f24f197203dcae77bf67cbddad380e54e633e0010642b08da360b0271f05aff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56f8446f94-wbw7g" Nov 1 10:04:58.614961 kubelet[2775]: E1101 10:04:58.614935 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56f8446f94-wbw7g_calico-apiserver(29af33ad-9abc-4ff3-b520-e3177a680c27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56f8446f94-wbw7g_calico-apiserver(29af33ad-9abc-4ff3-b520-e3177a680c27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f24f197203dcae77bf67cbddad380e54e633e0010642b08da360b0271f05aff4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56f8446f94-wbw7g" podUID="29af33ad-9abc-4ff3-b520-e3177a680c27" Nov 1 10:04:58.616862 containerd[1640]: time="2025-11-01T10:04:58.616828412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b6f54dbf-5vmmj,Uid:11cb09fe-0906-4aa9-80bd-422fc601a30c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7afef20dcfe2d5e31ae5091c78e18b1fc4ae51f6c07a3d97cc9886b2e9d35969\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.616992 kubelet[2775]: E1101 10:04:58.616965 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7afef20dcfe2d5e31ae5091c78e18b1fc4ae51f6c07a3d97cc9886b2e9d35969\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:04:58.617033 kubelet[2775]: E1101 10:04:58.617000 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7afef20dcfe2d5e31ae5091c78e18b1fc4ae51f6c07a3d97cc9886b2e9d35969\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64b6f54dbf-5vmmj" Nov 1 10:04:58.617033 kubelet[2775]: E1101 10:04:58.617017 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7afef20dcfe2d5e31ae5091c78e18b1fc4ae51f6c07a3d97cc9886b2e9d35969\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64b6f54dbf-5vmmj" Nov 1 10:04:58.617085 kubelet[2775]: E1101 10:04:58.617051 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64b6f54dbf-5vmmj_calico-system(11cb09fe-0906-4aa9-80bd-422fc601a30c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-64b6f54dbf-5vmmj_calico-system(11cb09fe-0906-4aa9-80bd-422fc601a30c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7afef20dcfe2d5e31ae5091c78e18b1fc4ae51f6c07a3d97cc9886b2e9d35969\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64b6f54dbf-5vmmj" podUID="11cb09fe-0906-4aa9-80bd-422fc601a30c" Nov 1 10:05:04.629524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2532384531.mount: Deactivated successfully. Nov 1 10:05:05.865706 containerd[1640]: time="2025-11-01T10:05:05.865629226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:05:05.866651 containerd[1640]: time="2025-11-01T10:05:05.866621127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Nov 1 10:05:05.868204 containerd[1640]: time="2025-11-01T10:05:05.868167488Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:05:05.870372 containerd[1640]: time="2025-11-01T10:05:05.870340625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:05:05.870920 containerd[1640]: time="2025-11-01T10:05:05.870892752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.326878016s" Nov 1 10:05:05.870991 containerd[1640]: time="2025-11-01T10:05:05.870925573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 10:05:05.881646 containerd[1640]: time="2025-11-01T10:05:05.881561990Z" level=info msg="CreateContainer within sandbox \"3200e60e559ac1de3ec9bee7e1f131a5e81413d818a785208a118fb049174832\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 10:05:05.900338 containerd[1640]: time="2025-11-01T10:05:05.900297002Z" level=info msg="Container cdc52892fe4eee2a6b15db1e2d44a8d2a399f827db9079e34bf258ba1524c1ad: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:05:06.007724 containerd[1640]: time="2025-11-01T10:05:06.007647982Z" level=info msg="CreateContainer within sandbox \"3200e60e559ac1de3ec9bee7e1f131a5e81413d818a785208a118fb049174832\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cdc52892fe4eee2a6b15db1e2d44a8d2a399f827db9079e34bf258ba1524c1ad\"" Nov 1 10:05:06.008435 containerd[1640]: time="2025-11-01T10:05:06.008393951Z" level=info msg="StartContainer for \"cdc52892fe4eee2a6b15db1e2d44a8d2a399f827db9079e34bf258ba1524c1ad\"" Nov 1 10:05:06.009960 containerd[1640]: time="2025-11-01T10:05:06.009902793Z" level=info msg="connecting to shim cdc52892fe4eee2a6b15db1e2d44a8d2a399f827db9079e34bf258ba1524c1ad" address="unix:///run/containerd/s/8c5eeaff5ddc6888ef3571cc5ad475ef225a273837b5ca86624de4acb9895295" protocol=ttrpc version=3 Nov 1 10:05:06.096267 systemd[1]: Started cri-containerd-cdc52892fe4eee2a6b15db1e2d44a8d2a399f827db9079e34bf258ba1524c1ad.scope - libcontainer container cdc52892fe4eee2a6b15db1e2d44a8d2a399f827db9079e34bf258ba1524c1ad. Nov 1 10:05:06.252693 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Nov 1 10:05:06.253854 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 10:05:06.314393 containerd[1640]: time="2025-11-01T10:05:06.314312091Z" level=info msg="StartContainer for \"cdc52892fe4eee2a6b15db1e2d44a8d2a399f827db9079e34bf258ba1524c1ad\" returns successfully" Nov 1 10:05:06.426187 kubelet[2775]: I1101 10:05:06.426130 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx8tb\" (UniqueName: \"kubernetes.io/projected/cb93b629-1d38-403a-a17d-82160a57c839-kube-api-access-rx8tb\") pod \"cb93b629-1d38-403a-a17d-82160a57c839\" (UID: \"cb93b629-1d38-403a-a17d-82160a57c839\") " Nov 1 10:05:06.426187 kubelet[2775]: I1101 10:05:06.426175 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb93b629-1d38-403a-a17d-82160a57c839-whisker-backend-key-pair\") pod \"cb93b629-1d38-403a-a17d-82160a57c839\" (UID: \"cb93b629-1d38-403a-a17d-82160a57c839\") " Nov 1 10:05:06.426734 kubelet[2775]: I1101 10:05:06.426203 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb93b629-1d38-403a-a17d-82160a57c839-whisker-ca-bundle\") pod \"cb93b629-1d38-403a-a17d-82160a57c839\" (UID: \"cb93b629-1d38-403a-a17d-82160a57c839\") " Nov 1 10:05:06.426734 kubelet[2775]: I1101 10:05:06.426607 2775 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb93b629-1d38-403a-a17d-82160a57c839-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cb93b629-1d38-403a-a17d-82160a57c839" (UID: "cb93b629-1d38-403a-a17d-82160a57c839"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 10:05:06.430064 kubelet[2775]: I1101 10:05:06.430024 2775 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb93b629-1d38-403a-a17d-82160a57c839-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cb93b629-1d38-403a-a17d-82160a57c839" (UID: "cb93b629-1d38-403a-a17d-82160a57c839"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 10:05:06.430162 kubelet[2775]: I1101 10:05:06.430119 2775 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb93b629-1d38-403a-a17d-82160a57c839-kube-api-access-rx8tb" (OuterVolumeSpecName: "kube-api-access-rx8tb") pod "cb93b629-1d38-403a-a17d-82160a57c839" (UID: "cb93b629-1d38-403a-a17d-82160a57c839"). InnerVolumeSpecName "kube-api-access-rx8tb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 10:05:06.526840 kubelet[2775]: I1101 10:05:06.526783 2775 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rx8tb\" (UniqueName: \"kubernetes.io/projected/cb93b629-1d38-403a-a17d-82160a57c839-kube-api-access-rx8tb\") on node \"localhost\" DevicePath \"\"" Nov 1 10:05:06.526840 kubelet[2775]: I1101 10:05:06.526829 2775 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb93b629-1d38-403a-a17d-82160a57c839-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 1 10:05:06.527071 kubelet[2775]: I1101 10:05:06.526847 2775 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb93b629-1d38-403a-a17d-82160a57c839-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 1 10:05:06.568225 kubelet[2775]: E1101 10:05:06.568001 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:05:06.573520 systemd[1]: Removed slice kubepods-besteffort-podcb93b629_1d38_403a_a17d_82160a57c839.slice - libcontainer container kubepods-besteffort-podcb93b629_1d38_403a_a17d_82160a57c839.slice. Nov 1 10:05:06.597935 kubelet[2775]: I1101 10:05:06.597805 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bzv5g" podStartSLOduration=1.920606495 podStartE2EDuration="18.597772064s" podCreationTimestamp="2025-11-01 10:04:48 +0000 UTC" firstStartedPulling="2025-11-01 10:04:49.194736597 +0000 UTC m=+21.891002645" lastFinishedPulling="2025-11-01 10:05:05.871902166 +0000 UTC m=+38.568168214" observedRunningTime="2025-11-01 10:05:06.587691081 +0000 UTC m=+39.283957119" watchObservedRunningTime="2025-11-01 10:05:06.597772064 +0000 UTC m=+39.294038113" Nov 1 10:05:06.644801 systemd[1]: Created slice kubepods-besteffort-pod7d67457e_e809_4b44_b320_e1c49fbcfb7c.slice - libcontainer container kubepods-besteffort-pod7d67457e_e809_4b44_b320_e1c49fbcfb7c.slice. 
Nov 1 10:05:06.728469 kubelet[2775]: I1101 10:05:06.728394 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7d67457e-e809-4b44-b320-e1c49fbcfb7c-whisker-backend-key-pair\") pod \"whisker-cc77b7fcd-hckvh\" (UID: \"7d67457e-e809-4b44-b320-e1c49fbcfb7c\") " pod="calico-system/whisker-cc77b7fcd-hckvh" Nov 1 10:05:06.728469 kubelet[2775]: I1101 10:05:06.728455 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d67457e-e809-4b44-b320-e1c49fbcfb7c-whisker-ca-bundle\") pod \"whisker-cc77b7fcd-hckvh\" (UID: \"7d67457e-e809-4b44-b320-e1c49fbcfb7c\") " pod="calico-system/whisker-cc77b7fcd-hckvh" Nov 1 10:05:06.728469 kubelet[2775]: I1101 10:05:06.728490 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gb78\" (UniqueName: \"kubernetes.io/projected/7d67457e-e809-4b44-b320-e1c49fbcfb7c-kube-api-access-2gb78\") pod \"whisker-cc77b7fcd-hckvh\" (UID: \"7d67457e-e809-4b44-b320-e1c49fbcfb7c\") " pod="calico-system/whisker-cc77b7fcd-hckvh" Nov 1 10:05:06.882566 systemd[1]: var-lib-kubelet-pods-cb93b629\x2d1d38\x2d403a\x2da17d\x2d82160a57c839-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drx8tb.mount: Deactivated successfully. Nov 1 10:05:06.882670 systemd[1]: var-lib-kubelet-pods-cb93b629\x2d1d38\x2d403a\x2da17d\x2d82160a57c839-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 1 10:05:06.948854 containerd[1640]: time="2025-11-01T10:05:06.948785002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cc77b7fcd-hckvh,Uid:7d67457e-e809-4b44-b320-e1c49fbcfb7c,Namespace:calico-system,Attempt:0,}" Nov 1 10:05:07.306809 systemd-networkd[1526]: cali0819de789f7: Link UP Nov 1 10:05:07.307196 systemd-networkd[1526]: cali0819de789f7: Gained carrier Nov 1 10:05:07.383577 kubelet[2775]: I1101 10:05:07.383521 2775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb93b629-1d38-403a-a17d-82160a57c839" path="/var/lib/kubelet/pods/cb93b629-1d38-403a-a17d-82160a57c839/volumes" Nov 1 10:05:07.455148 containerd[1640]: 2025-11-01 10:05:06.977 [INFO][3935] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:05:07.455148 containerd[1640]: 2025-11-01 10:05:06.996 [INFO][3935] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--cc77b7fcd--hckvh-eth0 whisker-cc77b7fcd- calico-system 7d67457e-e809-4b44-b320-e1c49fbcfb7c 889 0 2025-11-01 10:05:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:cc77b7fcd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-cc77b7fcd-hckvh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0819de789f7 [] [] }} ContainerID="f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" Namespace="calico-system" Pod="whisker-cc77b7fcd-hckvh" WorkloadEndpoint="localhost-k8s-whisker--cc77b7fcd--hckvh-" Nov 1 10:05:07.455148 containerd[1640]: 2025-11-01 10:05:06.996 [INFO][3935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" Namespace="calico-system" Pod="whisker-cc77b7fcd-hckvh" WorkloadEndpoint="localhost-k8s-whisker--cc77b7fcd--hckvh-eth0" Nov 1 
10:05:07.455148 containerd[1640]: 2025-11-01 10:05:07.071 [INFO][3950] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" HandleID="k8s-pod-network.f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" Workload="localhost-k8s-whisker--cc77b7fcd--hckvh-eth0" Nov 1 10:05:07.455457 containerd[1640]: 2025-11-01 10:05:07.072 [INFO][3950] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" HandleID="k8s-pod-network.f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" Workload="localhost-k8s-whisker--cc77b7fcd--hckvh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c1860), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-cc77b7fcd-hckvh", "timestamp":"2025-11-01 10:05:07.071581853 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:05:07.455457 containerd[1640]: 2025-11-01 10:05:07.072 [INFO][3950] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:05:07.455457 containerd[1640]: 2025-11-01 10:05:07.072 [INFO][3950] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:05:07.455457 containerd[1640]: 2025-11-01 10:05:07.072 [INFO][3950] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:05:07.455457 containerd[1640]: 2025-11-01 10:05:07.081 [INFO][3950] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" host="localhost" Nov 1 10:05:07.455457 containerd[1640]: 2025-11-01 10:05:07.086 [INFO][3950] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:05:07.455457 containerd[1640]: 2025-11-01 10:05:07.091 [INFO][3950] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:05:07.455457 containerd[1640]: 2025-11-01 10:05:07.092 [INFO][3950] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:05:07.455457 containerd[1640]: 2025-11-01 10:05:07.094 [INFO][3950] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:05:07.455457 containerd[1640]: 2025-11-01 10:05:07.094 [INFO][3950] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" host="localhost" Nov 1 10:05:07.455743 containerd[1640]: 2025-11-01 10:05:07.096 [INFO][3950] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e Nov 1 10:05:07.455743 containerd[1640]: 2025-11-01 10:05:07.246 [INFO][3950] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" host="localhost" Nov 1 10:05:07.455743 containerd[1640]: 2025-11-01 10:05:07.295 [INFO][3950] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" host="localhost" Nov 1 10:05:07.455743 containerd[1640]: 2025-11-01 10:05:07.295 [INFO][3950] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" host="localhost" Nov 1 10:05:07.455743 containerd[1640]: 2025-11-01 10:05:07.295 [INFO][3950] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:05:07.455743 containerd[1640]: 2025-11-01 10:05:07.295 [INFO][3950] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" HandleID="k8s-pod-network.f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" Workload="localhost-k8s-whisker--cc77b7fcd--hckvh-eth0" Nov 1 10:05:07.455928 containerd[1640]: 2025-11-01 10:05:07.299 [INFO][3935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" Namespace="calico-system" Pod="whisker-cc77b7fcd-hckvh" WorkloadEndpoint="localhost-k8s-whisker--cc77b7fcd--hckvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cc77b7fcd--hckvh-eth0", GenerateName:"whisker-cc77b7fcd-", Namespace:"calico-system", SelfLink:"", UID:"7d67457e-e809-4b44-b320-e1c49fbcfb7c", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 5, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cc77b7fcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-cc77b7fcd-hckvh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0819de789f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:05:07.455928 containerd[1640]: 2025-11-01 10:05:07.299 [INFO][3935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" Namespace="calico-system" Pod="whisker-cc77b7fcd-hckvh" WorkloadEndpoint="localhost-k8s-whisker--cc77b7fcd--hckvh-eth0" Nov 1 10:05:07.456028 containerd[1640]: 2025-11-01 10:05:07.299 [INFO][3935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0819de789f7 ContainerID="f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" Namespace="calico-system" Pod="whisker-cc77b7fcd-hckvh" WorkloadEndpoint="localhost-k8s-whisker--cc77b7fcd--hckvh-eth0" Nov 1 10:05:07.456028 containerd[1640]: 2025-11-01 10:05:07.307 [INFO][3935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" Namespace="calico-system" Pod="whisker-cc77b7fcd-hckvh" WorkloadEndpoint="localhost-k8s-whisker--cc77b7fcd--hckvh-eth0" Nov 1 10:05:07.456090 containerd[1640]: 2025-11-01 10:05:07.307 [INFO][3935] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" Namespace="calico-system" Pod="whisker-cc77b7fcd-hckvh" 
WorkloadEndpoint="localhost-k8s-whisker--cc77b7fcd--hckvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cc77b7fcd--hckvh-eth0", GenerateName:"whisker-cc77b7fcd-", Namespace:"calico-system", SelfLink:"", UID:"7d67457e-e809-4b44-b320-e1c49fbcfb7c", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 5, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cc77b7fcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e", Pod:"whisker-cc77b7fcd-hckvh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0819de789f7", MAC:"a6:8c:95:65:6b:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:05:07.456200 containerd[1640]: 2025-11-01 10:05:07.451 [INFO][3935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" Namespace="calico-system" Pod="whisker-cc77b7fcd-hckvh" WorkloadEndpoint="localhost-k8s-whisker--cc77b7fcd--hckvh-eth0" Nov 1 10:05:07.832455 containerd[1640]: time="2025-11-01T10:05:07.832382381Z" level=info msg="connecting to shim 
f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e" address="unix:///run/containerd/s/c99ed86c2c8843d5434cf365584b9e92003c8ab1d1bdf1f55fc68dd682bac9cd" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:05:07.890587 systemd[1]: Started cri-containerd-f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e.scope - libcontainer container f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e. Nov 1 10:05:07.923847 systemd-resolved[1373]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:05:08.092213 containerd[1640]: time="2025-11-01T10:05:08.092023205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cc77b7fcd-hckvh,Uid:7d67457e-e809-4b44-b320-e1c49fbcfb7c,Namespace:calico-system,Attempt:0,} returns sandbox id \"f1a692e6bc0de0060cea886b7cf05f50132385adce491bfb8b39a16edca9473e\"" Nov 1 10:05:08.096792 containerd[1640]: time="2025-11-01T10:05:08.096561871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 10:05:08.219291 systemd[1]: Started sshd@7-10.0.0.55:22-10.0.0.1:57648.service - OpenSSH per-connection server daemon (10.0.0.1:57648). Nov 1 10:05:08.314511 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 57648 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:05:08.316202 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:05:08.322584 systemd-logind[1618]: New session 8 of user core. Nov 1 10:05:08.329270 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 1 10:05:08.380445 kubelet[2775]: I1101 10:05:08.380098 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 10:05:08.381629 kubelet[2775]: E1101 10:05:08.381570 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:05:08.438513 sshd[4115]: Connection closed by 10.0.0.1 port 57648 Nov 1 10:05:08.439321 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Nov 1 10:05:08.443994 systemd[1]: sshd@7-10.0.0.55:22-10.0.0.1:57648.service: Deactivated successfully. Nov 1 10:05:08.446568 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 10:05:08.454674 containerd[1640]: time="2025-11-01T10:05:08.454627033Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:08.456179 containerd[1640]: time="2025-11-01T10:05:08.456024574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 10:05:08.456393 kubelet[2775]: E1101 10:05:08.456356 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:05:08.456520 kubelet[2775]: E1101 10:05:08.456489 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:05:08.456721 containerd[1640]: 
time="2025-11-01T10:05:08.456074528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:08.458227 systemd-logind[1618]: Session 8 logged out. Waiting for processes to exit. Nov 1 10:05:08.460294 systemd-logind[1618]: Removed session 8. Nov 1 10:05:08.465457 kubelet[2775]: E1101 10:05:08.465329 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fbee598fef644ce7a5107aaf1ad88c55,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2gb78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cc77b7fcd-hckvh_calico-system(7d67457e-e809-4b44-b320-e1c49fbcfb7c): ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:08.468635 containerd[1640]: time="2025-11-01T10:05:08.468599186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 10:05:08.835073 containerd[1640]: time="2025-11-01T10:05:08.835007231Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:08.836332 containerd[1640]: time="2025-11-01T10:05:08.836271422Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 10:05:08.836469 containerd[1640]: time="2025-11-01T10:05:08.836320024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:08.836561 kubelet[2775]: E1101 10:05:08.836505 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:05:08.836636 kubelet[2775]: E1101 10:05:08.836572 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:05:08.836783 kubelet[2775]: E1101 10:05:08.836721 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2gb78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cc77b7fcd-hckvh_calico-system(7d67457e-e809-4b44-b320-e1c49fbcfb7c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:08.837985 kubelet[2775]: E1101 10:05:08.837919 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cc77b7fcd-hckvh" podUID="7d67457e-e809-4b44-b320-e1c49fbcfb7c" Nov 1 10:05:09.260359 systemd-networkd[1526]: cali0819de789f7: Gained IPv6LL Nov 1 10:05:09.381504 containerd[1640]: time="2025-11-01T10:05:09.381434503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f8446f94-wbw7g,Uid:29af33ad-9abc-4ff3-b520-e3177a680c27,Namespace:calico-apiserver,Attempt:0,}" Nov 1 10:05:09.486300 systemd-networkd[1526]: calide025374fed: Link UP Nov 1 10:05:09.487191 systemd-networkd[1526]: calide025374fed: Gained carrier Nov 1 10:05:09.577359 kubelet[2775]: E1101 10:05:09.577295 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cc77b7fcd-hckvh" podUID="7d67457e-e809-4b44-b320-e1c49fbcfb7c" Nov 1 10:05:09.630140 containerd[1640]: 2025-11-01 10:05:09.411 [INFO][4207] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:05:09.630140 containerd[1640]: 2025-11-01 10:05:09.422 [INFO][4207] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--56f8446f94--wbw7g-eth0 calico-apiserver-56f8446f94- calico-apiserver 29af33ad-9abc-4ff3-b520-e3177a680c27 824 0 2025-11-01 10:04:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56f8446f94 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-56f8446f94-wbw7g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calide025374fed [] [] }} ContainerID="09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" Namespace="calico-apiserver" Pod="calico-apiserver-56f8446f94-wbw7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f8446f94--wbw7g-" Nov 1 10:05:09.630140 containerd[1640]: 2025-11-01 10:05:09.422 [INFO][4207] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" Namespace="calico-apiserver" Pod="calico-apiserver-56f8446f94-wbw7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f8446f94--wbw7g-eth0" Nov 1 10:05:09.630140 containerd[1640]: 2025-11-01 10:05:09.450 [INFO][4222] ipam/ipam_plugin.go 227: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" HandleID="k8s-pod-network.09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" Workload="localhost-k8s-calico--apiserver--56f8446f94--wbw7g-eth0" Nov 1 10:05:09.630666 containerd[1640]: 2025-11-01 10:05:09.450 [INFO][4222] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" HandleID="k8s-pod-network.09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" Workload="localhost-k8s-calico--apiserver--56f8446f94--wbw7g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7080), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-56f8446f94-wbw7g", "timestamp":"2025-11-01 10:05:09.450474938 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:05:09.630666 containerd[1640]: 2025-11-01 10:05:09.450 [INFO][4222] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:05:09.630666 containerd[1640]: 2025-11-01 10:05:09.450 [INFO][4222] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:05:09.630666 containerd[1640]: 2025-11-01 10:05:09.450 [INFO][4222] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:05:09.630666 containerd[1640]: 2025-11-01 10:05:09.458 [INFO][4222] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" host="localhost" Nov 1 10:05:09.630666 containerd[1640]: 2025-11-01 10:05:09.463 [INFO][4222] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:05:09.630666 containerd[1640]: 2025-11-01 10:05:09.467 [INFO][4222] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:05:09.630666 containerd[1640]: 2025-11-01 10:05:09.469 [INFO][4222] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:05:09.630666 containerd[1640]: 2025-11-01 10:05:09.471 [INFO][4222] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:05:09.630666 containerd[1640]: 2025-11-01 10:05:09.471 [INFO][4222] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" host="localhost" Nov 1 10:05:09.631296 containerd[1640]: 2025-11-01 10:05:09.472 [INFO][4222] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e Nov 1 10:05:09.631296 containerd[1640]: 2025-11-01 10:05:09.476 [INFO][4222] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" host="localhost" Nov 1 10:05:09.631296 containerd[1640]: 2025-11-01 10:05:09.480 [INFO][4222] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" host="localhost" Nov 1 10:05:09.631296 containerd[1640]: 2025-11-01 10:05:09.480 [INFO][4222] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" host="localhost" Nov 1 10:05:09.631296 containerd[1640]: 2025-11-01 10:05:09.480 [INFO][4222] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:05:09.631296 containerd[1640]: 2025-11-01 10:05:09.480 [INFO][4222] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" HandleID="k8s-pod-network.09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" Workload="localhost-k8s-calico--apiserver--56f8446f94--wbw7g-eth0" Nov 1 10:05:09.631464 containerd[1640]: 2025-11-01 10:05:09.484 [INFO][4207] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" Namespace="calico-apiserver" Pod="calico-apiserver-56f8446f94-wbw7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f8446f94--wbw7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56f8446f94--wbw7g-eth0", GenerateName:"calico-apiserver-56f8446f94-", Namespace:"calico-apiserver", SelfLink:"", UID:"29af33ad-9abc-4ff3-b520-e3177a680c27", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 4, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f8446f94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-56f8446f94-wbw7g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide025374fed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:05:09.631537 containerd[1640]: 2025-11-01 10:05:09.484 [INFO][4207] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" Namespace="calico-apiserver" Pod="calico-apiserver-56f8446f94-wbw7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f8446f94--wbw7g-eth0" Nov 1 10:05:09.631537 containerd[1640]: 2025-11-01 10:05:09.484 [INFO][4207] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide025374fed ContainerID="09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" Namespace="calico-apiserver" Pod="calico-apiserver-56f8446f94-wbw7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f8446f94--wbw7g-eth0" Nov 1 10:05:09.631537 containerd[1640]: 2025-11-01 10:05:09.487 [INFO][4207] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" Namespace="calico-apiserver" Pod="calico-apiserver-56f8446f94-wbw7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f8446f94--wbw7g-eth0" Nov 1 10:05:09.631624 containerd[1640]: 2025-11-01 10:05:09.489 [INFO][4207] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" Namespace="calico-apiserver" Pod="calico-apiserver-56f8446f94-wbw7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f8446f94--wbw7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56f8446f94--wbw7g-eth0", GenerateName:"calico-apiserver-56f8446f94-", Namespace:"calico-apiserver", SelfLink:"", UID:"29af33ad-9abc-4ff3-b520-e3177a680c27", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 4, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f8446f94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e", Pod:"calico-apiserver-56f8446f94-wbw7g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide025374fed", MAC:"1e:ef:98:1a:a5:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:05:09.631687 containerd[1640]: 2025-11-01 10:05:09.625 [INFO][4207] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" Namespace="calico-apiserver" Pod="calico-apiserver-56f8446f94-wbw7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f8446f94--wbw7g-eth0" Nov 1 10:05:09.667971 containerd[1640]: time="2025-11-01T10:05:09.667898720Z" level=info msg="connecting to shim 09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e" address="unix:///run/containerd/s/fcd6879a7c3a1aad03ce8a9bbd8411ed62f5a6813771ec3a6d915360032dd271" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:05:09.698254 systemd[1]: Started cri-containerd-09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e.scope - libcontainer container 09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e. Nov 1 10:05:09.711453 systemd-resolved[1373]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:05:09.740425 containerd[1640]: time="2025-11-01T10:05:09.740362737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f8446f94-wbw7g,Uid:29af33ad-9abc-4ff3-b520-e3177a680c27,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"09fee4e8d1e049d6d09ce72ade947abd59e1497e296f54707321943bc2d4a53e\"" Nov 1 10:05:09.742130 containerd[1640]: time="2025-11-01T10:05:09.742079899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:05:10.043574 containerd[1640]: time="2025-11-01T10:05:10.043510432Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:10.046085 containerd[1640]: time="2025-11-01T10:05:10.046041199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:05:10.046085 containerd[1640]: time="2025-11-01T10:05:10.046077066Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:10.046391 kubelet[2775]: E1101 10:05:10.046341 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:05:10.046443 kubelet[2775]: E1101 10:05:10.046409 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:05:10.046617 kubelet[2775]: E1101 10:05:10.046569 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89b9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56f8446f94-wbw7g_calico-apiserver(29af33ad-9abc-4ff3-b520-e3177a680c27): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:10.047770 kubelet[2775]: E1101 10:05:10.047727 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56f8446f94-wbw7g" podUID="29af33ad-9abc-4ff3-b520-e3177a680c27" Nov 1 10:05:10.381556 kubelet[2775]: E1101 10:05:10.380703 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:05:10.381723 containerd[1640]: time="2025-11-01T10:05:10.381206066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8599x,Uid:c66a3ca9-ff79-4bff-ac6d-52b470a1658e,Namespace:kube-system,Attempt:0,}" Nov 1 10:05:10.382046 containerd[1640]: time="2025-11-01T10:05:10.381830989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b6f54dbf-5vmmj,Uid:11cb09fe-0906-4aa9-80bd-422fc601a30c,Namespace:calico-system,Attempt:0,}" Nov 1 10:05:10.495146 systemd-networkd[1526]: calibc82e1970f7: Link UP Nov 1 10:05:10.496190 systemd-networkd[1526]: calibc82e1970f7: Gained carrier Nov 1 10:05:10.508419 containerd[1640]: 2025-11-01 10:05:10.414 [INFO][4307] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:05:10.508419 containerd[1640]: 2025-11-01 10:05:10.426 [INFO][4307] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--64b6f54dbf--5vmmj-eth0 calico-kube-controllers-64b6f54dbf- calico-system 
11cb09fe-0906-4aa9-80bd-422fc601a30c 820 0 2025-11-01 10:04:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64b6f54dbf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-64b6f54dbf-5vmmj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibc82e1970f7 [] [] }} ContainerID="1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" Namespace="calico-system" Pod="calico-kube-controllers-64b6f54dbf-5vmmj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64b6f54dbf--5vmmj-" Nov 1 10:05:10.508419 containerd[1640]: 2025-11-01 10:05:10.426 [INFO][4307] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" Namespace="calico-system" Pod="calico-kube-controllers-64b6f54dbf-5vmmj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64b6f54dbf--5vmmj-eth0" Nov 1 10:05:10.508419 containerd[1640]: 2025-11-01 10:05:10.456 [INFO][4336] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" HandleID="k8s-pod-network.1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" Workload="localhost-k8s-calico--kube--controllers--64b6f54dbf--5vmmj-eth0" Nov 1 10:05:10.508675 containerd[1640]: 2025-11-01 10:05:10.456 [INFO][4336] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" HandleID="k8s-pod-network.1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" Workload="localhost-k8s-calico--kube--controllers--64b6f54dbf--5vmmj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6e0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-64b6f54dbf-5vmmj", "timestamp":"2025-11-01 10:05:10.456152468 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:05:10.508675 containerd[1640]: 2025-11-01 10:05:10.456 [INFO][4336] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:05:10.508675 containerd[1640]: 2025-11-01 10:05:10.456 [INFO][4336] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 10:05:10.508675 containerd[1640]: 2025-11-01 10:05:10.456 [INFO][4336] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:05:10.508675 containerd[1640]: 2025-11-01 10:05:10.463 [INFO][4336] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" host="localhost" Nov 1 10:05:10.508675 containerd[1640]: 2025-11-01 10:05:10.466 [INFO][4336] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:05:10.508675 containerd[1640]: 2025-11-01 10:05:10.470 [INFO][4336] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:05:10.508675 containerd[1640]: 2025-11-01 10:05:10.471 [INFO][4336] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:05:10.508675 containerd[1640]: 2025-11-01 10:05:10.474 [INFO][4336] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:05:10.508675 containerd[1640]: 2025-11-01 10:05:10.474 [INFO][4336] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" host="localhost" Nov 1 
10:05:10.508987 containerd[1640]: 2025-11-01 10:05:10.476 [INFO][4336] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce Nov 1 10:05:10.508987 containerd[1640]: 2025-11-01 10:05:10.482 [INFO][4336] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" host="localhost" Nov 1 10:05:10.508987 containerd[1640]: 2025-11-01 10:05:10.488 [INFO][4336] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" host="localhost" Nov 1 10:05:10.508987 containerd[1640]: 2025-11-01 10:05:10.488 [INFO][4336] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" host="localhost" Nov 1 10:05:10.508987 containerd[1640]: 2025-11-01 10:05:10.489 [INFO][4336] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 10:05:10.508987 containerd[1640]: 2025-11-01 10:05:10.489 [INFO][4336] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" HandleID="k8s-pod-network.1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" Workload="localhost-k8s-calico--kube--controllers--64b6f54dbf--5vmmj-eth0" Nov 1 10:05:10.509133 containerd[1640]: 2025-11-01 10:05:10.492 [INFO][4307] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" Namespace="calico-system" Pod="calico-kube-controllers-64b6f54dbf-5vmmj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64b6f54dbf--5vmmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64b6f54dbf--5vmmj-eth0", GenerateName:"calico-kube-controllers-64b6f54dbf-", Namespace:"calico-system", SelfLink:"", UID:"11cb09fe-0906-4aa9-80bd-422fc601a30c", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 4, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64b6f54dbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-64b6f54dbf-5vmmj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc82e1970f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:05:10.509197 containerd[1640]: 2025-11-01 10:05:10.492 [INFO][4307] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" Namespace="calico-system" Pod="calico-kube-controllers-64b6f54dbf-5vmmj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64b6f54dbf--5vmmj-eth0" Nov 1 10:05:10.509197 containerd[1640]: 2025-11-01 10:05:10.492 [INFO][4307] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc82e1970f7 ContainerID="1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" Namespace="calico-system" Pod="calico-kube-controllers-64b6f54dbf-5vmmj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64b6f54dbf--5vmmj-eth0" Nov 1 10:05:10.509197 containerd[1640]: 2025-11-01 10:05:10.496 [INFO][4307] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" Namespace="calico-system" Pod="calico-kube-controllers-64b6f54dbf-5vmmj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64b6f54dbf--5vmmj-eth0" Nov 1 10:05:10.509264 containerd[1640]: 2025-11-01 10:05:10.496 [INFO][4307] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" Namespace="calico-system" Pod="calico-kube-controllers-64b6f54dbf-5vmmj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64b6f54dbf--5vmmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64b6f54dbf--5vmmj-eth0", GenerateName:"calico-kube-controllers-64b6f54dbf-", Namespace:"calico-system", SelfLink:"", UID:"11cb09fe-0906-4aa9-80bd-422fc601a30c", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 4, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64b6f54dbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce", Pod:"calico-kube-controllers-64b6f54dbf-5vmmj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc82e1970f7", MAC:"72:bb:e7:4a:74:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:05:10.509317 containerd[1640]: 2025-11-01 10:05:10.505 [INFO][4307] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" Namespace="calico-system" Pod="calico-kube-controllers-64b6f54dbf-5vmmj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64b6f54dbf--5vmmj-eth0" Nov 1 10:05:10.531307 containerd[1640]: time="2025-11-01T10:05:10.531246747Z" level=info msg="connecting to shim 
1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce" address="unix:///run/containerd/s/f97a41f5260f8609eb081d9d20a1f9a6175ad75b62b6cd2cfcddfbd7e49c9265" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:05:10.558315 systemd[1]: Started cri-containerd-1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce.scope - libcontainer container 1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce. Nov 1 10:05:10.572731 systemd-resolved[1373]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:05:10.581459 kubelet[2775]: E1101 10:05:10.581379 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56f8446f94-wbw7g" podUID="29af33ad-9abc-4ff3-b520-e3177a680c27" Nov 1 10:05:10.611615 systemd-networkd[1526]: cali7dd856553ff: Link UP Nov 1 10:05:10.611798 systemd-networkd[1526]: cali7dd856553ff: Gained carrier Nov 1 10:05:10.627471 containerd[1640]: time="2025-11-01T10:05:10.627420200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b6f54dbf-5vmmj,Uid:11cb09fe-0906-4aa9-80bd-422fc601a30c,Namespace:calico-system,Attempt:0,} returns sandbox id \"1a6ae4e0ceb5346d4fc1fbcd49280984db73e69f14aca22477316d0059009fce\"" Nov 1 10:05:10.627795 containerd[1640]: 2025-11-01 10:05:10.412 [INFO][4305] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:05:10.627795 containerd[1640]: 2025-11-01 10:05:10.427 [INFO][4305] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--668d6bf9bc--8599x-eth0 coredns-668d6bf9bc- kube-system c66a3ca9-ff79-4bff-ac6d-52b470a1658e 823 0 2025-11-01 10:04:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-8599x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7dd856553ff [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" Namespace="kube-system" Pod="coredns-668d6bf9bc-8599x" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8599x-" Nov 1 10:05:10.627795 containerd[1640]: 2025-11-01 10:05:10.427 [INFO][4305] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" Namespace="kube-system" Pod="coredns-668d6bf9bc-8599x" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8599x-eth0" Nov 1 10:05:10.627795 containerd[1640]: 2025-11-01 10:05:10.458 [INFO][4334] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" HandleID="k8s-pod-network.9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" Workload="localhost-k8s-coredns--668d6bf9bc--8599x-eth0" Nov 1 10:05:10.627984 containerd[1640]: 2025-11-01 10:05:10.458 [INFO][4334] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" HandleID="k8s-pod-network.9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" Workload="localhost-k8s-coredns--668d6bf9bc--8599x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138da0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-8599x", "timestamp":"2025-11-01 10:05:10.458234003 
+0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:05:10.627984 containerd[1640]: 2025-11-01 10:05:10.458 [INFO][4334] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:05:10.627984 containerd[1640]: 2025-11-01 10:05:10.489 [INFO][4334] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 10:05:10.627984 containerd[1640]: 2025-11-01 10:05:10.489 [INFO][4334] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:05:10.627984 containerd[1640]: 2025-11-01 10:05:10.564 [INFO][4334] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" host="localhost" Nov 1 10:05:10.627984 containerd[1640]: 2025-11-01 10:05:10.568 [INFO][4334] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:05:10.627984 containerd[1640]: 2025-11-01 10:05:10.574 [INFO][4334] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:05:10.627984 containerd[1640]: 2025-11-01 10:05:10.576 [INFO][4334] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:05:10.627984 containerd[1640]: 2025-11-01 10:05:10.579 [INFO][4334] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:05:10.627984 containerd[1640]: 2025-11-01 10:05:10.579 [INFO][4334] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" host="localhost" Nov 1 10:05:10.628210 containerd[1640]: 2025-11-01 10:05:10.581 [INFO][4334] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41 Nov 1 10:05:10.628210 containerd[1640]: 2025-11-01 10:05:10.588 [INFO][4334] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" host="localhost" Nov 1 10:05:10.628210 containerd[1640]: 2025-11-01 10:05:10.600 [INFO][4334] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" host="localhost" Nov 1 10:05:10.628210 containerd[1640]: 2025-11-01 10:05:10.600 [INFO][4334] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" host="localhost" Nov 1 10:05:10.628210 containerd[1640]: 2025-11-01 10:05:10.600 [INFO][4334] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 10:05:10.628210 containerd[1640]: 2025-11-01 10:05:10.600 [INFO][4334] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" HandleID="k8s-pod-network.9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" Workload="localhost-k8s-coredns--668d6bf9bc--8599x-eth0" Nov 1 10:05:10.628330 containerd[1640]: 2025-11-01 10:05:10.606 [INFO][4305] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" Namespace="kube-system" Pod="coredns-668d6bf9bc-8599x" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8599x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8599x-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c66a3ca9-ff79-4bff-ac6d-52b470a1658e", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 4, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-8599x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7dd856553ff", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:05:10.628390 containerd[1640]: 2025-11-01 10:05:10.606 [INFO][4305] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" Namespace="kube-system" Pod="coredns-668d6bf9bc-8599x" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8599x-eth0" Nov 1 10:05:10.628390 containerd[1640]: 2025-11-01 10:05:10.606 [INFO][4305] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7dd856553ff ContainerID="9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" Namespace="kube-system" Pod="coredns-668d6bf9bc-8599x" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8599x-eth0" Nov 1 10:05:10.628390 containerd[1640]: 2025-11-01 10:05:10.611 [INFO][4305] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" Namespace="kube-system" Pod="coredns-668d6bf9bc-8599x" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8599x-eth0" Nov 1 10:05:10.628453 containerd[1640]: 2025-11-01 10:05:10.612 [INFO][4305] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" Namespace="kube-system" Pod="coredns-668d6bf9bc-8599x" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8599x-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8599x-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c66a3ca9-ff79-4bff-ac6d-52b470a1658e", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 4, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41", Pod:"coredns-668d6bf9bc-8599x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7dd856553ff", MAC:"b6:3b:4a:0e:26:ff", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:05:10.628453 containerd[1640]: 2025-11-01 10:05:10.622 [INFO][4305] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" Namespace="kube-system" Pod="coredns-668d6bf9bc-8599x" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8599x-eth0" Nov 1 10:05:10.629326 containerd[1640]: time="2025-11-01T10:05:10.629209838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 10:05:10.651858 containerd[1640]: time="2025-11-01T10:05:10.651728774Z" level=info msg="connecting to shim 9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41" address="unix:///run/containerd/s/d64ad99d3123cdc3dd2112369d49468ec92b588c521d88a2f9587a7eb7eee6e3" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:05:10.681242 systemd[1]: Started cri-containerd-9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41.scope - libcontainer container 9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41. Nov 1 10:05:10.693937 systemd-resolved[1373]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:05:10.724796 containerd[1640]: time="2025-11-01T10:05:10.724746266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8599x,Uid:c66a3ca9-ff79-4bff-ac6d-52b470a1658e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41\"" Nov 1 10:05:10.725494 kubelet[2775]: E1101 10:05:10.725468 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:05:10.727231 containerd[1640]: time="2025-11-01T10:05:10.727198557Z" level=info msg="CreateContainer within sandbox \"9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 10:05:10.732406 systemd-networkd[1526]: calide025374fed: Gained IPv6LL Nov 1 10:05:10.741331 containerd[1640]: 
time="2025-11-01T10:05:10.741288420Z" level=info msg="Container 9d1ba14b2fe30e10a70c25b78ce8332a2c55528a7522858176ab083d4a6731c7: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:05:10.746702 containerd[1640]: time="2025-11-01T10:05:10.746675296Z" level=info msg="CreateContainer within sandbox \"9ae4c7d875d73794e8b8712e779d2758ea1d35192696ac6f779ed9c65418cb41\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d1ba14b2fe30e10a70c25b78ce8332a2c55528a7522858176ab083d4a6731c7\"" Nov 1 10:05:10.747312 containerd[1640]: time="2025-11-01T10:05:10.747232872Z" level=info msg="StartContainer for \"9d1ba14b2fe30e10a70c25b78ce8332a2c55528a7522858176ab083d4a6731c7\"" Nov 1 10:05:10.748316 containerd[1640]: time="2025-11-01T10:05:10.748264618Z" level=info msg="connecting to shim 9d1ba14b2fe30e10a70c25b78ce8332a2c55528a7522858176ab083d4a6731c7" address="unix:///run/containerd/s/d64ad99d3123cdc3dd2112369d49468ec92b588c521d88a2f9587a7eb7eee6e3" protocol=ttrpc version=3 Nov 1 10:05:10.774317 systemd[1]: Started cri-containerd-9d1ba14b2fe30e10a70c25b78ce8332a2c55528a7522858176ab083d4a6731c7.scope - libcontainer container 9d1ba14b2fe30e10a70c25b78ce8332a2c55528a7522858176ab083d4a6731c7. 
Nov 1 10:05:10.809885 containerd[1640]: time="2025-11-01T10:05:10.809825908Z" level=info msg="StartContainer for \"9d1ba14b2fe30e10a70c25b78ce8332a2c55528a7522858176ab083d4a6731c7\" returns successfully" Nov 1 10:05:11.381213 kubelet[2775]: E1101 10:05:11.381094 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:05:11.381921 containerd[1640]: time="2025-11-01T10:05:11.381870182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lbx5t,Uid:90c5081d-f937-438e-bb8a-7ad343c3e65b,Namespace:kube-system,Attempt:0,}" Nov 1 10:05:11.491394 systemd-networkd[1526]: calia6159cd95b4: Link UP Nov 1 10:05:11.491656 systemd-networkd[1526]: calia6159cd95b4: Gained carrier Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.418 [INFO][4515] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.428 [INFO][4515] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--lbx5t-eth0 coredns-668d6bf9bc- kube-system 90c5081d-f937-438e-bb8a-7ad343c3e65b 814 0 2025-11-01 10:04:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-lbx5t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia6159cd95b4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" Namespace="kube-system" Pod="coredns-668d6bf9bc-lbx5t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lbx5t-" Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.428 [INFO][4515] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" Namespace="kube-system" Pod="coredns-668d6bf9bc-lbx5t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lbx5t-eth0" Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.452 [INFO][4530] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" HandleID="k8s-pod-network.8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" Workload="localhost-k8s-coredns--668d6bf9bc--lbx5t-eth0" Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.452 [INFO][4530] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" HandleID="k8s-pod-network.8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" Workload="localhost-k8s-coredns--668d6bf9bc--lbx5t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-lbx5t", "timestamp":"2025-11-01 10:05:11.45240322 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.452 [INFO][4530] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.452 [INFO][4530] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.452 [INFO][4530] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.458 [INFO][4530] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" host="localhost" Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.463 [INFO][4530] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.468 [INFO][4530] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.470 [INFO][4530] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.472 [INFO][4530] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.472 [INFO][4530] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" host="localhost" Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.474 [INFO][4530] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861 Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.478 [INFO][4530] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" host="localhost" Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.485 [INFO][4530] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" host="localhost" Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.485 [INFO][4530] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" host="localhost" Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.485 [INFO][4530] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:05:11.510993 containerd[1640]: 2025-11-01 10:05:11.485 [INFO][4530] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" HandleID="k8s-pod-network.8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" Workload="localhost-k8s-coredns--668d6bf9bc--lbx5t-eth0" Nov 1 10:05:11.511694 containerd[1640]: 2025-11-01 10:05:11.489 [INFO][4515] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" Namespace="kube-system" Pod="coredns-668d6bf9bc-lbx5t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lbx5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lbx5t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"90c5081d-f937-438e-bb8a-7ad343c3e65b", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 4, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-lbx5t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6159cd95b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:05:11.511694 containerd[1640]: 2025-11-01 10:05:11.489 [INFO][4515] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" Namespace="kube-system" Pod="coredns-668d6bf9bc-lbx5t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lbx5t-eth0" Nov 1 10:05:11.511694 containerd[1640]: 2025-11-01 10:05:11.489 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6159cd95b4 ContainerID="8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" Namespace="kube-system" Pod="coredns-668d6bf9bc-lbx5t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lbx5t-eth0" Nov 1 10:05:11.511694 containerd[1640]: 2025-11-01 10:05:11.492 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" Namespace="kube-system" Pod="coredns-668d6bf9bc-lbx5t" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lbx5t-eth0" Nov 1 10:05:11.511694 containerd[1640]: 2025-11-01 10:05:11.492 [INFO][4515] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" Namespace="kube-system" Pod="coredns-668d6bf9bc-lbx5t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lbx5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lbx5t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"90c5081d-f937-438e-bb8a-7ad343c3e65b", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 4, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861", Pod:"coredns-668d6bf9bc-lbx5t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6159cd95b4", MAC:"a6:49:cc:29:08:ac", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:05:11.511694 containerd[1640]: 2025-11-01 10:05:11.506 [INFO][4515] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" Namespace="kube-system" Pod="coredns-668d6bf9bc-lbx5t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lbx5t-eth0" Nov 1 10:05:11.535669 containerd[1640]: time="2025-11-01T10:05:11.535600947Z" level=info msg="connecting to shim 8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861" address="unix:///run/containerd/s/ddfcdfd988a34cac2ed3681554cf63a94d4d588be52245825bc28f9c62eced60" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:05:11.566253 systemd[1]: Started cri-containerd-8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861.scope - libcontainer container 8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861. 
Nov 1 10:05:11.579995 systemd-resolved[1373]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:05:11.587487 kubelet[2775]: E1101 10:05:11.587169 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:05:11.591065 kubelet[2775]: E1101 10:05:11.591017 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56f8446f94-wbw7g" podUID="29af33ad-9abc-4ff3-b520-e3177a680c27" Nov 1 10:05:11.619578 kubelet[2775]: I1101 10:05:11.619487 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8599x" podStartSLOduration=37.619468913 podStartE2EDuration="37.619468913s" podCreationTimestamp="2025-11-01 10:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:05:11.617544582 +0000 UTC m=+44.313810630" watchObservedRunningTime="2025-11-01 10:05:11.619468913 +0000 UTC m=+44.315734961" Nov 1 10:05:11.622298 containerd[1640]: time="2025-11-01T10:05:11.622240401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lbx5t,Uid:90c5081d-f937-438e-bb8a-7ad343c3e65b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861\"" Nov 1 10:05:11.623543 kubelet[2775]: E1101 10:05:11.623499 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:05:11.629662 containerd[1640]: time="2025-11-01T10:05:11.629603364Z" level=info msg="CreateContainer within sandbox \"8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 10:05:11.642375 containerd[1640]: time="2025-11-01T10:05:11.640936355Z" level=info msg="Container bf3ba0a06059aa7bff5d23f89cf8a41337a9dd8bb30cd41186335c8f9d28a63d: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:05:11.648841 containerd[1640]: time="2025-11-01T10:05:11.648804265Z" level=info msg="CreateContainer within sandbox \"8de2e600b5577e3e067c443fcb48b28b228f8ca047b4547a007577152f4c6861\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf3ba0a06059aa7bff5d23f89cf8a41337a9dd8bb30cd41186335c8f9d28a63d\"" Nov 1 10:05:11.649617 containerd[1640]: time="2025-11-01T10:05:11.649559842Z" level=info msg="StartContainer for \"bf3ba0a06059aa7bff5d23f89cf8a41337a9dd8bb30cd41186335c8f9d28a63d\"" Nov 1 10:05:11.650797 containerd[1640]: time="2025-11-01T10:05:11.650764943Z" level=info msg="connecting to shim bf3ba0a06059aa7bff5d23f89cf8a41337a9dd8bb30cd41186335c8f9d28a63d" address="unix:///run/containerd/s/ddfcdfd988a34cac2ed3681554cf63a94d4d588be52245825bc28f9c62eced60" protocol=ttrpc version=3 Nov 1 10:05:11.679298 systemd[1]: Started cri-containerd-bf3ba0a06059aa7bff5d23f89cf8a41337a9dd8bb30cd41186335c8f9d28a63d.scope - libcontainer container bf3ba0a06059aa7bff5d23f89cf8a41337a9dd8bb30cd41186335c8f9d28a63d. 
Nov 1 10:05:11.709621 containerd[1640]: time="2025-11-01T10:05:11.709582183Z" level=info msg="StartContainer for \"bf3ba0a06059aa7bff5d23f89cf8a41337a9dd8bb30cd41186335c8f9d28a63d\" returns successfully" Nov 1 10:05:11.781537 kubelet[2775]: I1101 10:05:11.781471 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 10:05:11.781945 kubelet[2775]: E1101 10:05:11.781923 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:05:11.884528 systemd-networkd[1526]: cali7dd856553ff: Gained IPv6LL Nov 1 10:05:12.332375 systemd-networkd[1526]: calibc82e1970f7: Gained IPv6LL Nov 1 10:05:12.385583 containerd[1640]: time="2025-11-01T10:05:12.385286507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5mnnb,Uid:497638cc-4034-4ffe-9443-48cd7ad72cdc,Namespace:calico-system,Attempt:0,}" Nov 1 10:05:12.592964 kubelet[2775]: E1101 10:05:12.592813 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:05:12.593794 kubelet[2775]: E1101 10:05:12.593757 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:05:12.594579 kubelet[2775]: E1101 10:05:12.594509 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:05:12.618336 kubelet[2775]: I1101 10:05:12.618258 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lbx5t" podStartSLOduration=38.618234004 podStartE2EDuration="38.618234004s" podCreationTimestamp="2025-11-01 10:04:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:05:12.615618608 +0000 UTC m=+45.311884656" watchObservedRunningTime="2025-11-01 10:05:12.618234004 +0000 UTC m=+45.314500072" Nov 1 10:05:12.624279 systemd-networkd[1526]: cali346fa986968: Link UP Nov 1 10:05:12.625255 systemd-networkd[1526]: cali346fa986968: Gained carrier Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.478 [INFO][4674] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--5mnnb-eth0 goldmane-666569f655- calico-system 497638cc-4034-4ffe-9443-48cd7ad72cdc 818 0 2025-11-01 10:04:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-5mnnb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali346fa986968 [] [] }} ContainerID="a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" Namespace="calico-system" Pod="goldmane-666569f655-5mnnb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5mnnb-" Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.479 [INFO][4674] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" Namespace="calico-system" Pod="goldmane-666569f655-5mnnb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5mnnb-eth0" Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.550 [INFO][4687] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" HandleID="k8s-pod-network.a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" 
Workload="localhost-k8s-goldmane--666569f655--5mnnb-eth0" Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.551 [INFO][4687] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" HandleID="k8s-pod-network.a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" Workload="localhost-k8s-goldmane--666569f655--5mnnb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a57a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-5mnnb", "timestamp":"2025-11-01 10:05:12.550364725 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.551 [INFO][4687] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.551 [INFO][4687] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.551 [INFO][4687] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.559 [INFO][4687] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" host="localhost" Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.564 [INFO][4687] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.570 [INFO][4687] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.574 [INFO][4687] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.577 [INFO][4687] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.577 [INFO][4687] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" host="localhost" Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.579 [INFO][4687] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4 Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.587 [INFO][4687] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" host="localhost" Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.609 [INFO][4687] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" host="localhost" Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.609 [INFO][4687] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" host="localhost" Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.609 [INFO][4687] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:05:12.650908 containerd[1640]: 2025-11-01 10:05:12.609 [INFO][4687] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" HandleID="k8s-pod-network.a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" Workload="localhost-k8s-goldmane--666569f655--5mnnb-eth0" Nov 1 10:05:12.651621 containerd[1640]: 2025-11-01 10:05:12.619 [INFO][4674] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" Namespace="calico-system" Pod="goldmane-666569f655-5mnnb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5mnnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--5mnnb-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"497638cc-4034-4ffe-9443-48cd7ad72cdc", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 4, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-5mnnb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali346fa986968", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:05:12.651621 containerd[1640]: 2025-11-01 10:05:12.619 [INFO][4674] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" Namespace="calico-system" Pod="goldmane-666569f655-5mnnb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5mnnb-eth0" Nov 1 10:05:12.651621 containerd[1640]: 2025-11-01 10:05:12.619 [INFO][4674] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali346fa986968 ContainerID="a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" Namespace="calico-system" Pod="goldmane-666569f655-5mnnb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5mnnb-eth0" Nov 1 10:05:12.651621 containerd[1640]: 2025-11-01 10:05:12.625 [INFO][4674] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" Namespace="calico-system" Pod="goldmane-666569f655-5mnnb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5mnnb-eth0" Nov 1 10:05:12.651621 containerd[1640]: 2025-11-01 10:05:12.625 [INFO][4674] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" Namespace="calico-system" Pod="goldmane-666569f655-5mnnb" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5mnnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--5mnnb-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"497638cc-4034-4ffe-9443-48cd7ad72cdc", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 4, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4", Pod:"goldmane-666569f655-5mnnb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali346fa986968", MAC:"66:5d:ad:b0:9a:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:05:12.651621 containerd[1640]: 2025-11-01 10:05:12.640 [INFO][4674] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" Namespace="calico-system" Pod="goldmane-666569f655-5mnnb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5mnnb-eth0" Nov 1 10:05:12.716619 containerd[1640]: time="2025-11-01T10:05:12.716507469Z" level=info msg="connecting to shim 
a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4" address="unix:///run/containerd/s/4541ec689835fa74b68bed50c0699774fbaa58ebe13669d7e32698acd98700c6" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:05:12.769611 systemd[1]: Started cri-containerd-a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4.scope - libcontainer container a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4. Nov 1 10:05:12.800307 systemd-resolved[1373]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:05:12.844542 systemd-networkd[1526]: calia6159cd95b4: Gained IPv6LL Nov 1 10:05:12.887810 containerd[1640]: time="2025-11-01T10:05:12.887643321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5mnnb,Uid:497638cc-4034-4ffe-9443-48cd7ad72cdc,Namespace:calico-system,Attempt:0,} returns sandbox id \"a183672d5cfa6f8cd1898a857a0c9a1d6d8ef6f095d68ba1515794c0569874d4\"" Nov 1 10:05:12.920322 systemd-networkd[1526]: vxlan.calico: Link UP Nov 1 10:05:12.920332 systemd-networkd[1526]: vxlan.calico: Gained carrier Nov 1 10:05:13.383320 containerd[1640]: time="2025-11-01T10:05:13.383227555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f8446f94-kclf6,Uid:fc376135-15c2-4563-9e6f-3663c5522932,Namespace:calico-apiserver,Attempt:0,}" Nov 1 10:05:13.459931 systemd[1]: Started sshd@8-10.0.0.55:22-10.0.0.1:35372.service - OpenSSH per-connection server daemon (10.0.0.1:35372). Nov 1 10:05:13.582309 sshd[4861]: Accepted publickey for core from 10.0.0.1 port 35372 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:05:13.584388 sshd-session[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:05:13.594079 systemd-logind[1618]: New session 9 of user core. Nov 1 10:05:13.603464 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 1 10:05:13.619755 kubelet[2775]: E1101 10:05:13.619591 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:05:13.620791 kubelet[2775]: E1101 10:05:13.619672 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:05:13.661693 systemd-networkd[1526]: cali1a25ad90194: Link UP Nov 1 10:05:13.663764 systemd-networkd[1526]: cali1a25ad90194: Gained carrier Nov 1 10:05:13.692696 containerd[1640]: time="2025-11-01T10:05:13.692636657Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:13.708615 containerd[1640]: time="2025-11-01T10:05:13.708518851Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 10:05:13.708976 containerd[1640]: time="2025-11-01T10:05:13.708844552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:13.709317 kubelet[2775]: E1101 10:05:13.709267 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 10:05:13.709856 kubelet[2775]: E1101 10:05:13.709499 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 10:05:13.710431 containerd[1640]: time="2025-11-01T10:05:13.710403857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 10:05:13.721653 kubelet[2775]: E1101 10:05:13.721507 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gl2p9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64b6f54dbf-5vmmj_calico-system(11cb09fe-0906-4aa9-80bd-422fc601a30c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.498 [INFO][4849] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--56f8446f94--kclf6-eth0 calico-apiserver-56f8446f94- calico-apiserver fc376135-15c2-4563-9e6f-3663c5522932 821 0 2025-11-01 10:04:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56f8446f94 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-56f8446f94-kclf6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1a25ad90194 [] [] }} ContainerID="219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" Namespace="calico-apiserver" Pod="calico-apiserver-56f8446f94-kclf6" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f8446f94--kclf6-" Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.498 [INFO][4849] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" Namespace="calico-apiserver" Pod="calico-apiserver-56f8446f94-kclf6" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f8446f94--kclf6-eth0" Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.547 [INFO][4867] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" HandleID="k8s-pod-network.219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" Workload="localhost-k8s-calico--apiserver--56f8446f94--kclf6-eth0" Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.547 [INFO][4867] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" HandleID="k8s-pod-network.219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" Workload="localhost-k8s-calico--apiserver--56f8446f94--kclf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ed30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-56f8446f94-kclf6", "timestamp":"2025-11-01 10:05:13.547475077 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.548 [INFO][4867] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.549 [INFO][4867] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.549 [INFO][4867] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.574 [INFO][4867] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" host="localhost" Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.590 [INFO][4867] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.600 [INFO][4867] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.603 [INFO][4867] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.608 [INFO][4867] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.610 [INFO][4867] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" host="localhost" Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.620 [INFO][4867] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.628 [INFO][4867] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" host="localhost" Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.641 [INFO][4867] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" host="localhost" Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.641 [INFO][4867] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" host="localhost" Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.642 [INFO][4867] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:05:13.721949 containerd[1640]: 2025-11-01 10:05:13.642 [INFO][4867] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" HandleID="k8s-pod-network.219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" Workload="localhost-k8s-calico--apiserver--56f8446f94--kclf6-eth0" Nov 1 10:05:13.722661 containerd[1640]: 2025-11-01 10:05:13.651 [INFO][4849] cni-plugin/k8s.go 418: Populated endpoint ContainerID="219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" Namespace="calico-apiserver" Pod="calico-apiserver-56f8446f94-kclf6" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f8446f94--kclf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56f8446f94--kclf6-eth0", GenerateName:"calico-apiserver-56f8446f94-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc376135-15c2-4563-9e6f-3663c5522932", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 4, 42, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f8446f94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-56f8446f94-kclf6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a25ad90194", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:05:13.722661 containerd[1640]: 2025-11-01 10:05:13.651 [INFO][4849] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" Namespace="calico-apiserver" Pod="calico-apiserver-56f8446f94-kclf6" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f8446f94--kclf6-eth0" Nov 1 10:05:13.722661 containerd[1640]: 2025-11-01 10:05:13.651 [INFO][4849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a25ad90194 ContainerID="219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" Namespace="calico-apiserver" Pod="calico-apiserver-56f8446f94-kclf6" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f8446f94--kclf6-eth0" Nov 1 10:05:13.722661 containerd[1640]: 2025-11-01 10:05:13.664 [INFO][4849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" Namespace="calico-apiserver" Pod="calico-apiserver-56f8446f94-kclf6" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f8446f94--kclf6-eth0" Nov 1 10:05:13.722661 containerd[1640]: 2025-11-01 10:05:13.666 [INFO][4849] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" Namespace="calico-apiserver" Pod="calico-apiserver-56f8446f94-kclf6" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f8446f94--kclf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56f8446f94--kclf6-eth0", GenerateName:"calico-apiserver-56f8446f94-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc376135-15c2-4563-9e6f-3663c5522932", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 4, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f8446f94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea", Pod:"calico-apiserver-56f8446f94-kclf6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a25ad90194", MAC:"f2:d6:81:30:1a:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:05:13.722661 containerd[1640]: 2025-11-01 10:05:13.713 [INFO][4849] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" Namespace="calico-apiserver" Pod="calico-apiserver-56f8446f94-kclf6" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f8446f94--kclf6-eth0" Nov 1 10:05:13.724612 kubelet[2775]: E1101 10:05:13.724551 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b6f54dbf-5vmmj" podUID="11cb09fe-0906-4aa9-80bd-422fc601a30c" Nov 1 10:05:13.754670 containerd[1640]: time="2025-11-01T10:05:13.754602850Z" level=info msg="connecting to shim 219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea" address="unix:///run/containerd/s/3c5a099ac61477cb0448a5051d8b9a05fc6b8771768aef5073b91b13d58ec914" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:05:13.768620 sshd[4877]: Connection closed by 10.0.0.1 port 35372 Nov 1 10:05:13.768985 sshd-session[4861]: pam_unix(sshd:session): session closed for user core Nov 1 10:05:13.774368 systemd[1]: sshd@8-10.0.0.55:22-10.0.0.1:35372.service: Deactivated successfully. Nov 1 10:05:13.777307 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 10:05:13.779975 systemd-logind[1618]: Session 9 logged out. Waiting for processes to exit. Nov 1 10:05:13.781753 systemd-logind[1618]: Removed session 9. 
Nov 1 10:05:13.810301 systemd[1]: Started cri-containerd-219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea.scope - libcontainer container 219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea. Nov 1 10:05:13.834478 systemd-resolved[1373]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:05:13.889370 containerd[1640]: time="2025-11-01T10:05:13.889301442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f8446f94-kclf6,Uid:fc376135-15c2-4563-9e6f-3663c5522932,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"219635f2fb5140a8c1aa63aed824945f8f5e31e2d9024f0981bcd9b36c6cf1ea\"" Nov 1 10:05:13.932485 systemd-networkd[1526]: cali346fa986968: Gained IPv6LL Nov 1 10:05:14.252393 systemd-networkd[1526]: vxlan.calico: Gained IPv6LL Nov 1 10:05:14.381562 containerd[1640]: time="2025-11-01T10:05:14.381439084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zlp4v,Uid:5e9801c6-fe95-4f67-a365-4280796e7e3e,Namespace:calico-system,Attempt:0,}" Nov 1 10:05:14.498180 systemd-networkd[1526]: calibd1d27f3600: Link UP Nov 1 10:05:14.498850 systemd-networkd[1526]: calibd1d27f3600: Gained carrier Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.424 [INFO][4947] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--zlp4v-eth0 csi-node-driver- calico-system 5e9801c6-fe95-4f67-a365-4280796e7e3e 711 0 2025-11-01 10:04:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-zlp4v eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibd1d27f3600 [] 
[] }} ContainerID="6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" Namespace="calico-system" Pod="csi-node-driver-zlp4v" WorkloadEndpoint="localhost-k8s-csi--node--driver--zlp4v-" Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.424 [INFO][4947] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" Namespace="calico-system" Pod="csi-node-driver-zlp4v" WorkloadEndpoint="localhost-k8s-csi--node--driver--zlp4v-eth0" Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.453 [INFO][4962] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" HandleID="k8s-pod-network.6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" Workload="localhost-k8s-csi--node--driver--zlp4v-eth0" Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.453 [INFO][4962] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" HandleID="k8s-pod-network.6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" Workload="localhost-k8s-csi--node--driver--zlp4v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7290), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-zlp4v", "timestamp":"2025-11-01 10:05:14.4533131 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.453 [INFO][4962] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.453 [INFO][4962] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.453 [INFO][4962] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.460 [INFO][4962] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" host="localhost" Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.468 [INFO][4962] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.473 [INFO][4962] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.475 [INFO][4962] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.478 [INFO][4962] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.478 [INFO][4962] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" host="localhost" Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.479 [INFO][4962] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.483 [INFO][4962] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" host="localhost" Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.491 [INFO][4962] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" host="localhost" Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.491 [INFO][4962] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" host="localhost" Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.491 [INFO][4962] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:05:14.516264 containerd[1640]: 2025-11-01 10:05:14.491 [INFO][4962] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" HandleID="k8s-pod-network.6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" Workload="localhost-k8s-csi--node--driver--zlp4v-eth0" Nov 1 10:05:14.517001 containerd[1640]: 2025-11-01 10:05:14.495 [INFO][4947] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" Namespace="calico-system" Pod="csi-node-driver-zlp4v" WorkloadEndpoint="localhost-k8s-csi--node--driver--zlp4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zlp4v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e9801c6-fe95-4f67-a365-4280796e7e3e", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 4, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-zlp4v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd1d27f3600", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:05:14.517001 containerd[1640]: 2025-11-01 10:05:14.495 [INFO][4947] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" Namespace="calico-system" Pod="csi-node-driver-zlp4v" WorkloadEndpoint="localhost-k8s-csi--node--driver--zlp4v-eth0" Nov 1 10:05:14.517001 containerd[1640]: 2025-11-01 10:05:14.495 [INFO][4947] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd1d27f3600 ContainerID="6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" Namespace="calico-system" Pod="csi-node-driver-zlp4v" WorkloadEndpoint="localhost-k8s-csi--node--driver--zlp4v-eth0" Nov 1 10:05:14.517001 containerd[1640]: 2025-11-01 10:05:14.498 [INFO][4947] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" Namespace="calico-system" Pod="csi-node-driver-zlp4v" WorkloadEndpoint="localhost-k8s-csi--node--driver--zlp4v-eth0" Nov 1 10:05:14.517001 containerd[1640]: 2025-11-01 10:05:14.499 [INFO][4947] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" 
Namespace="calico-system" Pod="csi-node-driver-zlp4v" WorkloadEndpoint="localhost-k8s-csi--node--driver--zlp4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zlp4v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e9801c6-fe95-4f67-a365-4280796e7e3e", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 4, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c", Pod:"csi-node-driver-zlp4v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd1d27f3600", MAC:"a6:d3:a1:0d:9c:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:05:14.517001 containerd[1640]: 2025-11-01 10:05:14.510 [INFO][4947] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" Namespace="calico-system" Pod="csi-node-driver-zlp4v" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--zlp4v-eth0" Nov 1 10:05:14.545129 containerd[1640]: time="2025-11-01T10:05:14.545014550Z" level=info msg="connecting to shim 6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c" address="unix:///run/containerd/s/33c87dfa67b6fb0c4b9d4edab8a6b1d1c995680a678dfb11caf3a2f2b74eb773" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:05:14.581427 systemd[1]: Started cri-containerd-6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c.scope - libcontainer container 6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c. Nov 1 10:05:14.597153 systemd-resolved[1373]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:05:14.623584 kubelet[2775]: E1101 10:05:14.623323 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:05:14.623584 kubelet[2775]: E1101 10:05:14.623461 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b6f54dbf-5vmmj" podUID="11cb09fe-0906-4aa9-80bd-422fc601a30c" Nov 1 10:05:14.627032 containerd[1640]: time="2025-11-01T10:05:14.626974296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zlp4v,Uid:5e9801c6-fe95-4f67-a365-4280796e7e3e,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b4d711f10a750ba90a365c32d9b6268b45c12d5bad71394e756e8c85bde660c\"" Nov 1 10:05:15.212694 systemd-networkd[1526]: cali1a25ad90194: Gained 
IPv6LL Nov 1 10:05:15.220394 containerd[1640]: time="2025-11-01T10:05:15.220331441Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:15.221812 containerd[1640]: time="2025-11-01T10:05:15.221740574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 10:05:15.221812 containerd[1640]: time="2025-11-01T10:05:15.221798873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:15.222153 kubelet[2775]: E1101 10:05:15.222062 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:05:15.222253 kubelet[2775]: E1101 10:05:15.222169 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:05:15.222589 kubelet[2775]: E1101 10:05:15.222506 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hg4sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5mnnb_calico-system(497638cc-4034-4ffe-9443-48cd7ad72cdc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:15.223592 containerd[1640]: time="2025-11-01T10:05:15.223561851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:05:15.223851 kubelet[2775]: E1101 10:05:15.223770 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5mnnb" podUID="497638cc-4034-4ffe-9443-48cd7ad72cdc" Nov 1 10:05:15.624815 kubelet[2775]: E1101 10:05:15.624166 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5mnnb" podUID="497638cc-4034-4ffe-9443-48cd7ad72cdc" Nov 1 10:05:15.784996 containerd[1640]: time="2025-11-01T10:05:15.784916933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:15.786243 containerd[1640]: time="2025-11-01T10:05:15.786199900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:05:15.786321 containerd[1640]: time="2025-11-01T10:05:15.786302793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:15.786582 kubelet[2775]: E1101 10:05:15.786534 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:05:15.786694 kubelet[2775]: E1101 10:05:15.786600 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:05:15.787157 kubelet[2775]: E1101 10:05:15.787039 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfvvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-56f8446f94-kclf6_calico-apiserver(fc376135-15c2-4563-9e6f-3663c5522932): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:15.787333 containerd[1640]: time="2025-11-01T10:05:15.787180800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 10:05:15.789063 kubelet[2775]: E1101 10:05:15.789007 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56f8446f94-kclf6" podUID="fc376135-15c2-4563-9e6f-3663c5522932" Nov 1 10:05:16.208232 containerd[1640]: time="2025-11-01T10:05:16.208159833Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:16.209720 containerd[1640]: time="2025-11-01T10:05:16.209669945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 10:05:16.209796 containerd[1640]: time="2025-11-01T10:05:16.209745127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:16.210024 kubelet[2775]: E1101 10:05:16.209955 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:05:16.210091 kubelet[2775]: 
E1101 10:05:16.210025 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:05:16.210274 kubelet[2775]: E1101 10:05:16.210221 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dh4zp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,Localhost
Profile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zlp4v_calico-system(5e9801c6-fe95-4f67-a365-4280796e7e3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:16.212334 containerd[1640]: time="2025-11-01T10:05:16.212288358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 10:05:16.364264 systemd-networkd[1526]: calibd1d27f3600: Gained IPv6LL Nov 1 10:05:16.620730 containerd[1640]: time="2025-11-01T10:05:16.620672208Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:16.625157 kubelet[2775]: E1101 10:05:16.625076 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56f8446f94-kclf6" podUID="fc376135-15c2-4563-9e6f-3663c5522932" Nov 1 10:05:16.630585 containerd[1640]: time="2025-11-01T10:05:16.630528427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 10:05:16.630762 kubelet[2775]: E1101 10:05:16.630717 2775 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:05:16.630815 kubelet[2775]: E1101 10:05:16.630760 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:05:16.630966 kubelet[2775]: E1101 10:05:16.630890 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dh4zp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},}
,LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zlp4v_calico-system(5e9801c6-fe95-4f67-a365-4280796e7e3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:16.632387 kubelet[2775]: E1101 10:05:16.632135 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zlp4v" podUID="5e9801c6-fe95-4f67-a365-4280796e7e3e" Nov 1 10:05:16.632497 containerd[1640]: time="2025-11-01T10:05:16.630599961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 1 
10:05:17.630038 kubelet[2775]: E1101 10:05:17.629216 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zlp4v" podUID="5e9801c6-fe95-4f67-a365-4280796e7e3e" Nov 1 10:05:18.787560 systemd[1]: Started sshd@9-10.0.0.55:22-10.0.0.1:35376.service - OpenSSH per-connection server daemon (10.0.0.1:35376). Nov 1 10:05:18.853140 sshd[5035]: Accepted publickey for core from 10.0.0.1 port 35376 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:05:18.855388 sshd-session[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:05:18.861123 systemd-logind[1618]: New session 10 of user core. Nov 1 10:05:18.870392 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 10:05:18.969916 sshd[5038]: Connection closed by 10.0.0.1 port 35376 Nov 1 10:05:18.970298 sshd-session[5035]: pam_unix(sshd:session): session closed for user core Nov 1 10:05:18.975297 systemd[1]: sshd@9-10.0.0.55:22-10.0.0.1:35376.service: Deactivated successfully. Nov 1 10:05:18.978527 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 10:05:18.979593 systemd-logind[1618]: Session 10 logged out. Waiting for processes to exit. 
Nov 1 10:05:18.981612 systemd-logind[1618]: Removed session 10. Nov 1 10:05:20.382069 containerd[1640]: time="2025-11-01T10:05:20.382012025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 10:05:20.718988 containerd[1640]: time="2025-11-01T10:05:20.718825218Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:20.726331 containerd[1640]: time="2025-11-01T10:05:20.726282305Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 10:05:20.726474 containerd[1640]: time="2025-11-01T10:05:20.726395418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:20.726604 kubelet[2775]: E1101 10:05:20.726552 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:05:20.726967 kubelet[2775]: E1101 10:05:20.726614 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:05:20.726967 kubelet[2775]: E1101 10:05:20.726744 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fbee598fef644ce7a5107aaf1ad88c55,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2gb78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cc77b7fcd-hckvh_calico-system(7d67457e-e809-4b44-b320-e1c49fbcfb7c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:20.728875 containerd[1640]: time="2025-11-01T10:05:20.728844471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 10:05:21.056851 containerd[1640]: 
time="2025-11-01T10:05:21.056766386Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:21.058232 containerd[1640]: time="2025-11-01T10:05:21.058156833Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 10:05:21.058340 containerd[1640]: time="2025-11-01T10:05:21.058242915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:21.058564 kubelet[2775]: E1101 10:05:21.058462 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:05:21.058649 kubelet[2775]: E1101 10:05:21.058576 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:05:21.058860 kubelet[2775]: E1101 10:05:21.058785 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2gb78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cc77b7fcd-hckvh_calico-system(7d67457e-e809-4b44-b320-e1c49fbcfb7c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:21.060319 kubelet[2775]: E1101 10:05:21.060091 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cc77b7fcd-hckvh" podUID="7d67457e-e809-4b44-b320-e1c49fbcfb7c" Nov 1 10:05:23.992246 systemd[1]: Started sshd@10-10.0.0.55:22-10.0.0.1:46700.service - OpenSSH per-connection server daemon (10.0.0.1:46700). Nov 1 10:05:24.046786 sshd[5062]: Accepted publickey for core from 10.0.0.1 port 46700 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:05:24.048611 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:05:24.053404 systemd-logind[1618]: New session 11 of user core. Nov 1 10:05:24.071253 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 10:05:24.139392 sshd[5065]: Connection closed by 10.0.0.1 port 46700 Nov 1 10:05:24.139684 sshd-session[5062]: pam_unix(sshd:session): session closed for user core Nov 1 10:05:24.143500 systemd[1]: sshd@10-10.0.0.55:22-10.0.0.1:46700.service: Deactivated successfully. Nov 1 10:05:24.145317 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 10:05:24.146159 systemd-logind[1618]: Session 11 logged out. Waiting for processes to exit. Nov 1 10:05:24.147177 systemd-logind[1618]: Removed session 11. 
Nov 1 10:05:26.383777 containerd[1640]: time="2025-11-01T10:05:26.383668418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:05:26.781156 containerd[1640]: time="2025-11-01T10:05:26.780941030Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:26.804317 containerd[1640]: time="2025-11-01T10:05:26.803964042Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:05:26.804317 containerd[1640]: time="2025-11-01T10:05:26.804167884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:26.804561 kubelet[2775]: E1101 10:05:26.804385 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:05:26.804561 kubelet[2775]: E1101 10:05:26.804466 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:05:26.805098 kubelet[2775]: E1101 10:05:26.804626 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89b9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56f8446f94-wbw7g_calico-apiserver(29af33ad-9abc-4ff3-b520-e3177a680c27): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:26.805898 kubelet[2775]: E1101 10:05:26.805856 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56f8446f94-wbw7g" podUID="29af33ad-9abc-4ff3-b520-e3177a680c27" Nov 1 10:05:28.381983 containerd[1640]: time="2025-11-01T10:05:28.381904191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:05:28.714636 containerd[1640]: time="2025-11-01T10:05:28.714472455Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:28.731272 containerd[1640]: time="2025-11-01T10:05:28.731215728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:05:28.731272 containerd[1640]: time="2025-11-01T10:05:28.731261975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:28.731591 kubelet[2775]: E1101 10:05:28.731526 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:05:28.731960 kubelet[2775]: E1101 10:05:28.731602 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:05:28.731960 kubelet[2775]: E1101 10:05:28.731905 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfvvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56f8446f94-kclf6_calico-apiserver(fc376135-15c2-4563-9e6f-3663c5522932): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:28.732341 containerd[1640]: time="2025-11-01T10:05:28.732317073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 10:05:28.733467 kubelet[2775]: E1101 10:05:28.733420 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56f8446f94-kclf6" podUID="fc376135-15c2-4563-9e6f-3663c5522932" Nov 1 10:05:29.024860 containerd[1640]: time="2025-11-01T10:05:29.024657737Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 
10:05:29.048392 containerd[1640]: time="2025-11-01T10:05:29.048269453Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 10:05:29.048611 containerd[1640]: time="2025-11-01T10:05:29.048270855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:29.048744 kubelet[2775]: E1101 10:05:29.048682 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 10:05:29.048807 kubelet[2775]: E1101 10:05:29.048748 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 10:05:29.048968 kubelet[2775]: E1101 10:05:29.048908 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gl2p9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64b6f54dbf-5vmmj_calico-system(11cb09fe-0906-4aa9-80bd-422fc601a30c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:29.050181 kubelet[2775]: E1101 10:05:29.050115 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b6f54dbf-5vmmj" podUID="11cb09fe-0906-4aa9-80bd-422fc601a30c" Nov 1 10:05:29.152298 systemd[1]: Started sshd@11-10.0.0.55:22-10.0.0.1:46712.service - OpenSSH per-connection server daemon (10.0.0.1:46712). 
Nov 1 10:05:29.213509 sshd[5083]: Accepted publickey for core from 10.0.0.1 port 46712 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:05:29.215258 sshd-session[5083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:05:29.219676 systemd-logind[1618]: New session 12 of user core. Nov 1 10:05:29.230380 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 10:05:29.497568 containerd[1640]: time="2025-11-01T10:05:29.497504526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 10:05:29.526121 sshd[5086]: Connection closed by 10.0.0.1 port 46712 Nov 1 10:05:29.526256 sshd-session[5083]: pam_unix(sshd:session): session closed for user core Nov 1 10:05:29.540085 systemd[1]: sshd@11-10.0.0.55:22-10.0.0.1:46712.service: Deactivated successfully. Nov 1 10:05:29.548852 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 10:05:29.553257 systemd-logind[1618]: Session 12 logged out. Waiting for processes to exit. Nov 1 10:05:29.562649 systemd[1]: Started sshd@12-10.0.0.55:22-10.0.0.1:46726.service - OpenSSH per-connection server daemon (10.0.0.1:46726). Nov 1 10:05:29.565616 systemd-logind[1618]: Removed session 12. Nov 1 10:05:29.651200 sshd[5100]: Accepted publickey for core from 10.0.0.1 port 46726 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:05:29.653880 sshd-session[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:05:29.663017 systemd-logind[1618]: New session 13 of user core. Nov 1 10:05:29.674510 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 1 10:05:29.828898 containerd[1640]: time="2025-11-01T10:05:29.828666998Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:29.836254 containerd[1640]: time="2025-11-01T10:05:29.835864438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 10:05:29.836254 containerd[1640]: time="2025-11-01T10:05:29.835928007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:29.839689 kubelet[2775]: E1101 10:05:29.839544 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:05:29.840498 kubelet[2775]: E1101 10:05:29.839701 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:05:29.840498 kubelet[2775]: E1101 10:05:29.840093 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hg4sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5mnnb_calico-system(497638cc-4034-4ffe-9443-48cd7ad72cdc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:29.841574 kubelet[2775]: E1101 10:05:29.841485 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5mnnb" podUID="497638cc-4034-4ffe-9443-48cd7ad72cdc" Nov 1 10:05:29.851577 sshd[5103]: Connection closed by 10.0.0.1 port 46726 Nov 1 10:05:29.853544 sshd-session[5100]: pam_unix(sshd:session): session closed for user core Nov 1 10:05:29.862938 systemd[1]: sshd@12-10.0.0.55:22-10.0.0.1:46726.service: Deactivated successfully. Nov 1 10:05:29.865292 systemd[1]: session-13.scope: Deactivated successfully. 
Nov 1 10:05:29.866331 systemd-logind[1618]: Session 13 logged out. Waiting for processes to exit. Nov 1 10:05:29.869513 systemd[1]: Started sshd@13-10.0.0.55:22-10.0.0.1:46736.service - OpenSSH per-connection server daemon (10.0.0.1:46736). Nov 1 10:05:29.870600 systemd-logind[1618]: Removed session 13. Nov 1 10:05:29.953977 sshd[5115]: Accepted publickey for core from 10.0.0.1 port 46736 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:05:29.955644 sshd-session[5115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:05:29.960245 systemd-logind[1618]: New session 14 of user core. Nov 1 10:05:29.972440 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 1 10:05:30.078658 sshd[5118]: Connection closed by 10.0.0.1 port 46736 Nov 1 10:05:30.078973 sshd-session[5115]: pam_unix(sshd:session): session closed for user core Nov 1 10:05:30.084604 systemd[1]: sshd@13-10.0.0.55:22-10.0.0.1:46736.service: Deactivated successfully. Nov 1 10:05:30.087212 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 10:05:30.088750 systemd-logind[1618]: Session 14 logged out. Waiting for processes to exit. Nov 1 10:05:30.090968 systemd-logind[1618]: Removed session 14. 
Nov 1 10:05:31.382976 containerd[1640]: time="2025-11-01T10:05:31.382910413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 10:05:31.735494 containerd[1640]: time="2025-11-01T10:05:31.735238074Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:31.737744 containerd[1640]: time="2025-11-01T10:05:31.737671819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 10:05:31.738064 containerd[1640]: time="2025-11-01T10:05:31.737804488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:31.738208 kubelet[2775]: E1101 10:05:31.738149 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:05:31.738636 kubelet[2775]: E1101 10:05:31.738237 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:05:31.738636 kubelet[2775]: E1101 10:05:31.738400 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dh4zp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zlp4v_calico-system(5e9801c6-fe95-4f67-a365-4280796e7e3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Nov 1 10:05:31.740522 containerd[1640]: time="2025-11-01T10:05:31.740464216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 10:05:32.054575 containerd[1640]: time="2025-11-01T10:05:32.054497013Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:32.055800 containerd[1640]: time="2025-11-01T10:05:32.055741208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 10:05:32.055873 containerd[1640]: time="2025-11-01T10:05:32.055802022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:32.056077 kubelet[2775]: E1101 10:05:32.056029 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:05:32.056165 kubelet[2775]: E1101 10:05:32.056088 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:05:32.056286 kubelet[2775]: E1101 10:05:32.056250 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dh4zp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zlp4v_calico-system(5e9801c6-fe95-4f67-a365-4280796e7e3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:32.057554 kubelet[2775]: E1101 10:05:32.057523 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zlp4v" podUID="5e9801c6-fe95-4f67-a365-4280796e7e3e" Nov 1 10:05:32.382727 kubelet[2775]: E1101 10:05:32.382570 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cc77b7fcd-hckvh" podUID="7d67457e-e809-4b44-b320-e1c49fbcfb7c" Nov 1 10:05:35.099818 systemd[1]: Started sshd@14-10.0.0.55:22-10.0.0.1:39158.service - OpenSSH per-connection server daemon (10.0.0.1:39158). 
Nov 1 10:05:35.156121 sshd[5145]: Accepted publickey for core from 10.0.0.1 port 39158 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:05:35.157398 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:05:35.161435 systemd-logind[1618]: New session 15 of user core. Nov 1 10:05:35.165254 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 10:05:35.232087 sshd[5148]: Connection closed by 10.0.0.1 port 39158 Nov 1 10:05:35.232398 sshd-session[5145]: pam_unix(sshd:session): session closed for user core Nov 1 10:05:35.236506 systemd[1]: sshd@14-10.0.0.55:22-10.0.0.1:39158.service: Deactivated successfully. Nov 1 10:05:35.238523 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 10:05:35.239476 systemd-logind[1618]: Session 15 logged out. Waiting for processes to exit. Nov 1 10:05:35.240602 systemd-logind[1618]: Removed session 15. Nov 1 10:05:36.381465 kubelet[2775]: E1101 10:05:36.381404 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:05:39.382136 kubelet[2775]: E1101 10:05:39.381914 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56f8446f94-wbw7g" podUID="29af33ad-9abc-4ff3-b520-e3177a680c27" Nov 1 10:05:40.246337 systemd[1]: Started sshd@15-10.0.0.55:22-10.0.0.1:38656.service - OpenSSH per-connection server daemon (10.0.0.1:38656). 
Nov 1 10:05:40.312122 sshd[5187]: Accepted publickey for core from 10.0.0.1 port 38656 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:05:40.313935 sshd-session[5187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:05:40.319072 systemd-logind[1618]: New session 16 of user core. Nov 1 10:05:40.325385 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 10:05:40.408970 sshd[5190]: Connection closed by 10.0.0.1 port 38656 Nov 1 10:05:40.409296 sshd-session[5187]: pam_unix(sshd:session): session closed for user core Nov 1 10:05:40.413767 systemd[1]: sshd@15-10.0.0.55:22-10.0.0.1:38656.service: Deactivated successfully. Nov 1 10:05:40.416638 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 10:05:40.418653 systemd-logind[1618]: Session 16 logged out. Waiting for processes to exit. Nov 1 10:05:40.421073 systemd-logind[1618]: Removed session 16. Nov 1 10:05:42.382140 kubelet[2775]: E1101 10:05:42.381790 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56f8446f94-kclf6" podUID="fc376135-15c2-4563-9e6f-3663c5522932" Nov 1 10:05:43.381735 kubelet[2775]: E1101 10:05:43.381679 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-5mnnb" podUID="497638cc-4034-4ffe-9443-48cd7ad72cdc" Nov 1 10:05:44.381718 kubelet[2775]: E1101 10:05:44.381642 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b6f54dbf-5vmmj" podUID="11cb09fe-0906-4aa9-80bd-422fc601a30c" Nov 1 10:05:45.427376 systemd[1]: Started sshd@16-10.0.0.55:22-10.0.0.1:38664.service - OpenSSH per-connection server daemon (10.0.0.1:38664). Nov 1 10:05:45.485779 sshd[5204]: Accepted publickey for core from 10.0.0.1 port 38664 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:05:45.487746 sshd-session[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:05:45.492918 systemd-logind[1618]: New session 17 of user core. Nov 1 10:05:45.501333 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 10:05:45.577164 sshd[5207]: Connection closed by 10.0.0.1 port 38664 Nov 1 10:05:45.577498 sshd-session[5204]: pam_unix(sshd:session): session closed for user core Nov 1 10:05:45.581425 systemd[1]: sshd@16-10.0.0.55:22-10.0.0.1:38664.service: Deactivated successfully. Nov 1 10:05:45.583340 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 10:05:45.584186 systemd-logind[1618]: Session 17 logged out. Waiting for processes to exit. Nov 1 10:05:45.585226 systemd-logind[1618]: Removed session 17. 
Nov 1 10:05:47.384501 kubelet[2775]: E1101 10:05:47.384432 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zlp4v" podUID="5e9801c6-fe95-4f67-a365-4280796e7e3e" Nov 1 10:05:47.388414 containerd[1640]: time="2025-11-01T10:05:47.388357006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 10:05:47.769800 containerd[1640]: time="2025-11-01T10:05:47.769631510Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:47.809081 containerd[1640]: time="2025-11-01T10:05:47.808996597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 10:05:47.809268 containerd[1640]: time="2025-11-01T10:05:47.809143749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:47.809372 kubelet[2775]: E1101 10:05:47.809314 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:05:47.809434 kubelet[2775]: E1101 10:05:47.809389 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:05:47.812680 kubelet[2775]: E1101 10:05:47.812614 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fbee598fef644ce7a5107aaf1ad88c55,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2gb78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevi
ces:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cc77b7fcd-hckvh_calico-system(7d67457e-e809-4b44-b320-e1c49fbcfb7c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:47.814762 containerd[1640]: time="2025-11-01T10:05:47.814735402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 10:05:48.116138 containerd[1640]: time="2025-11-01T10:05:48.116035802Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:48.172317 containerd[1640]: time="2025-11-01T10:05:48.172222048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 10:05:48.172317 containerd[1640]: time="2025-11-01T10:05:48.172302792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:48.172578 kubelet[2775]: E1101 10:05:48.172519 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:05:48.172626 kubelet[2775]: E1101 10:05:48.172589 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:05:48.172792 kubelet[2775]: E1101 10:05:48.172736 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2gb78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod whisker-cc77b7fcd-hckvh_calico-system(7d67457e-e809-4b44-b320-e1c49fbcfb7c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:48.173943 kubelet[2775]: E1101 10:05:48.173903 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cc77b7fcd-hckvh" podUID="7d67457e-e809-4b44-b320-e1c49fbcfb7c" Nov 1 10:05:50.594775 systemd[1]: Started sshd@17-10.0.0.55:22-10.0.0.1:36764.service - OpenSSH per-connection server daemon (10.0.0.1:36764). Nov 1 10:05:50.672571 sshd[5220]: Accepted publickey for core from 10.0.0.1 port 36764 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:05:50.674609 sshd-session[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:05:50.680457 systemd-logind[1618]: New session 18 of user core. Nov 1 10:05:50.695385 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 1 10:05:50.793210 sshd[5223]: Connection closed by 10.0.0.1 port 36764 Nov 1 10:05:50.793580 sshd-session[5220]: pam_unix(sshd:session): session closed for user core Nov 1 10:05:50.797392 systemd[1]: sshd@17-10.0.0.55:22-10.0.0.1:36764.service: Deactivated successfully. Nov 1 10:05:50.800025 systemd[1]: session-18.scope: Deactivated successfully. 
Nov 1 10:05:50.801904 systemd-logind[1618]: Session 18 logged out. Waiting for processes to exit. Nov 1 10:05:50.803935 systemd-logind[1618]: Removed session 18. Nov 1 10:05:51.380870 kubelet[2775]: E1101 10:05:51.380767 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:05:51.382404 containerd[1640]: time="2025-11-01T10:05:51.382249122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:05:51.725462 containerd[1640]: time="2025-11-01T10:05:51.725308824Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:51.727044 containerd[1640]: time="2025-11-01T10:05:51.726977151Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:05:51.727173 containerd[1640]: time="2025-11-01T10:05:51.727061392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:51.727293 kubelet[2775]: E1101 10:05:51.727241 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:05:51.727334 kubelet[2775]: E1101 10:05:51.727305 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:05:51.727536 kubelet[2775]: E1101 
10:05:51.727473 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89b9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56f8446f94-wbw7g_calico-apiserver(29af33ad-9abc-4ff3-b520-e3177a680c27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:51.728703 kubelet[2775]: E1101 10:05:51.728677 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56f8446f94-wbw7g" podUID="29af33ad-9abc-4ff3-b520-e3177a680c27" Nov 1 10:05:55.386884 containerd[1640]: time="2025-11-01T10:05:55.382922966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 10:05:55.755756 containerd[1640]: time="2025-11-01T10:05:55.755606677Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 
10:05:55.757123 containerd[1640]: time="2025-11-01T10:05:55.757023847Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 10:05:55.757123 containerd[1640]: time="2025-11-01T10:05:55.757080946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:55.757340 kubelet[2775]: E1101 10:05:55.757291 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:05:55.757716 kubelet[2775]: E1101 10:05:55.757347 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:05:55.757716 kubelet[2775]: E1101 10:05:55.757476 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hg4sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5mnnb_calico-system(497638cc-4034-4ffe-9443-48cd7ad72cdc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:55.758716 kubelet[2775]: E1101 10:05:55.758660 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5mnnb" podUID="497638cc-4034-4ffe-9443-48cd7ad72cdc" Nov 1 10:05:55.806323 systemd[1]: Started sshd@18-10.0.0.55:22-10.0.0.1:36774.service - OpenSSH per-connection server daemon (10.0.0.1:36774). 
Nov 1 10:05:55.851336 sshd[5246]: Accepted publickey for core from 10.0.0.1 port 36774 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:05:55.852597 sshd-session[5246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:05:55.857391 systemd-logind[1618]: New session 19 of user core. Nov 1 10:05:55.868261 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 1 10:05:55.947273 sshd[5249]: Connection closed by 10.0.0.1 port 36774 Nov 1 10:05:55.947667 sshd-session[5246]: pam_unix(sshd:session): session closed for user core Nov 1 10:05:55.965351 systemd[1]: sshd@18-10.0.0.55:22-10.0.0.1:36774.service: Deactivated successfully. Nov 1 10:05:55.967583 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 10:05:55.968353 systemd-logind[1618]: Session 19 logged out. Waiting for processes to exit. Nov 1 10:05:55.971566 systemd[1]: Started sshd@19-10.0.0.55:22-10.0.0.1:36788.service - OpenSSH per-connection server daemon (10.0.0.1:36788). Nov 1 10:05:55.972338 systemd-logind[1618]: Removed session 19. Nov 1 10:05:56.031191 sshd[5262]: Accepted publickey for core from 10.0.0.1 port 36788 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:05:56.032809 sshd-session[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:05:56.037535 systemd-logind[1618]: New session 20 of user core. Nov 1 10:05:56.048243 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 1 10:05:56.329771 sshd[5265]: Connection closed by 10.0.0.1 port 36788 Nov 1 10:05:56.330087 sshd-session[5262]: pam_unix(sshd:session): session closed for user core Nov 1 10:05:56.339744 systemd[1]: sshd@19-10.0.0.55:22-10.0.0.1:36788.service: Deactivated successfully. Nov 1 10:05:56.341650 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 10:05:56.342480 systemd-logind[1618]: Session 20 logged out. Waiting for processes to exit. 
Nov 1 10:05:56.345029 systemd[1]: Started sshd@20-10.0.0.55:22-10.0.0.1:36798.service - OpenSSH per-connection server daemon (10.0.0.1:36798). Nov 1 10:05:56.346135 systemd-logind[1618]: Removed session 20. Nov 1 10:05:56.382185 containerd[1640]: time="2025-11-01T10:05:56.382136407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:05:56.411315 sshd[5277]: Accepted publickey for core from 10.0.0.1 port 36798 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:05:56.412970 sshd-session[5277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:05:56.417604 systemd-logind[1618]: New session 21 of user core. Nov 1 10:05:56.428350 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 1 10:05:56.853686 sshd[5280]: Connection closed by 10.0.0.1 port 36798 Nov 1 10:05:56.854183 sshd-session[5277]: pam_unix(sshd:session): session closed for user core Nov 1 10:05:56.869283 systemd[1]: sshd@20-10.0.0.55:22-10.0.0.1:36798.service: Deactivated successfully. Nov 1 10:05:56.872678 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 10:05:56.874083 systemd-logind[1618]: Session 21 logged out. Waiting for processes to exit. Nov 1 10:05:56.878292 systemd[1]: Started sshd@21-10.0.0.55:22-10.0.0.1:36800.service - OpenSSH per-connection server daemon (10.0.0.1:36800). Nov 1 10:05:56.879422 systemd-logind[1618]: Removed session 21. Nov 1 10:05:56.934918 sshd[5300]: Accepted publickey for core from 10.0.0.1 port 36800 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:05:56.936227 sshd-session[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:05:56.940637 systemd-logind[1618]: New session 22 of user core. Nov 1 10:05:56.950243 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 1 10:05:56.998754 containerd[1640]: time="2025-11-01T10:05:56.998689517Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:57.003124 containerd[1640]: time="2025-11-01T10:05:57.002987170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:05:57.003124 containerd[1640]: time="2025-11-01T10:05:57.003039140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:57.003355 kubelet[2775]: E1101 10:05:57.003304 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:05:57.004364 kubelet[2775]: E1101 10:05:57.003367 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:05:57.004364 kubelet[2775]: E1101 10:05:57.004195 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfvvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56f8446f94-kclf6_calico-apiserver(fc376135-15c2-4563-9e6f-3663c5522932): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:57.005328 containerd[1640]: time="2025-11-01T10:05:57.003824860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 10:05:57.006332 kubelet[2775]: E1101 10:05:57.006295 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56f8446f94-kclf6" podUID="fc376135-15c2-4563-9e6f-3663c5522932" Nov 1 10:05:57.128365 sshd[5303]: Connection closed by 10.0.0.1 port 36800 Nov 1 10:05:57.128607 sshd-session[5300]: pam_unix(sshd:session): session closed for user core Nov 1 10:05:57.140919 systemd[1]: sshd@21-10.0.0.55:22-10.0.0.1:36800.service: Deactivated successfully. Nov 1 10:05:57.142763 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 10:05:57.143650 systemd-logind[1618]: Session 22 logged out. Waiting for processes to exit. Nov 1 10:05:57.146743 systemd[1]: Started sshd@22-10.0.0.55:22-10.0.0.1:36806.service - OpenSSH per-connection server daemon (10.0.0.1:36806). Nov 1 10:05:57.147435 systemd-logind[1618]: Removed session 22. Nov 1 10:05:57.202932 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 36806 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:05:57.204339 sshd-session[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:05:57.209631 systemd-logind[1618]: New session 23 of user core. Nov 1 10:05:57.217303 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 1 10:05:57.292000 sshd[5317]: Connection closed by 10.0.0.1 port 36806 Nov 1 10:05:57.292377 sshd-session[5314]: pam_unix(sshd:session): session closed for user core Nov 1 10:05:57.297355 systemd[1]: sshd@22-10.0.0.55:22-10.0.0.1:36806.service: Deactivated successfully. Nov 1 10:05:57.299644 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 10:05:57.300757 systemd-logind[1618]: Session 23 logged out. Waiting for processes to exit. Nov 1 10:05:57.302275 systemd-logind[1618]: Removed session 23. Nov 1 10:05:57.321276 containerd[1640]: time="2025-11-01T10:05:57.321229816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:05:57.322497 containerd[1640]: time="2025-11-01T10:05:57.322434277Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 10:05:57.322603 containerd[1640]: time="2025-11-01T10:05:57.322460297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 1 10:05:57.322738 kubelet[2775]: E1101 10:05:57.322700 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 10:05:57.322803 kubelet[2775]: E1101 10:05:57.322754 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 
10:05:57.322939 kubelet[2775]: E1101 10:05:57.322887 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gl2p9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64b6f54dbf-5vmmj_calico-system(11cb09fe-0906-4aa9-80bd-422fc601a30c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 10:05:57.324134 kubelet[2775]: E1101 10:05:57.324071 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b6f54dbf-5vmmj" podUID="11cb09fe-0906-4aa9-80bd-422fc601a30c" Nov 1 10:06:01.386751 containerd[1640]: time="2025-11-01T10:06:01.386467620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 10:06:01.709656 containerd[1640]: time="2025-11-01T10:06:01.709445267Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 
10:06:01.714224 containerd[1640]: time="2025-11-01T10:06:01.714149359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 10:06:01.714330 containerd[1640]: time="2025-11-01T10:06:01.714223912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 1 10:06:01.714437 kubelet[2775]: E1101 10:06:01.714365 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:06:01.714437 kubelet[2775]: E1101 10:06:01.714434 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:06:01.714861 kubelet[2775]: E1101 10:06:01.714560 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dh4zp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zlp4v_calico-system(5e9801c6-fe95-4f67-a365-4280796e7e3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Nov 1 10:06:01.716883 containerd[1640]: time="2025-11-01T10:06:01.716836684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 10:06:02.070077 containerd[1640]: time="2025-11-01T10:06:02.070010841Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:06:02.071311 containerd[1640]: time="2025-11-01T10:06:02.071259932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 10:06:02.071485 containerd[1640]: time="2025-11-01T10:06:02.071345305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 1 10:06:02.071594 kubelet[2775]: E1101 10:06:02.071539 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:06:02.071648 kubelet[2775]: E1101 10:06:02.071618 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:06:02.071818 kubelet[2775]: E1101 10:06:02.071777 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dh4zp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zlp4v_calico-system(5e9801c6-fe95-4f67-a365-4280796e7e3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 10:06:02.073011 kubelet[2775]: E1101 10:06:02.072968 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zlp4v" podUID="5e9801c6-fe95-4f67-a365-4280796e7e3e" Nov 1 10:06:02.310259 systemd[1]: Started sshd@23-10.0.0.55:22-10.0.0.1:36176.service - OpenSSH per-connection server daemon (10.0.0.1:36176). Nov 1 10:06:02.367811 sshd[5332]: Accepted publickey for core from 10.0.0.1 port 36176 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:06:02.369678 sshd-session[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:06:02.374524 systemd-logind[1618]: New session 24 of user core. Nov 1 10:06:02.386307 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 1 10:06:02.471622 sshd[5335]: Connection closed by 10.0.0.1 port 36176 Nov 1 10:06:02.471959 sshd-session[5332]: pam_unix(sshd:session): session closed for user core Nov 1 10:06:02.476895 systemd[1]: sshd@23-10.0.0.55:22-10.0.0.1:36176.service: Deactivated successfully. Nov 1 10:06:02.479162 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 10:06:02.480043 systemd-logind[1618]: Session 24 logged out. Waiting for processes to exit. Nov 1 10:06:02.481703 systemd-logind[1618]: Removed session 24. 
Nov 1 10:06:03.382871 kubelet[2775]: E1101 10:06:03.382704 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cc77b7fcd-hckvh" podUID="7d67457e-e809-4b44-b320-e1c49fbcfb7c" Nov 1 10:06:05.382090 kubelet[2775]: E1101 10:06:05.381998 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56f8446f94-wbw7g" podUID="29af33ad-9abc-4ff3-b520-e3177a680c27" Nov 1 10:06:07.382658 kubelet[2775]: E1101 10:06:07.382543 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:06:07.494756 systemd[1]: Started sshd@24-10.0.0.55:22-10.0.0.1:36192.service - OpenSSH per-connection server daemon (10.0.0.1:36192). 
Nov 1 10:06:07.571975 sshd[5350]: Accepted publickey for core from 10.0.0.1 port 36192 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:06:07.574152 sshd-session[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:06:07.579541 systemd-logind[1618]: New session 25 of user core. Nov 1 10:06:07.593329 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 1 10:06:07.680564 sshd[5353]: Connection closed by 10.0.0.1 port 36192 Nov 1 10:06:07.680860 sshd-session[5350]: pam_unix(sshd:session): session closed for user core Nov 1 10:06:07.686875 systemd[1]: sshd@24-10.0.0.55:22-10.0.0.1:36192.service: Deactivated successfully. Nov 1 10:06:07.689356 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 10:06:07.690234 systemd-logind[1618]: Session 25 logged out. Waiting for processes to exit. Nov 1 10:06:07.691884 systemd-logind[1618]: Removed session 25. Nov 1 10:06:08.632595 kubelet[2775]: E1101 10:06:08.632549 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:06:09.381528 kubelet[2775]: E1101 10:06:09.381223 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:06:09.384658 kubelet[2775]: E1101 10:06:09.384593 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56f8446f94-kclf6" podUID="fc376135-15c2-4563-9e6f-3663c5522932" 
Nov 1 10:06:10.384132 kubelet[2775]: E1101 10:06:10.383775 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5mnnb" podUID="497638cc-4034-4ffe-9443-48cd7ad72cdc" Nov 1 10:06:10.384720 kubelet[2775]: E1101 10:06:10.384597 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64b6f54dbf-5vmmj" podUID="11cb09fe-0906-4aa9-80bd-422fc601a30c" Nov 1 10:06:12.699014 systemd[1]: Started sshd@25-10.0.0.55:22-10.0.0.1:55564.service - OpenSSH per-connection server daemon (10.0.0.1:55564). Nov 1 10:06:12.755494 sshd[5392]: Accepted publickey for core from 10.0.0.1 port 55564 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:06:12.757689 sshd-session[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:06:12.762418 systemd-logind[1618]: New session 26 of user core. Nov 1 10:06:12.773247 systemd[1]: Started session-26.scope - Session 26 of User core. 
Nov 1 10:06:12.862998 sshd[5395]: Connection closed by 10.0.0.1 port 55564 Nov 1 10:06:12.863410 sshd-session[5392]: pam_unix(sshd:session): session closed for user core Nov 1 10:06:12.867778 systemd[1]: sshd@25-10.0.0.55:22-10.0.0.1:55564.service: Deactivated successfully. Nov 1 10:06:12.869921 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 10:06:12.870751 systemd-logind[1618]: Session 26 logged out. Waiting for processes to exit. Nov 1 10:06:12.872035 systemd-logind[1618]: Removed session 26. Nov 1 10:06:14.383814 kubelet[2775]: E1101 10:06:14.383746 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cc77b7fcd-hckvh" podUID="7d67457e-e809-4b44-b320-e1c49fbcfb7c" Nov 1 10:06:16.388744 kubelet[2775]: E1101 10:06:16.388676 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zlp4v" podUID="5e9801c6-fe95-4f67-a365-4280796e7e3e" Nov 1 10:06:17.385521 kubelet[2775]: E1101 10:06:17.385436 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56f8446f94-wbw7g" podUID="29af33ad-9abc-4ff3-b520-e3177a680c27" Nov 1 10:06:17.876909 systemd[1]: Started sshd@26-10.0.0.55:22-10.0.0.1:55576.service - OpenSSH per-connection server daemon (10.0.0.1:55576). Nov 1 10:06:17.947743 sshd[5408]: Accepted publickey for core from 10.0.0.1 port 55576 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:06:17.949532 sshd-session[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:06:17.954542 systemd-logind[1618]: New session 27 of user core. Nov 1 10:06:17.962277 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 1 10:06:18.047641 sshd[5411]: Connection closed by 10.0.0.1 port 55576 Nov 1 10:06:18.047978 sshd-session[5408]: pam_unix(sshd:session): session closed for user core Nov 1 10:06:18.053431 systemd[1]: sshd@26-10.0.0.55:22-10.0.0.1:55576.service: Deactivated successfully. Nov 1 10:06:18.055728 systemd[1]: session-27.scope: Deactivated successfully. Nov 1 10:06:18.056816 systemd-logind[1618]: Session 27 logged out. 
Waiting for processes to exit. Nov 1 10:06:18.058887 systemd-logind[1618]: Removed session 27.