Oct 29 00:33:04.711350 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 28 22:31:02 -00 2025 Oct 29 00:33:04.711383 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=54ef1c344b2a47697b32f3227bd37f41d37acb1889c1eaea33b22ce408b7b3ae Oct 29 00:33:04.711400 kernel: BIOS-provided physical RAM map: Oct 29 00:33:04.711407 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 29 00:33:04.711414 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 29 00:33:04.711421 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 29 00:33:04.711429 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Oct 29 00:33:04.711436 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 29 00:33:04.711446 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Oct 29 00:33:04.711453 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Oct 29 00:33:04.711466 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Oct 29 00:33:04.711473 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Oct 29 00:33:04.711480 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Oct 29 00:33:04.711487 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Oct 29 00:33:04.711496 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Oct 29 00:33:04.711510 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 29 00:33:04.711520 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Oct 29 00:33:04.711528 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Oct 29 00:33:04.711535 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Oct 29 00:33:04.711543 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Oct 29 00:33:04.711550 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Oct 29 00:33:04.711558 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 29 00:33:04.711565 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 29 00:33:04.711573 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 29 00:33:04.711580 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Oct 29 00:33:04.711599 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 29 00:33:04.711606 kernel: NX (Execute Disable) protection: active Oct 29 00:33:04.711614 kernel: APIC: Static calls initialized Oct 29 00:33:04.711621 kernel: e820: update [mem 0x9b319018-0x9b322c57] usable ==> usable Oct 29 00:33:04.711629 kernel: e820: update [mem 0x9b2dc018-0x9b318e57] usable ==> usable Oct 29 00:33:04.711636 kernel: extended physical RAM map: Oct 29 00:33:04.711644 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 29 00:33:04.711652 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 29 00:33:04.711659 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 29 00:33:04.711667 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Oct 29 00:33:04.711674 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 29 00:33:04.711696 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Oct 29 00:33:04.711704 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Oct 29 00:33:04.711711 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2dc017] usable Oct 29 00:33:04.711719 kernel: reserve setup_data: [mem 0x000000009b2dc018-0x000000009b318e57] usable Oct 29 00:33:04.711735 kernel: reserve setup_data: [mem 0x000000009b318e58-0x000000009b319017] usable Oct 29 00:33:04.711749 kernel: reserve setup_data: [mem 0x000000009b319018-0x000000009b322c57] usable Oct 29 00:33:04.711757 kernel: reserve setup_data: [mem 0x000000009b322c58-0x000000009bd3efff] usable Oct 29 00:33:04.711765 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Oct 29 00:33:04.711773 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Oct 29 00:33:04.711781 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Oct 29 00:33:04.711875 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Oct 29 00:33:04.711886 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 29 00:33:04.711894 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Oct 29 00:33:04.711912 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Oct 29 00:33:04.711920 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Oct 29 00:33:04.711928 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Oct 29 00:33:04.711936 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Oct 29 00:33:04.711943 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 29 00:33:04.711951 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 29 00:33:04.711959 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 29 00:33:04.711966 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Oct 29 00:33:04.711974 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 29 00:33:04.712002 kernel: efi: EFI v2.7 by EDK II Oct 29 00:33:04.712026 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Oct 29 00:33:04.712042 kernel: random: crng init done Oct 29 00:33:04.712052 kernel: efi: Remove mem150: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Oct 29 00:33:04.712060 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Oct 29 00:33:04.712070 kernel: secureboot: Secure boot disabled Oct 29 00:33:04.712078 kernel: SMBIOS 2.8 present. 
Oct 29 00:33:04.712086 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Oct 29 00:33:04.712093 kernel: DMI: Memory slots populated: 1/1 Oct 29 00:33:04.712101 kernel: Hypervisor detected: KVM Oct 29 00:33:04.712109 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Oct 29 00:33:04.712116 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 29 00:33:04.712124 kernel: kvm-clock: using sched offset of 5697335994 cycles Oct 29 00:33:04.712140 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 29 00:33:04.712148 kernel: tsc: Detected 2794.748 MHz processor Oct 29 00:33:04.712157 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 29 00:33:04.712165 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 29 00:33:04.712173 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Oct 29 00:33:04.712181 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 29 00:33:04.712190 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 29 00:33:04.712198 kernel: Using GB pages for direct mapping Oct 29 00:33:04.712228 kernel: ACPI: Early table checksum verification disabled Oct 29 00:33:04.712236 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Oct 29 00:33:04.712244 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Oct 29 00:33:04.712253 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:33:04.712261 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:33:04.712269 kernel: ACPI: FACS 0x000000009CBDD000 000040 Oct 29 00:33:04.712278 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:33:04.712293 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:33:04.712301 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:33:04.712309 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:33:04.712317 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 29 00:33:04.712325 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Oct 29 00:33:04.712333 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Oct 29 00:33:04.712341 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Oct 29 00:33:04.712356 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Oct 29 00:33:04.712365 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Oct 29 00:33:04.712374 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Oct 29 00:33:04.712384 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Oct 29 00:33:04.712391 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Oct 29 00:33:04.712399 kernel: No NUMA configuration found Oct 29 00:33:04.712407 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Oct 29 00:33:04.712422 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Oct 29 00:33:04.712431 kernel: Zone ranges: Oct 29 00:33:04.712439 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 29 00:33:04.712447 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Oct 29 00:33:04.712455 kernel: Normal empty Oct 29 00:33:04.712463 kernel: Device empty Oct 29 
00:33:04.712471 kernel: Movable zone start for each node Oct 29 00:33:04.712479 kernel: Early memory node ranges Oct 29 00:33:04.712494 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 29 00:33:04.712504 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Oct 29 00:33:04.712512 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Oct 29 00:33:04.712520 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Oct 29 00:33:04.712528 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Oct 29 00:33:04.712536 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Oct 29 00:33:04.712544 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Oct 29 00:33:04.712551 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Oct 29 00:33:04.712568 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Oct 29 00:33:04.712576 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 29 00:33:04.712603 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 29 00:33:04.712619 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Oct 29 00:33:04.712627 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 29 00:33:04.712635 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Oct 29 00:33:04.712644 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Oct 29 00:33:04.712652 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Oct 29 00:33:04.712660 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Oct 29 00:33:04.712675 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Oct 29 00:33:04.712684 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 29 00:33:04.712701 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 29 00:33:04.712709 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 29 00:33:04.712794 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 29 00:33:04.712803 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 29 00:33:04.712811 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 29 00:33:04.712819 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 29 00:33:04.712828 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 29 00:33:04.712836 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 29 00:33:04.712844 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 29 00:33:04.712989 kernel: TSC deadline timer available Oct 29 00:33:04.712997 kernel: CPU topo: Max. logical packages: 1 Oct 29 00:33:04.713006 kernel: CPU topo: Max. logical dies: 1 Oct 29 00:33:04.713014 kernel: CPU topo: Max. dies per package: 1 Oct 29 00:33:04.713022 kernel: CPU topo: Max. threads per core: 1 Oct 29 00:33:04.713030 kernel: CPU topo: Num. cores per package: 4 Oct 29 00:33:04.713039 kernel: CPU topo: Num. 
threads per package: 4 Oct 29 00:33:04.713054 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Oct 29 00:33:04.713062 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 29 00:33:04.713070 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 29 00:33:04.713079 kernel: kvm-guest: setup PV sched yield Oct 29 00:33:04.713087 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Oct 29 00:33:04.713095 kernel: Booting paravirtualized kernel on KVM Oct 29 00:33:04.713104 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 29 00:33:04.713112 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 29 00:33:04.713128 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Oct 29 00:33:04.713136 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Oct 29 00:33:04.713144 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 29 00:33:04.713152 kernel: kvm-guest: PV spinlocks enabled Oct 29 00:33:04.713161 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 29 00:33:04.713173 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=54ef1c344b2a47697b32f3227bd37f41d37acb1889c1eaea33b22ce408b7b3ae Oct 29 00:33:04.713188 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 29 00:33:04.713197 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 29 00:33:04.713219 kernel: Fallback order for Node 0: 0 Oct 29 00:33:04.713228 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Oct 29 00:33:04.713236 kernel: Policy zone: DMA32 Oct 29 00:33:04.713244 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 29 00:33:04.713253 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 29 00:33:04.713269 kernel: ftrace: allocating 40092 entries in 157 pages Oct 29 00:33:04.713278 kernel: ftrace: allocated 157 pages with 5 groups Oct 29 00:33:04.713286 kernel: Dynamic Preempt: voluntary Oct 29 00:33:04.713294 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 29 00:33:04.713304 kernel: rcu: RCU event tracing is enabled. Oct 29 00:33:04.713312 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 29 00:33:04.713321 kernel: Trampoline variant of Tasks RCU enabled. Oct 29 00:33:04.713329 kernel: Rude variant of Tasks RCU enabled. Oct 29 00:33:04.713344 kernel: Tracing variant of Tasks RCU enabled. Oct 29 00:33:04.713353 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 29 00:33:04.713361 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 29 00:33:04.713372 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 29 00:33:04.713381 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 29 00:33:04.713389 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 29 00:33:04.713398 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 29 00:33:04.713412 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Oct 29 00:33:04.713421 kernel: Console: colour dummy device 80x25 Oct 29 00:33:04.713429 kernel: printk: legacy console [ttyS0] enabled Oct 29 00:33:04.713437 kernel: ACPI: Core revision 20240827 Oct 29 00:33:04.713446 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 29 00:33:04.713454 kernel: APIC: Switch to symmetric I/O mode setup Oct 29 00:33:04.713462 kernel: x2apic enabled Oct 29 00:33:04.713477 kernel: APIC: Switched APIC routing to: physical x2apic Oct 29 00:33:04.713486 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 29 00:33:04.713494 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 29 00:33:04.713502 kernel: kvm-guest: setup PV IPIs Oct 29 00:33:04.713511 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 29 00:33:04.713519 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Oct 29 00:33:04.713528 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Oct 29 00:33:04.713589 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 29 00:33:04.713597 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 29 00:33:04.713605 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 29 00:33:04.713614 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 29 00:33:04.713622 kernel: Spectre V2 : Mitigation: Retpolines Oct 29 00:33:04.713630 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 29 00:33:04.713639 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 29 00:33:04.713654 kernel: active return thunk: retbleed_return_thunk Oct 29 00:33:04.713662 kernel: RETBleed: Mitigation: untrained return thunk Oct 29 00:33:04.713673 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 29 00:33:04.713682 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 29 00:33:04.713698 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 29 00:33:04.713707 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 29 00:33:04.713716 kernel: active return thunk: srso_return_thunk Oct 29 00:33:04.713732 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 29 00:33:04.713740 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 29 00:33:04.713749 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 29 00:33:04.713757 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 29 00:33:04.713766 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 29 00:33:04.713774 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 29 00:33:04.713789 kernel: Freeing SMP alternatives memory: 32K Oct 29 00:33:04.713798 kernel: pid_max: default: 32768 minimum: 301 Oct 29 00:33:04.713806 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 29 00:33:04.713814 kernel: landlock: Up and running. Oct 29 00:33:04.713822 kernel: SELinux: Initializing. 
Oct 29 00:33:04.713831 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 29 00:33:04.713839 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 29 00:33:04.713848 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 29 00:33:04.713882 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 29 00:33:04.713890 kernel: ... version: 0 Oct 29 00:33:04.713899 kernel: ... bit width: 48 Oct 29 00:33:04.713907 kernel: ... generic registers: 6 Oct 29 00:33:04.713916 kernel: ... value mask: 0000ffffffffffff Oct 29 00:33:04.713924 kernel: ... max period: 00007fffffffffff Oct 29 00:33:04.713932 kernel: ... fixed-purpose events: 0 Oct 29 00:33:04.713949 kernel: ... event mask: 000000000000003f Oct 29 00:33:04.713957 kernel: signal: max sigframe size: 1776 Oct 29 00:33:04.713966 kernel: rcu: Hierarchical SRCU implementation. Oct 29 00:33:04.713974 kernel: rcu: Max phase no-delay instances is 400. Oct 29 00:33:04.713985 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 29 00:33:04.713994 kernel: smp: Bringing up secondary CPUs ... Oct 29 00:33:04.714003 kernel: smpboot: x86: Booting SMP configuration: Oct 29 00:33:04.714018 kernel: .... node #0, CPUs: #1 #2 #3 Oct 29 00:33:04.714026 kernel: smp: Brought up 1 node, 4 CPUs Oct 29 00:33:04.714035 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 29 00:33:04.714044 kernel: Memory: 2445192K/2565800K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 114668K reserved, 0K cma-reserved) Oct 29 00:33:04.714052 kernel: devtmpfs: initialized Oct 29 00:33:04.714061 kernel: x86/mm: Memory block size: 128MB Oct 29 00:33:04.714069 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Oct 29 00:33:04.714084 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Oct 29 00:33:04.714093 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Oct 29 00:33:04.714102 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Oct 29 00:33:04.714110 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Oct 29 00:33:04.714119 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Oct 29 00:33:04.714127 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 29 00:33:04.714136 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 29 00:33:04.714151 kernel: pinctrl core: initialized pinctrl subsystem Oct 29 00:33:04.714164 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 29 00:33:04.714173 kernel: audit: initializing netlink subsys (disabled) Oct 29 00:33:04.714181 kernel: audit: type=2000 audit(1761697981.857:1): state=initialized audit_enabled=0 res=1 Oct 29 00:33:04.714189 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 29 00:33:04.714198 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 29 00:33:04.714233 kernel: cpuidle: using governor menu Oct 29 00:33:04.714249 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 29 00:33:04.714258 kernel: dca service started, version 1.12.1 Oct 29 00:33:04.714266 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Oct 29 00:33:04.714275 kernel: PCI: 
Using configuration type 1 for base access Oct 29 00:33:04.714283 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 29 00:33:04.714292 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 29 00:33:04.714300 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 29 00:33:04.714315 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 29 00:33:04.714324 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 29 00:33:04.714332 kernel: ACPI: Added _OSI(Module Device) Oct 29 00:33:04.714340 kernel: ACPI: Added _OSI(Processor Device) Oct 29 00:33:04.714348 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 29 00:33:04.714357 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 29 00:33:04.714365 kernel: ACPI: Interpreter enabled Oct 29 00:33:04.714380 kernel: ACPI: PM: (supports S0 S3 S5) Oct 29 00:33:04.714389 kernel: ACPI: Using IOAPIC for interrupt routing Oct 29 00:33:04.714397 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 29 00:33:04.714405 kernel: PCI: Using E820 reservations for host bridge windows Oct 29 00:33:04.714414 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 29 00:33:04.714422 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 29 00:33:04.714713 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 29 00:33:04.714922 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 29 00:33:04.715101 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 29 00:33:04.715112 kernel: PCI host bridge to bus 0000:00 Oct 29 00:33:04.715308 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 29 00:33:04.715481 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 29 00:33:04.715869 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 29 00:33:04.716030 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Oct 29 00:33:04.716192 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Oct 29 00:33:04.716372 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Oct 29 00:33:04.716535 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 29 00:33:04.716844 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Oct 29 00:33:04.717048 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Oct 29 00:33:04.717238 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Oct 29 00:33:04.717427 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Oct 29 00:33:04.717599 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Oct 29 00:33:04.717801 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 29 00:33:04.717987 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 29 00:33:04.718179 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Oct 29 00:33:04.718372 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Oct 29 00:33:04.718566 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Oct 29 00:33:04.718766 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 29 00:33:04.718944 kernel: pci 0000:00:03.0: BAR 0 [io 
0x6000-0x607f] Oct 29 00:33:04.719139 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Oct 29 00:33:04.719341 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Oct 29 00:33:04.719533 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 29 00:33:04.719718 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Oct 29 00:33:04.719895 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Oct 29 00:33:04.720070 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Oct 29 00:33:04.720278 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Oct 29 00:33:04.720462 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Oct 29 00:33:04.720637 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 29 00:33:04.720841 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Oct 29 00:33:04.721017 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Oct 29 00:33:04.721222 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Oct 29 00:33:04.721418 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Oct 29 00:33:04.721593 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Oct 29 00:33:04.721605 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 29 00:33:04.721614 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 29 00:33:04.721623 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 29 00:33:04.721631 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 29 00:33:04.721652 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 29 00:33:04.721661 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 29 00:33:04.721669 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 29 00:33:04.721678 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 29 00:33:04.721687 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 29 00:33:04.721704 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 29 00:33:04.721712 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 29 00:33:04.721728 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 29 00:33:04.721737 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 29 00:33:04.721745 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 29 00:33:04.721754 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 29 00:33:04.721763 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 29 00:33:04.721771 kernel: iommu: Default domain type: Translated Oct 29 00:33:04.721780 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 29 00:33:04.721795 kernel: efivars: Registered efivars operations Oct 29 00:33:04.721804 kernel: PCI: Using ACPI for IRQ routing Oct 29 00:33:04.721812 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 29 00:33:04.721821 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Oct 29 00:33:04.721829 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Oct 29 00:33:04.721837 kernel: e820: reserve RAM buffer [mem 0x9b2dc018-0x9bffffff] Oct 29 00:33:04.721846 kernel: e820: reserve RAM buffer [mem 0x9b319018-0x9bffffff] Oct 29 00:33:04.721862 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Oct 29 00:33:04.721870 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Oct 29 00:33:04.721879 
kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Oct 29 00:33:04.721887 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Oct 29 00:33:04.722064 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 29 00:33:04.722253 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 29 00:33:04.722441 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 29 00:33:04.722452 kernel: vgaarb: loaded Oct 29 00:33:04.722461 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 29 00:33:04.722470 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 29 00:33:04.722478 kernel: clocksource: Switched to clocksource kvm-clock Oct 29 00:33:04.722487 kernel: VFS: Disk quotas dquot_6.6.0 Oct 29 00:33:04.722496 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 29 00:33:04.722515 kernel: pnp: PnP ACPI init Oct 29 00:33:04.722769 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Oct 29 00:33:04.722792 kernel: pnp: PnP ACPI: found 6 devices Oct 29 00:33:04.722802 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 29 00:33:04.722811 kernel: NET: Registered PF_INET protocol family Oct 29 00:33:04.722820 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 29 00:33:04.722829 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 29 00:33:04.722845 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 29 00:33:04.722854 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 29 00:33:04.722863 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 29 00:33:04.722872 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 29 00:33:04.722881 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 29 00:33:04.722890 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 29 00:33:04.722899 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 29 00:33:04.722915 kernel: NET: Registered PF_XDP protocol family Oct 29 00:33:04.723092 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Oct 29 00:33:04.723285 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Oct 29 00:33:04.723449 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 29 00:33:04.723626 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 29 00:33:04.723800 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 29 00:33:04.723977 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Oct 29 00:33:04.724139 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Oct 29 00:33:04.724317 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Oct 29 00:33:04.724331 kernel: PCI: CLS 0 bytes, default 64 Oct 29 00:33:04.724340 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Oct 29 00:33:04.724360 kernel: Initialise system trusted keyrings Oct 29 00:33:04.724369 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 29 00:33:04.724378 kernel: Key type asymmetric registered Oct 29 00:33:04.724387 kernel: Asymmetric key parser 'x509' registered Oct 29 00:33:04.724396 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 250) Oct 29 00:33:04.724405 kernel: io scheduler mq-deadline registered Oct 29 00:33:04.724422 kernel: io scheduler kyber registered Oct 29 00:33:04.724432 kernel: io scheduler bfq registered Oct 29 00:33:04.724443 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 29 00:33:04.724452 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 29 00:33:04.724462 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 29 00:33:04.724471 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 29 00:33:04.724480 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 29 00:33:04.724489 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 29 00:33:04.724505 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 29 00:33:04.724514 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 29 00:33:04.724522 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 29 00:33:04.724715 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 29 00:33:04.724730 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 29 00:33:04.724902 kernel: rtc_cmos 00:04: registered as rtc0 Oct 29 00:33:04.725086 kernel: rtc_cmos 00:04: setting system clock to 2025-10-29T00:33:02 UTC (1761697982) Oct 29 00:33:04.725271 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Oct 29 00:33:04.725285 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 29 00:33:04.725293 kernel: efifb: probing for efifb Oct 29 00:33:04.725302 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Oct 29 00:33:04.725312 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Oct 29 00:33:04.725321 kernel: efifb: scrolling: redraw Oct 29 00:33:04.725341 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 29 00:33:04.725350 kernel: Console: switching to colour frame buffer device 160x50 Oct 29 00:33:04.725358 kernel: fb0: EFI VGA frame buffer device Oct 29 00:33:04.725374 kernel: pstore: Using crash dump compression: deflate Oct 29 00:33:04.725383 kernel: pstore: Registered efi_pstore as persistent store backend Oct 29 00:33:04.725392 kernel: NET: Registered PF_INET6 protocol family Oct 29 00:33:04.725401 kernel: Segment Routing with IPv6 Oct 29 00:33:04.725416 kernel: In-situ OAM (IOAM) with IPv6 Oct 29 00:33:04.725425 kernel: NET: Registered PF_PACKET protocol family Oct 29 00:33:04.725434 kernel: Key type dns_resolver registered Oct 29 00:33:04.725443 kernel: IPI shorthand broadcast: enabled Oct 29 00:33:04.725452 kernel: sched_clock: Marking stable (1617019625, 293745581)->(2002367414, -91602208) Oct 29 00:33:04.725461 kernel: registered taskstats version 1 Oct 29 00:33:04.725469 kernel: Loading compiled-in X.509 certificates Oct 29 00:33:04.725485 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 4eb70affb0e364bb9bcbea2a9416e57c31aed070' Oct 29 00:33:04.725494 kernel: Demotion targets for Node 0: null Oct 29 00:33:04.725503 kernel: Key type .fscrypt registered Oct 29 00:33:04.725512 kernel: Key type fscrypt-provisioning registered Oct 29 00:33:04.725521 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 29 00:33:04.725529 kernel: ima: Allocated hash algorithm: sha1 Oct 29 00:33:04.725538 kernel: ima: No architecture policies found Oct 29 00:33:04.725547 kernel: clk: Disabling unused clocks Oct 29 00:33:04.725564 kernel: Freeing unused kernel image (initmem) memory: 15964K Oct 29 00:33:04.725573 kernel: Write protecting the kernel read-only data: 40960k Oct 29 00:33:04.725582 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 29 00:33:04.725590 kernel: Run /init as init process Oct 29 00:33:04.725599 kernel: with arguments: Oct 29 00:33:04.725608 kernel: /init Oct 29 00:33:04.725617 kernel: with environment: Oct 29 00:33:04.725632 kernel: HOME=/ Oct 29 00:33:04.725641 kernel: TERM=linux Oct 29 00:33:04.725650 kernel: SCSI subsystem initialized Oct 29 00:33:04.725659 kernel: libata version 3.00 loaded. Oct 29 00:33:04.725852 kernel: ahci 0000:00:1f.2: version 3.0 Oct 29 00:33:04.725865 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 29 00:33:04.726051 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 29 00:33:04.726265 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 29 00:33:04.726478 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 29 00:33:04.726705 kernel: scsi host0: ahci Oct 29 00:33:04.726899 kernel: scsi host1: ahci Oct 29 00:33:04.727092 kernel: scsi host2: ahci Oct 29 00:33:04.727721 kernel: scsi host3: ahci Oct 29 00:33:04.727929 kernel: scsi host4: ahci Oct 29 00:33:04.728136 kernel: scsi host5: ahci Oct 29 00:33:04.728155 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Oct 29 00:33:04.728175 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Oct 29 00:33:04.728185 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Oct 29 00:33:04.728318 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Oct 29 00:33:04.728330 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Oct 29 00:33:04.728344 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Oct 29 00:33:04.728358 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 29 00:33:04.728375 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 29 00:33:04.728385 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 29 00:33:04.728396 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 29 00:33:04.728417 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 29 00:33:04.728428 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 29 00:33:04.728439 kernel: ata3.00: LPM support broken, forcing max_power Oct 29 00:33:04.728452 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 29 00:33:04.728463 kernel: ata3.00: applying bridge limits Oct 29 00:33:04.728480 kernel: ata3.00: LPM support broken, forcing max_power Oct 29 00:33:04.728496 kernel: ata3.00: configured for UDMA/100 Oct 29 00:33:04.729050 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 29 00:33:04.729541 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 29 00:33:04.729758 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 29 00:33:04.729773 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 29 00:33:04.729783 kernel: GPT:16515071 != 27000831 Oct 29 00:33:04.729791 kernel: GPT:Alternate GPT header not at the end of the disk. 
Oct 29 00:33:04.729813 kernel: GPT:16515071 != 27000831 Oct 29 00:33:04.729822 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 29 00:33:04.729831 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 00:33:04.729840 kernel: Invalid ELF header magic: != \u007fELF Oct 29 00:33:04.730040 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 29 00:33:04.730052 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 29 00:33:04.730260 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 29 00:33:04.730285 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 29 00:33:04.730294 kernel: device-mapper: uevent: version 1.0.3 Oct 29 00:33:04.730303 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 29 00:33:04.730312 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 29 00:33:04.730321 kernel: Invalid ELF header magic: != \u007fELF Oct 29 00:33:04.730330 kernel: Invalid ELF header magic: != \u007fELF Oct 29 00:33:04.730338 kernel: raid6: avx2x4 gen() 29321 MB/s Oct 29 00:33:04.730354 kernel: raid6: avx2x2 gen() 26711 MB/s Oct 29 00:33:04.730363 kernel: raid6: avx2x1 gen() 24754 MB/s Oct 29 00:33:04.730372 kernel: raid6: using algorithm avx2x4 gen() 29321 MB/s Oct 29 00:33:04.730381 kernel: raid6: .... xor() 7445 MB/s, rmw enabled Oct 29 00:33:04.730390 kernel: raid6: using avx2x2 recovery algorithm Oct 29 00:33:04.730399 kernel: Invalid ELF header magic: != \u007fELF Oct 29 00:33:04.730410 kernel: Invalid ELF header magic: != \u007fELF Oct 29 00:33:04.730420 kernel: Invalid ELF header magic: != \u007fELF Oct 29 00:33:04.730440 kernel: xor: automatically using best checksumming function avx Oct 29 00:33:04.730451 kernel: Invalid ELF header magic: != \u007fELF Oct 29 00:33:04.730462 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 29 00:33:04.730473 kernel: BTRFS: device fsid c0171910-1eb4-4fd7-b94c-9d6b11be282f devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (178) Oct 29 00:33:04.730485 kernel: BTRFS info (device dm-0): first mount of filesystem c0171910-1eb4-4fd7-b94c-9d6b11be282f Oct 29 00:33:04.730496 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 29 00:33:04.730507 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 29 00:33:04.730526 kernel: BTRFS info (device dm-0): enabling free space tree Oct 29 00:33:04.730538 kernel: Invalid ELF header magic: != \u007fELF Oct 29 00:33:04.730548 kernel: loop: module loaded Oct 29 00:33:04.730559 kernel: loop0: detected capacity change from 0 to 100120 Oct 29 00:33:04.730571 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 29 00:33:04.730583 systemd[1]: Successfully made /usr/ read-only. Oct 29 00:33:04.730598 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 29 00:33:04.730618 systemd[1]: Detected virtualization kvm. Oct 29 00:33:04.730627 systemd[1]: Detected architecture x86-64. Oct 29 00:33:04.730636 systemd[1]: Running in initrd. Oct 29 00:33:04.730646 systemd[1]: No hostname configured, using default hostname. Oct 29 00:33:04.730655 systemd[1]: Hostname set to . 
Oct 29 00:33:04.730677 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 29 00:33:04.730687 systemd[1]: Queued start job for default target initrd.target. Oct 29 00:33:04.730706 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 29 00:33:04.730716 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 00:33:04.730726 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 00:33:04.730736 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 29 00:33:04.730746 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 29 00:33:04.730768 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 29 00:33:04.730789 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 29 00:33:04.730799 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 00:33:04.730809 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 29 00:33:04.730819 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 29 00:33:04.730828 systemd[1]: Reached target paths.target - Path Units. Oct 29 00:33:04.730851 systemd[1]: Reached target slices.target - Slice Units. Oct 29 00:33:04.730860 systemd[1]: Reached target swap.target - Swaps. Oct 29 00:33:04.730869 systemd[1]: Reached target timers.target - Timer Units. Oct 29 00:33:04.730879 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 29 00:33:04.730889 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 29 00:33:04.730898 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 29 00:33:04.730908 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 29 00:33:04.730924 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 29 00:33:04.730940 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 29 00:33:04.730950 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 00:33:04.730959 systemd[1]: Reached target sockets.target - Socket Units. Oct 29 00:33:04.730968 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 29 00:33:04.730978 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 29 00:33:04.730994 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 29 00:33:04.731004 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 29 00:33:04.731014 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 29 00:33:04.731023 systemd[1]: Starting systemd-fsck-usr.service... Oct 29 00:33:04.731033 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 29 00:33:04.731042 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 29 00:33:04.731052 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 00:33:04.731069 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Oct 29 00:33:04.731079 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 29 00:33:04.731089 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 29 00:33:04.731104 systemd[1]: Finished systemd-fsck-usr.service. Oct 29 00:33:04.731114 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 29 00:33:04.731124 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 29 00:33:04.731166 systemd-journald[312]: Collecting audit messages is disabled. Oct 29 00:33:04.731197 systemd-journald[312]: Journal started Oct 29 00:33:04.731231 systemd-journald[312]: Runtime Journal (/run/log/journal/3804c3e82b9a46549efe766d0fd1990e) is 6M, max 48.1M, 42.1M free. Oct 29 00:33:04.734224 systemd[1]: Started systemd-journald.service - Journal Service. Oct 29 00:33:04.837770 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 29 00:33:04.846246 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 29 00:33:04.852224 kernel: Bridge firewalling registered Oct 29 00:33:04.852311 systemd-modules-load[314]: Inserted module 'br_netfilter' Oct 29 00:33:04.852312 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 00:33:04.854294 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 29 00:33:04.856484 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 29 00:33:04.872715 systemd-tmpfiles[331]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 29 00:33:04.873788 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 00:33:04.880867 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 29 00:33:04.884873 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 00:33:04.891747 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 29 00:33:04.897847 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 29 00:33:04.937734 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 29 00:33:04.944168 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 29 00:33:04.976189 systemd-resolved[343]: Positive Trust Anchors: Oct 29 00:33:04.976226 systemd-resolved[343]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 00:33:04.976232 systemd-resolved[343]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 29 00:33:04.976263 systemd-resolved[343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 29 00:33:05.002270 dracut-cmdline[355]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=54ef1c344b2a47697b32f3227bd37f41d37acb1889c1eaea33b22ce408b7b3ae Oct 29 00:33:05.013650 systemd-resolved[343]: Defaulting to hostname 'linux'. Oct 29 00:33:05.015401 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 29 00:33:05.019496 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 29 00:33:05.121248 kernel: Loading iSCSI transport class v2.0-870. Oct 29 00:33:05.139276 kernel: iscsi: registered transport (tcp) Oct 29 00:33:05.171509 kernel: iscsi: registered transport (qla4xxx) Oct 29 00:33:05.171689 kernel: QLogic iSCSI HBA Driver Oct 29 00:33:05.205590 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 29 00:33:05.258467 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 00:33:05.262259 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 29 00:33:05.326975 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 29 00:33:05.332532 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 29 00:33:05.334920 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 29 00:33:05.377839 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 29 00:33:05.380472 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 00:33:05.421124 systemd-udevd[589]: Using default interface naming scheme 'v257'. Oct 29 00:33:05.435830 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 00:33:05.437161 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 29 00:33:05.466896 dracut-pre-trigger[649]: rd.md=0: removing MD RAID activation Oct 29 00:33:05.473645 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 29 00:33:05.478826 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 29 00:33:05.513636 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 29 00:33:05.518926 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 29 00:33:05.543585 systemd-networkd[708]: lo: Link UP Oct 29 00:33:05.543595 systemd-networkd[708]: lo: Gained carrier Oct 29 00:33:05.544352 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Oct 29 00:33:05.547073 systemd[1]: Reached target network.target - Network. Oct 29 00:33:05.625912 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 00:33:05.634692 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 29 00:33:05.704971 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 29 00:33:05.740655 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 29 00:33:05.748244 kernel: cryptd: max_cpu_qlen set to 1000 Oct 29 00:33:05.757656 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 29 00:33:05.761814 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 29 00:33:05.770237 systemd-networkd[708]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 00:33:05.770248 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 00:33:05.771402 systemd-networkd[708]: eth0: Link UP Oct 29 00:33:05.771618 systemd-networkd[708]: eth0: Gained carrier Oct 29 00:33:05.784369 kernel: AES CTR mode by8 optimization enabled Oct 29 00:33:05.771628 systemd-networkd[708]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 00:33:05.775376 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 29 00:33:05.797300 systemd-networkd[708]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 29 00:33:05.802237 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 29 00:33:05.805128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 00:33:05.805367 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 00:33:05.811769 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 00:33:05.830769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 00:33:05.858675 disk-uuid[831]: Primary Header is updated. Oct 29 00:33:05.858675 disk-uuid[831]: Secondary Entries is updated. Oct 29 00:33:05.858675 disk-uuid[831]: Secondary Header is updated. Oct 29 00:33:05.860129 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 29 00:33:05.862286 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 29 00:33:05.864498 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 00:33:05.865425 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 29 00:33:05.883827 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 29 00:33:05.926496 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 00:33:05.943898 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 29 00:33:06.926753 disk-uuid[839]: Warning: The kernel is still using the old partition table. Oct 29 00:33:06.926753 disk-uuid[839]: The new table will be used at the next reboot or after you Oct 29 00:33:06.926753 disk-uuid[839]: run partprobe(8) or kpartx(8) Oct 29 00:33:06.926753 disk-uuid[839]: The operation has completed successfully. Oct 29 00:33:06.940168 systemd[1]: disk-uuid.service: Deactivated successfully. 
Oct 29 00:33:06.940400 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 29 00:33:06.945459 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 29 00:33:06.992271 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (860) Oct 29 00:33:06.995629 kernel: BTRFS info (device vda6): first mount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae Oct 29 00:33:06.995669 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 00:33:06.999660 kernel: BTRFS info (device vda6): turning on async discard Oct 29 00:33:06.999743 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 00:33:07.009232 kernel: BTRFS info (device vda6): last unmount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae Oct 29 00:33:07.010383 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 29 00:33:07.013515 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 29 00:33:07.299002 ignition[879]: Ignition 2.22.0 Oct 29 00:33:07.299023 ignition[879]: Stage: fetch-offline Oct 29 00:33:07.299198 ignition[879]: no configs at "/usr/lib/ignition/base.d" Oct 29 00:33:07.299240 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:33:07.299562 ignition[879]: parsed url from cmdline: "" Oct 29 00:33:07.299568 ignition[879]: no config URL provided Oct 29 00:33:07.299576 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Oct 29 00:33:07.299595 ignition[879]: no config at "/usr/lib/ignition/user.ign" Oct 29 00:33:07.299668 ignition[879]: op(1): [started] loading QEMU firmware config module Oct 29 00:33:07.299678 ignition[879]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 29 00:33:07.311896 ignition[879]: op(1): [finished] loading QEMU firmware config module Oct 29 00:33:07.403114 ignition[879]: parsing config with SHA512: 999fcb92b09a2863c10c2ff8893024cc11f407fa0260aedbbb2ef63efd2bcb84f03fd6d2c1f8e979eb2b7c32b14d7b8c72d1ab7347193ebe135c0bcfe6968585 Oct 29 00:33:07.412635 unknown[879]: fetched base config from "system" Oct 29 00:33:07.412653 unknown[879]: fetched user config from "qemu" Oct 29 00:33:07.413191 ignition[879]: fetch-offline: fetch-offline passed Oct 29 00:33:07.413335 ignition[879]: Ignition finished successfully Oct 29 00:33:07.419942 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 29 00:33:07.424667 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 29 00:33:07.429380 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 29 00:33:07.475540 ignition[891]: Ignition 2.22.0 Oct 29 00:33:07.475558 ignition[891]: Stage: kargs Oct 29 00:33:07.475795 ignition[891]: no configs at "/usr/lib/ignition/base.d" Oct 29 00:33:07.475808 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:33:07.477425 ignition[891]: kargs: kargs passed Oct 29 00:33:07.477488 ignition[891]: Ignition finished successfully Oct 29 00:33:07.483758 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 29 00:33:07.487012 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
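The fetch-offline stage above reads /usr/lib/ignition/user.ign and logs a SHA512 fingerprint of the parsed config. A minimal sketch of producing the same kind of digest over the config file; hashing the raw file bytes is an assumption, since Ignition may fingerprint the config only after merging its sources:

    # Compute a SHA512 digest of the Ignition user config, the same kind of
    # fingerprint the fetch-offline stage logs. Hashing raw bytes is an assumption.
    import hashlib

    with open("/usr/lib/ignition/user.ign", "rb") as f:
        digest = hashlib.sha512(f.read()).hexdigest()
    print(digest)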
Oct 29 00:33:07.544965 ignition[899]: Ignition 2.22.0 Oct 29 00:33:07.544983 ignition[899]: Stage: disks Oct 29 00:33:07.545160 ignition[899]: no configs at "/usr/lib/ignition/base.d" Oct 29 00:33:07.545171 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:33:07.546089 ignition[899]: disks: disks passed Oct 29 00:33:07.546157 ignition[899]: Ignition finished successfully Oct 29 00:33:07.555488 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 29 00:33:07.559934 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 29 00:33:07.560051 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 29 00:33:07.563577 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 29 00:33:07.569142 systemd[1]: Reached target sysinit.target - System Initialization. Oct 29 00:33:07.570818 systemd[1]: Reached target basic.target - Basic System. Oct 29 00:33:07.575657 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 29 00:33:07.634327 systemd-fsck[909]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 29 00:33:07.643350 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 29 00:33:07.649824 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 29 00:33:07.790234 kernel: EXT4-fs (vda9): mounted filesystem ef53721c-fae5-4ad9-8976-8181c84bc175 r/w with ordered data mode. Quota mode: none. Oct 29 00:33:07.790658 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 29 00:33:07.793396 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 29 00:33:07.797191 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 29 00:33:07.797552 systemd-networkd[708]: eth0: Gained IPv6LL Oct 29 00:33:07.800950 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 29 00:33:07.803397 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 29 00:33:07.803463 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 29 00:33:07.803513 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 29 00:33:07.826882 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (918) Oct 29 00:33:07.826935 kernel: BTRFS info (device vda6): first mount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae Oct 29 00:33:07.826948 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 00:33:07.815751 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 29 00:33:07.831322 kernel: BTRFS info (device vda6): turning on async discard Oct 29 00:33:07.831341 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 00:33:07.819268 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 29 00:33:07.832844 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 29 00:33:07.909022 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory Oct 29 00:33:07.957495 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory Oct 29 00:33:07.965007 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory Oct 29 00:33:07.970732 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory Oct 29 00:33:08.089262 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 29 00:33:08.093220 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 29 00:33:08.097526 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 29 00:33:08.126509 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 29 00:33:08.129349 kernel: BTRFS info (device vda6): last unmount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae Oct 29 00:33:08.144291 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 29 00:33:08.175975 ignition[1032]: INFO : Ignition 2.22.0 Oct 29 00:33:08.175975 ignition[1032]: INFO : Stage: mount Oct 29 00:33:08.178875 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 00:33:08.178875 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:33:08.194676 ignition[1032]: INFO : mount: mount passed Oct 29 00:33:08.196070 ignition[1032]: INFO : Ignition finished successfully Oct 29 00:33:08.200647 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 29 00:33:08.203797 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 29 00:33:08.233378 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 29 00:33:08.260474 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1044) Oct 29 00:33:08.260544 kernel: BTRFS info (device vda6): first mount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae Oct 29 00:33:08.260617 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 00:33:08.266156 kernel: BTRFS info (device vda6): turning on async discard Oct 29 00:33:08.266249 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 00:33:08.268032 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 29 00:33:08.306408 ignition[1061]: INFO : Ignition 2.22.0 Oct 29 00:33:08.306408 ignition[1061]: INFO : Stage: files Oct 29 00:33:08.309826 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 00:33:08.309826 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:33:08.450495 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping Oct 29 00:33:08.452716 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 29 00:33:08.452716 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 29 00:33:08.457558 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 29 00:33:08.457558 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 29 00:33:08.457558 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 29 00:33:08.456355 unknown[1061]: wrote ssh authorized keys file for user: core Oct 29 00:33:08.466118 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 29 00:33:08.466118 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 29 00:33:08.515892 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 29 00:33:08.900435 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 29 00:33:08.900435 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 29 00:33:08.907227 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 29 00:33:08.907227 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 29 00:33:08.907227 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 29 00:33:08.907227 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 00:33:08.907227 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 00:33:08.907227 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 00:33:08.907227 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 00:33:08.907227 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 00:33:08.907227 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 00:33:08.907227 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 29 00:33:08.938924 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 29 00:33:08.938924 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 29 00:33:08.938924 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Oct 29 00:33:09.368173 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 29 00:33:10.159630 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 29 00:33:10.159630 ignition[1061]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 29 00:33:10.167232 ignition[1061]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 00:33:10.167232 ignition[1061]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 00:33:10.167232 ignition[1061]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 29 00:33:10.167232 ignition[1061]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 29 00:33:10.167232 ignition[1061]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 29 00:33:10.184080 ignition[1061]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 29 00:33:10.184080 ignition[1061]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 29 00:33:10.184080 ignition[1061]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 29 00:33:10.215539 ignition[1061]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 00:33:10.270922 ignition[1061]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 00:33:10.273777 ignition[1061]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 29 00:33:10.273777 ignition[1061]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 29 00:33:10.273777 ignition[1061]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 29 00:33:10.273777 ignition[1061]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 29 00:33:10.273777 ignition[1061]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 29 00:33:10.273777 ignition[1061]: INFO : files: files passed Oct 29 00:33:10.273777 ignition[1061]: INFO : Ignition finished successfully Oct 29 00:33:10.328387 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 29 00:33:10.336784 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 29 00:33:10.340866 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 29 00:33:10.359764 systemd[1]: ignition-quench.service: Deactivated successfully. 
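The files stage above writes files such as /etc/flatcar/update.conf, installs prepare-helm.service, and flips presets for coreos-metadata.service. For orientation, a sketch of the general shape of an Ignition v3 config that would drive operations like these, expressed as a Python dict; the spec version, file contents, and unit states shown are illustrative assumptions, not the config this machine actually received:

    # Illustrative shape of an Ignition v3 config producing file writes and
    # unit presets like those logged above. Values are assumptions.
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [
                {
                    "path": "/etc/flatcar/update.conf",
                    "mode": 420,  # octal 0644
                    "contents": {"source": "data:,REBOOT_STRATEGY=off%0A"},
                }
            ]
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True},
                {"name": "coreos-metadata.service", "enabled": False},
            ]
        },
    }
    print(json.dumps(config, indent=2))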
Oct 29 00:33:10.359986 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 29 00:33:10.365942 initrd-setup-root-after-ignition[1090]: grep: /sysroot/oem/oem-release: No such file or directory Oct 29 00:33:10.368664 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 29 00:33:10.368664 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 29 00:33:10.374677 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 29 00:33:10.378012 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 29 00:33:10.380475 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 29 00:33:10.386238 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 29 00:33:10.479463 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 29 00:33:10.479660 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 29 00:33:10.481905 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 29 00:33:10.488159 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 29 00:33:10.490933 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 29 00:33:10.492483 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 29 00:33:10.532580 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 29 00:33:10.534815 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 29 00:33:10.563825 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 29 00:33:10.564061 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 29 00:33:10.568110 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 00:33:10.571948 systemd[1]: Stopped target timers.target - Timer Units. Oct 29 00:33:10.575366 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 29 00:33:10.575509 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 29 00:33:10.580543 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 29 00:33:10.584126 systemd[1]: Stopped target basic.target - Basic System. Oct 29 00:33:10.587281 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 29 00:33:10.590578 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 29 00:33:10.592422 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 29 00:33:10.592974 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 29 00:33:10.601269 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 29 00:33:10.601679 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 29 00:33:10.608250 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 29 00:33:10.608903 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 29 00:33:10.609469 systemd[1]: Stopped target swap.target - Swaps. Oct 29 00:33:10.610021 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Oct 29 00:33:10.610156 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 29 00:33:10.620703 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 29 00:33:10.627162 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 00:33:10.631477 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 29 00:33:10.633237 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 00:33:10.633430 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 29 00:33:10.633683 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 29 00:33:10.642451 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 29 00:33:10.642641 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 29 00:33:10.644463 systemd[1]: Stopped target paths.target - Path Units. Oct 29 00:33:10.648103 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 29 00:33:10.654348 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 00:33:10.654634 systemd[1]: Stopped target slices.target - Slice Units. Oct 29 00:33:10.660755 systemd[1]: Stopped target sockets.target - Socket Units. Oct 29 00:33:10.662258 systemd[1]: iscsid.socket: Deactivated successfully. Oct 29 00:33:10.662392 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 29 00:33:10.666926 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 29 00:33:10.667056 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 29 00:33:10.670081 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 29 00:33:10.670256 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 29 00:33:10.673704 systemd[1]: ignition-files.service: Deactivated successfully. Oct 29 00:33:10.673843 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 29 00:33:10.680509 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 29 00:33:10.685792 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 29 00:33:10.687310 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 29 00:33:10.687525 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 00:33:10.692338 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 29 00:33:10.692471 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 00:33:10.695847 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 29 00:33:10.695982 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 29 00:33:10.713314 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 29 00:33:10.713494 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 29 00:33:10.734486 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Oct 29 00:33:10.742590 ignition[1116]: INFO : Ignition 2.22.0 Oct 29 00:33:10.742590 ignition[1116]: INFO : Stage: umount Oct 29 00:33:10.745415 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 00:33:10.745415 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:33:10.745415 ignition[1116]: INFO : umount: umount passed Oct 29 00:33:10.745415 ignition[1116]: INFO : Ignition finished successfully Oct 29 00:33:10.748292 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 29 00:33:10.748458 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 29 00:33:10.750195 systemd[1]: Stopped target network.target - Network. Oct 29 00:33:10.755295 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 29 00:33:10.755373 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 29 00:33:10.758563 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 29 00:33:10.758625 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 29 00:33:10.761795 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 29 00:33:10.761860 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 29 00:33:10.764924 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 29 00:33:10.764976 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 29 00:33:10.770111 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 29 00:33:10.773333 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 29 00:33:10.785192 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 29 00:33:10.785613 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 29 00:33:10.792957 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 29 00:33:10.793268 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 29 00:33:10.802141 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 29 00:33:10.806863 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 29 00:33:10.806939 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 29 00:33:10.811036 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 29 00:33:10.814473 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 29 00:33:10.814601 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 29 00:33:10.818515 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 29 00:33:10.818599 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 29 00:33:10.822286 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 29 00:33:10.822348 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 29 00:33:10.825028 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 00:33:10.832953 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 29 00:33:10.842458 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 29 00:33:10.845962 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 29 00:33:10.846101 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 29 00:33:10.853411 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Oct 29 00:33:10.853685 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 00:33:10.857891 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 29 00:33:10.858011 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 29 00:33:10.861694 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 29 00:33:10.861747 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 00:33:10.865714 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 29 00:33:10.865800 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 29 00:33:10.871381 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 29 00:33:10.871450 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 29 00:33:10.872809 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 29 00:33:10.872890 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 29 00:33:10.883886 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 29 00:33:10.885374 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 29 00:33:10.885466 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 00:33:10.892320 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 29 00:33:10.892466 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 00:33:10.898322 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 29 00:33:10.898376 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 29 00:33:10.903308 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 29 00:33:10.903393 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 00:33:10.905168 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 00:33:10.905251 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 00:33:10.929183 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 29 00:33:10.929358 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 29 00:33:10.932950 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 29 00:33:10.933074 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 29 00:33:10.936798 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 29 00:33:10.943481 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 29 00:33:10.973381 systemd[1]: Switching root. Oct 29 00:33:11.019611 systemd-journald[312]: Journal stopped Oct 29 00:33:13.229837 systemd-journald[312]: Received SIGTERM from PID 1 (systemd). 
Oct 29 00:33:13.229913 kernel: SELinux: policy capability network_peer_controls=1 Oct 29 00:33:13.229930 kernel: SELinux: policy capability open_perms=1 Oct 29 00:33:13.229951 kernel: SELinux: policy capability extended_socket_class=1 Oct 29 00:33:13.229982 kernel: SELinux: policy capability always_check_network=0 Oct 29 00:33:13.229994 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 29 00:33:13.230012 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 29 00:33:13.230033 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 29 00:33:13.230046 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 29 00:33:13.230060 kernel: SELinux: policy capability userspace_initial_context=0 Oct 29 00:33:13.230072 kernel: audit: type=1403 audit(1761697992.164:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 29 00:33:13.230094 systemd[1]: Successfully loaded SELinux policy in 77.330ms. Oct 29 00:33:13.230122 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.202ms. Oct 29 00:33:13.230137 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 29 00:33:13.230150 systemd[1]: Detected virtualization kvm. Oct 29 00:33:13.230163 systemd[1]: Detected architecture x86-64. Oct 29 00:33:13.230176 systemd[1]: Detected first boot. Oct 29 00:33:13.230195 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 29 00:33:13.230222 zram_generator::config[1161]: No configuration found. Oct 29 00:33:13.230250 kernel: Guest personality initialized and is inactive Oct 29 00:33:13.230263 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 29 00:33:13.230275 kernel: Initialized host personality Oct 29 00:33:13.230287 kernel: NET: Registered PF_VSOCK protocol family Oct 29 00:33:13.230299 systemd[1]: Populated /etc with preset unit settings. Oct 29 00:33:13.230311 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 29 00:33:13.230332 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 29 00:33:13.230345 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 29 00:33:13.230359 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 29 00:33:13.230374 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 29 00:33:13.230387 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 29 00:33:13.230399 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 29 00:33:13.230412 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 29 00:33:13.230433 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 29 00:33:13.230447 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 29 00:33:13.230460 systemd[1]: Created slice user.slice - User and Session Slice. Oct 29 00:33:13.230473 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 00:33:13.230496 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Oct 29 00:33:13.230513 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 29 00:33:13.230539 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 29 00:33:13.230564 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 29 00:33:13.230582 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 29 00:33:13.230598 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 29 00:33:13.230616 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 00:33:13.230629 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 29 00:33:13.230642 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 29 00:33:13.230663 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 29 00:33:13.230676 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 29 00:33:13.230688 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 29 00:33:13.230701 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 00:33:13.230714 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 29 00:33:13.230727 systemd[1]: Reached target slices.target - Slice Units. Oct 29 00:33:13.230740 systemd[1]: Reached target swap.target - Swaps. Oct 29 00:33:13.230760 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 29 00:33:13.230773 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 29 00:33:13.230786 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 29 00:33:13.230799 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 29 00:33:13.230812 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 29 00:33:13.230825 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 00:33:13.230838 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 29 00:33:13.230856 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 29 00:33:13.230877 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 29 00:33:13.230889 systemd[1]: Mounting media.mount - External Media Directory... Oct 29 00:33:13.230905 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:33:13.230918 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 29 00:33:13.230930 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 29 00:33:13.230943 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 29 00:33:13.230964 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 29 00:33:13.230977 systemd[1]: Reached target machines.target - Containers. Oct 29 00:33:13.230990 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 29 00:33:13.231003 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Oct 29 00:33:13.231016 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 29 00:33:13.231029 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 29 00:33:13.231042 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 00:33:13.231064 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 29 00:33:13.231077 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 00:33:13.231090 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 29 00:33:13.231103 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 00:33:13.231116 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 29 00:33:13.231130 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 29 00:33:13.231143 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 29 00:33:13.231174 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 29 00:33:13.231187 systemd[1]: Stopped systemd-fsck-usr.service. Oct 29 00:33:13.231214 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 00:33:13.231228 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 29 00:33:13.231241 kernel: ACPI: bus type drm_connector registered Oct 29 00:33:13.231253 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 29 00:33:13.231268 kernel: fuse: init (API version 7.41) Oct 29 00:33:13.231289 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 29 00:33:13.231302 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 29 00:33:13.231323 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 29 00:33:13.231344 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 29 00:33:13.231376 systemd-journald[1243]: Collecting audit messages is disabled. Oct 29 00:33:13.231406 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:33:13.231419 systemd-journald[1243]: Journal started Oct 29 00:33:13.231443 systemd-journald[1243]: Runtime Journal (/run/log/journal/3804c3e82b9a46549efe766d0fd1990e) is 6M, max 48.1M, 42.1M free. Oct 29 00:33:12.900304 systemd[1]: Queued start job for default target multi-user.target. Oct 29 00:33:13.236376 systemd[1]: Started systemd-journald.service - Journal Service. Oct 29 00:33:12.928952 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 29 00:33:12.929610 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 29 00:33:13.238904 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 29 00:33:13.241437 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 29 00:33:13.243738 systemd[1]: Mounted media.mount - External Media Directory. Oct 29 00:33:13.245611 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Oct 29 00:33:13.248357 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 29 00:33:13.250370 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 29 00:33:13.252533 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 29 00:33:13.255626 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 00:33:13.258453 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 29 00:33:13.258689 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 29 00:33:13.261334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 00:33:13.261579 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 00:33:13.264185 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 00:33:13.264428 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 29 00:33:13.266906 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 00:33:13.267126 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 00:33:13.269920 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 29 00:33:13.270140 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 29 00:33:13.272700 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 00:33:13.272917 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 00:33:13.275604 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 29 00:33:13.278354 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 00:33:13.282090 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 29 00:33:13.285031 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 29 00:33:13.300886 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 29 00:33:13.303914 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 29 00:33:13.307930 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 29 00:33:13.312132 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 29 00:33:13.316395 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 29 00:33:13.316503 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 29 00:33:13.319930 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 29 00:33:13.322914 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 00:33:13.328373 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 29 00:33:13.331910 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 29 00:33:13.334263 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 00:33:13.335524 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 29 00:33:13.337960 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Oct 29 00:33:13.340335 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 29 00:33:13.345341 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 29 00:33:13.348390 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 29 00:33:13.353374 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 29 00:33:13.357432 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 29 00:33:13.401356 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 00:33:13.404872 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 29 00:33:13.405558 systemd-journald[1243]: Time spent on flushing to /var/log/journal/3804c3e82b9a46549efe766d0fd1990e is 30.388ms for 1063 entries. Oct 29 00:33:13.405558 systemd-journald[1243]: System Journal (/var/log/journal/3804c3e82b9a46549efe766d0fd1990e) is 8M, max 163.5M, 155.5M free. Oct 29 00:33:13.448836 systemd-journald[1243]: Received client request to flush runtime journal. Oct 29 00:33:13.448926 kernel: loop1: detected capacity change from 0 to 110976 Oct 29 00:33:13.409599 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 29 00:33:13.422384 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 29 00:33:13.425054 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 29 00:33:13.452310 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Oct 29 00:33:13.452324 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Oct 29 00:33:13.453634 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 29 00:33:13.460031 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 29 00:33:13.464312 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 29 00:33:13.483547 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 29 00:33:13.491239 kernel: loop2: detected capacity change from 0 to 229808 Oct 29 00:33:13.516245 kernel: loop3: detected capacity change from 0 to 128048 Oct 29 00:33:13.517869 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 29 00:33:13.522947 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 29 00:33:13.528377 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 29 00:33:13.543545 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 29 00:33:13.552318 kernel: loop4: detected capacity change from 0 to 110976 Oct 29 00:33:13.557489 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Oct 29 00:33:13.557836 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Oct 29 00:33:13.564258 kernel: loop5: detected capacity change from 0 to 229808 Oct 29 00:33:13.564434 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 00:33:13.574250 kernel: loop6: detected capacity change from 0 to 128048 Oct 29 00:33:13.584054 (sd-merge)[1308]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 29 00:33:13.589971 (sd-merge)[1308]: Merged extensions into '/usr'. 
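sd-merge above reports merging 'containerd-flatcar.raw', 'docker-flatcar.raw', and 'kubernetes.raw' into /usr. A rough sketch of the discovery step, scanning the usual sysext directories for *.raw images; the directory list and the .raw-only filter are simplifying assumptions about how systemd-sysext locates candidates:

    # Rough approximation of sysext image discovery: scan standard extension
    # directories for *.raw images. Simplified; systemd-sysext also accepts
    # plain directory trees and additional search paths.
    from pathlib import Path

    search_dirs = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]
    images = sorted(
        p.name
        for d in search_dirs
        if Path(d).is_dir()
        for p in Path(d).glob("*.raw")
    )
    print(images)  # e.g. ['containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw']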
Oct 29 00:33:13.595147 systemd[1]: Reload requested from client PID 1280 ('systemd-sysext') (unit systemd-sysext.service)... Oct 29 00:33:13.595168 systemd[1]: Reloading... Oct 29 00:33:13.652247 zram_generator::config[1342]: No configuration found. Oct 29 00:33:14.036192 systemd-resolved[1303]: Positive Trust Anchors: Oct 29 00:33:14.036235 systemd-resolved[1303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 00:33:14.036240 systemd-resolved[1303]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 29 00:33:14.036274 systemd-resolved[1303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 29 00:33:14.041196 systemd-resolved[1303]: Defaulting to hostname 'linux'. Oct 29 00:33:14.183027 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 29 00:33:14.183691 systemd[1]: Reloading finished in 587 ms. Oct 29 00:33:14.213566 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 29 00:33:14.215943 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 29 00:33:14.218960 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 29 00:33:14.223758 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 29 00:33:14.245517 systemd[1]: Starting ensure-sysext.service... Oct 29 00:33:14.249566 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 29 00:33:14.331021 systemd[1]: Reload requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)... Oct 29 00:33:14.331050 systemd[1]: Reloading... Oct 29 00:33:14.411360 zram_generator::config[1412]: No configuration found. Oct 29 00:33:14.418156 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 29 00:33:14.418196 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 29 00:33:14.418805 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 29 00:33:14.419558 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 29 00:33:14.420578 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 29 00:33:14.420869 systemd-tmpfiles[1379]: ACLs are not supported, ignoring. Oct 29 00:33:14.420959 systemd-tmpfiles[1379]: ACLs are not supported, ignoring. Oct 29 00:33:14.428360 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot. Oct 29 00:33:14.428379 systemd-tmpfiles[1379]: Skipping /boot Oct 29 00:33:14.442551 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot. Oct 29 00:33:14.442571 systemd-tmpfiles[1379]: Skipping /boot Oct 29 00:33:14.618292 systemd[1]: Reloading finished in 286 ms. 
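systemd-resolved above lists its positive trust anchors as DS records for the root zone. A minimal sketch of splitting a line of that form (". IN DS <key tag> <algorithm> <digest type> <digest>") into named fields; this is purely illustrative and unrelated to resolved's built-in anchor handling:

    # Parse a DNSSEC DS trust-anchor line of the form logged above into fields.
    def parse_ds(record: str) -> dict:
        owner, _cls, _type, key_tag, algorithm, digest_type, digest = record.split()
        return {
            "owner": owner,
            "key_tag": int(key_tag),
            "algorithm": int(algorithm),
            "digest_type": int(digest_type),
            "digest": digest,
        }

    anchor = parse_ds(". IN DS 20326 8 2 "
                      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    print(anchor["key_tag"], anchor["digest_type"])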
Oct 29 00:33:14.644536 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 29 00:33:14.675670 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 00:33:14.688827 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 00:33:14.692882 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 29 00:33:14.705477 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 29 00:33:14.710518 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 29 00:33:14.715022 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 00:33:14.720689 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 29 00:33:14.726293 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:33:14.726604 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 00:33:14.728920 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 00:33:14.735925 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 00:33:14.743509 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 00:33:14.746431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 00:33:14.746577 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 00:33:14.746674 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:33:14.752434 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 29 00:33:14.757838 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 00:33:14.763591 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 00:33:14.766767 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 00:33:14.767007 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 00:33:14.769234 systemd-udevd[1456]: Using default interface naming scheme 'v257'. Oct 29 00:33:14.770071 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 00:33:14.770582 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 00:33:14.783271 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:33:14.783659 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 00:33:14.783859 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 00:33:14.783996 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Oct 29 00:33:14.784106 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 00:33:14.784246 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 29 00:33:14.784346 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:33:14.785329 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 29 00:33:14.801068 systemd[1]: Finished ensure-sysext.service. Oct 29 00:33:14.806805 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:33:14.806989 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 00:33:14.809123 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 00:33:14.814730 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 29 00:33:14.822546 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 00:33:14.827371 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 00:33:14.829384 augenrules[1491]: No rules Oct 29 00:33:14.829666 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 00:33:14.829717 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 00:33:14.832681 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 29 00:33:14.833047 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 00:33:14.833579 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 00:33:14.838011 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 00:33:14.892587 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 00:33:14.895140 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 29 00:33:14.898065 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 00:33:14.898396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 00:33:14.901653 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 00:33:14.901960 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 29 00:33:14.904928 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 00:33:14.905176 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 00:33:14.908470 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 00:33:14.908710 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 00:33:14.932925 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 29 00:33:14.934867 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Oct 29 00:33:14.934925 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 29 00:33:14.934963 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 00:33:15.033853 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 29 00:33:15.048662 systemd[1]: Reached target time-set.target - System Time Set. Oct 29 00:33:15.068317 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 29 00:33:15.080518 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 29 00:33:15.083424 systemd-networkd[1518]: lo: Link UP Oct 29 00:33:15.083940 systemd-networkd[1518]: lo: Gained carrier Oct 29 00:33:15.084773 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 29 00:33:15.086669 systemd-networkd[1518]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 00:33:15.087061 systemd-networkd[1518]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 00:33:15.087084 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 29 00:33:15.088655 systemd-networkd[1518]: eth0: Link UP Oct 29 00:33:15.093714 kernel: mousedev: PS/2 mouse device common for all mice Oct 29 00:33:15.089196 systemd[1]: Reached target network.target - Network. Oct 29 00:33:15.089352 systemd-networkd[1518]: eth0: Gained carrier Oct 29 00:33:15.089367 systemd-networkd[1518]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 00:33:15.094416 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 29 00:33:15.165950 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 29 00:33:15.178260 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 29 00:33:15.182313 systemd-networkd[1518]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 29 00:33:15.189524 kernel: ACPI: button: Power Button [PWRF] Oct 29 00:33:15.193924 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection. Oct 29 00:33:16.155431 systemd-timesyncd[1497]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 29 00:33:16.155517 systemd-timesyncd[1497]: Initial clock synchronization to Wed 2025-10-29 00:33:16.155115 UTC. Oct 29 00:33:16.157932 systemd-resolved[1303]: Clock change detected. Flushing caches. Oct 29 00:33:16.163290 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 29 00:33:16.204659 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 29 00:33:16.205051 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 29 00:33:16.205281 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 29 00:33:16.198083 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 29 00:33:16.433601 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 29 00:33:16.447832 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 00:33:16.448400 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 00:33:16.453647 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 00:33:16.485394 kernel: kvm_amd: TSC scaling supported Oct 29 00:33:16.485475 kernel: kvm_amd: Nested Virtualization enabled Oct 29 00:33:16.485514 kernel: kvm_amd: Nested Paging enabled Oct 29 00:33:16.485528 kernel: kvm_amd: LBR virtualization supported Oct 29 00:33:16.493315 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 29 00:33:16.493380 kernel: kvm_amd: Virtual GIF supported Oct 29 00:33:16.574672 kernel: EDAC MC: Ver: 3.0.0 Oct 29 00:33:16.651371 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 00:33:16.674152 ldconfig[1450]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 29 00:33:16.684827 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 29 00:33:16.688749 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 29 00:33:16.737601 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 29 00:33:16.740235 systemd[1]: Reached target sysinit.target - System Initialization. Oct 29 00:33:16.742519 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 29 00:33:16.744919 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 29 00:33:16.747256 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 29 00:33:16.749739 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 29 00:33:16.751937 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 29 00:33:16.754353 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 29 00:33:16.756997 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 29 00:33:16.757060 systemd[1]: Reached target paths.target - Path Units. Oct 29 00:33:16.758887 systemd[1]: Reached target timers.target - Timer Units. Oct 29 00:33:16.762495 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 29 00:33:16.766790 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 29 00:33:16.773104 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 29 00:33:16.775654 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 29 00:33:16.777703 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 29 00:33:16.783697 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 29 00:33:16.785897 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 29 00:33:16.788611 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 29 00:33:16.791589 systemd[1]: Reached target sockets.target - Socket Units. Oct 29 00:33:16.793195 systemd[1]: Reached target basic.target - Basic System. Oct 29 00:33:16.794962 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Oct 29 00:33:16.794997 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 29 00:33:16.796552 systemd[1]: Starting containerd.service - containerd container runtime... Oct 29 00:33:16.801888 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 29 00:33:16.805120 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 29 00:33:16.831350 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 29 00:33:16.835194 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 29 00:33:16.837191 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 29 00:33:16.838842 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 29 00:33:16.843334 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 29 00:33:16.848752 jq[1572]: false Oct 29 00:33:16.844909 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 29 00:33:16.856777 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 29 00:33:16.861235 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 29 00:33:16.866496 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Refreshing passwd entry cache Oct 29 00:33:16.867186 oslogin_cache_refresh[1574]: Refreshing passwd entry cache Oct 29 00:33:16.872564 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 29 00:33:16.874702 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 29 00:33:16.874775 extend-filesystems[1573]: Found /dev/vda6 Oct 29 00:33:16.875407 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 29 00:33:16.877151 systemd[1]: Starting update-engine.service - Update Engine... Oct 29 00:33:16.882543 extend-filesystems[1573]: Found /dev/vda9 Oct 29 00:33:16.884458 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Failure getting users, quitting Oct 29 00:33:16.884458 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 29 00:33:16.884418 oslogin_cache_refresh[1574]: Failure getting users, quitting Oct 29 00:33:16.885769 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Refreshing group entry cache Oct 29 00:33:16.884442 oslogin_cache_refresh[1574]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 29 00:33:16.884514 oslogin_cache_refresh[1574]: Refreshing group entry cache Oct 29 00:33:16.887821 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 29 00:33:16.888931 extend-filesystems[1573]: Checking size of /dev/vda9 Oct 29 00:33:16.898955 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 29 00:33:16.902681 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 29 00:33:16.903253 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 29 00:33:16.903721 systemd[1]: motdgen.service: Deactivated successfully. 
Oct 29 00:33:16.904109 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 29 00:33:16.905533 extend-filesystems[1573]: Resized partition /dev/vda9 Oct 29 00:33:16.911654 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Failure getting groups, quitting Oct 29 00:33:16.911654 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 29 00:33:16.908904 oslogin_cache_refresh[1574]: Failure getting groups, quitting Oct 29 00:33:16.908921 oslogin_cache_refresh[1574]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 29 00:33:16.914065 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 29 00:33:16.914410 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 29 00:33:16.925572 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 29 00:33:16.926018 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 29 00:33:16.936790 update_engine[1589]: I20251029 00:33:16.936537 1589 main.cc:92] Flatcar Update Engine starting Oct 29 00:33:16.940498 extend-filesystems[1612]: resize2fs 1.47.3 (8-Jul-2025) Oct 29 00:33:16.943406 tar[1600]: linux-amd64/LICENSE Oct 29 00:33:16.948443 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 29 00:33:16.948501 tar[1600]: linux-amd64/helm Oct 29 00:33:16.948553 jq[1591]: true Oct 29 00:33:17.019524 jq[1617]: true Oct 29 00:33:17.025678 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 29 00:33:17.032348 (ntainerd)[1619]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 29 00:33:17.033077 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 29 00:33:17.150295 update_engine[1589]: I20251029 00:33:17.082895 1589 update_check_scheduler.cc:74] Next update check in 7m37s Oct 29 00:33:17.061374 dbus-daemon[1570]: [system] SELinux support is enabled Oct 29 00:33:17.062932 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 29 00:33:17.068570 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 29 00:33:17.068611 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 29 00:33:17.074553 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 29 00:33:17.074573 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 29 00:33:17.084227 systemd[1]: Started update-engine.service - Update Engine. Oct 29 00:33:17.088785 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 29 00:33:17.150756 systemd-logind[1587]: Watching system buttons on /dev/input/event2 (Power Button) Oct 29 00:33:17.150779 systemd-logind[1587]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 29 00:33:17.154530 extend-filesystems[1612]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 29 00:33:17.154530 extend-filesystems[1612]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 29 00:33:17.154530 extend-filesystems[1612]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. 
Oct 29 00:33:17.151101 systemd-logind[1587]: New seat seat0. Oct 29 00:33:17.251915 extend-filesystems[1573]: Resized filesystem in /dev/vda9 Oct 29 00:33:17.153363 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 29 00:33:17.156742 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 29 00:33:17.166195 systemd[1]: Started systemd-logind.service - User Login Management. Oct 29 00:33:17.263627 bash[1639]: Updated "/home/core/.ssh/authorized_keys" Oct 29 00:33:17.267343 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 29 00:33:17.270152 systemd-networkd[1518]: eth0: Gained IPv6LL Oct 29 00:33:17.270623 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 29 00:33:17.276959 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 29 00:33:17.279718 systemd[1]: Reached target network-online.target - Network is Online. Oct 29 00:33:17.284771 locksmithd[1632]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 29 00:33:17.284850 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 29 00:33:17.291099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:33:17.304409 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 29 00:33:17.407044 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 29 00:33:17.407358 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 29 00:33:17.410288 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 29 00:33:17.509122 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 29 00:33:17.628475 sshd_keygen[1597]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 29 00:33:17.700129 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 29 00:33:17.705511 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 29 00:33:17.708181 containerd[1619]: time="2025-10-29T00:33:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 29 00:33:17.708175 systemd[1]: Started sshd@0-10.0.0.10:22-10.0.0.1:43976.service - OpenSSH per-connection server daemon (10.0.0.1:43976). 
Oct 29 00:33:17.708931 containerd[1619]: time="2025-10-29T00:33:17.708894723Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 29 00:33:17.722850 containerd[1619]: time="2025-10-29T00:33:17.722644028Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.181µs" Oct 29 00:33:17.722850 containerd[1619]: time="2025-10-29T00:33:17.722688912Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 29 00:33:17.722850 containerd[1619]: time="2025-10-29T00:33:17.722714380Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 29 00:33:17.723135 containerd[1619]: time="2025-10-29T00:33:17.723107838Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 29 00:33:17.723135 containerd[1619]: time="2025-10-29T00:33:17.723134879Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 29 00:33:17.723204 containerd[1619]: time="2025-10-29T00:33:17.723176627Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 00:33:17.723281 containerd[1619]: time="2025-10-29T00:33:17.723256607Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 00:33:17.723281 containerd[1619]: time="2025-10-29T00:33:17.723275162Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 00:33:17.723582 containerd[1619]: time="2025-10-29T00:33:17.723554556Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 00:33:17.723582 containerd[1619]: time="2025-10-29T00:33:17.723576086Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 00:33:17.723649 containerd[1619]: time="2025-10-29T00:33:17.723587708Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 00:33:17.723649 containerd[1619]: time="2025-10-29T00:33:17.723596825Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 29 00:33:17.723776 containerd[1619]: time="2025-10-29T00:33:17.723751785Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 29 00:33:17.724080 containerd[1619]: time="2025-10-29T00:33:17.724053141Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 29 00:33:17.724114 containerd[1619]: time="2025-10-29T00:33:17.724098466Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 29 00:33:17.724114 containerd[1619]: time="2025-10-29T00:33:17.724108855Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 29 00:33:17.724178 containerd[1619]: 
time="2025-10-29T00:33:17.724157607Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 29 00:33:17.724553 containerd[1619]: time="2025-10-29T00:33:17.724525767Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 29 00:33:17.724801 containerd[1619]: time="2025-10-29T00:33:17.724779092Z" level=info msg="metadata content store policy set" policy=shared Oct 29 00:33:17.732163 containerd[1619]: time="2025-10-29T00:33:17.732118716Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 29 00:33:17.732312 containerd[1619]: time="2025-10-29T00:33:17.732280420Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 29 00:33:17.732351 containerd[1619]: time="2025-10-29T00:33:17.732316277Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 29 00:33:17.732351 containerd[1619]: time="2025-10-29T00:33:17.732332577Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 29 00:33:17.732351 containerd[1619]: time="2025-10-29T00:33:17.732345662Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 29 00:33:17.732411 containerd[1619]: time="2025-10-29T00:33:17.732358316Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 29 00:33:17.732411 containerd[1619]: time="2025-10-29T00:33:17.732371701Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 29 00:33:17.732411 containerd[1619]: time="2025-10-29T00:33:17.732383523Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 29 00:33:17.732411 containerd[1619]: time="2025-10-29T00:33:17.732396507Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 29 00:33:17.732411 containerd[1619]: time="2025-10-29T00:33:17.732406586Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 29 00:33:17.732496 containerd[1619]: time="2025-10-29T00:33:17.732421073Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 29 00:33:17.732496 containerd[1619]: time="2025-10-29T00:33:17.732439408Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 29 00:33:17.732669 containerd[1619]: time="2025-10-29T00:33:17.732621820Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 29 00:33:17.732727 containerd[1619]: time="2025-10-29T00:33:17.732697442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 29 00:33:17.732754 containerd[1619]: time="2025-10-29T00:33:17.732732117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 29 00:33:17.732754 containerd[1619]: time="2025-10-29T00:33:17.732746534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 29 00:33:17.732791 containerd[1619]: time="2025-10-29T00:33:17.732757344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 29 00:33:17.732791 containerd[1619]: 
time="2025-10-29T00:33:17.732768395Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 29 00:33:17.732791 containerd[1619]: time="2025-10-29T00:33:17.732779796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 29 00:33:17.732872 containerd[1619]: time="2025-10-29T00:33:17.732796487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 29 00:33:17.732872 containerd[1619]: time="2025-10-29T00:33:17.732807678Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 29 00:33:17.732872 containerd[1619]: time="2025-10-29T00:33:17.732850679Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 29 00:33:17.732872 containerd[1619]: time="2025-10-29T00:33:17.732866419Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 29 00:33:17.733172 containerd[1619]: time="2025-10-29T00:33:17.732994669Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 29 00:33:17.733172 containerd[1619]: time="2025-10-29T00:33:17.733016771Z" level=info msg="Start snapshots syncer" Oct 29 00:33:17.733172 containerd[1619]: time="2025-10-29T00:33:17.733074699Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 29 00:33:17.734316 containerd[1619]: time="2025-10-29T00:33:17.734250825Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 29 00:33:17.734721 containerd[1619]: time="2025-10-29T00:33:17.734355391Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 29 00:33:17.735064 
systemd[1]: issuegen.service: Deactivated successfully. Oct 29 00:33:17.735411 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 29 00:33:17.737626 containerd[1619]: time="2025-10-29T00:33:17.737462899Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 29 00:33:17.738015 containerd[1619]: time="2025-10-29T00:33:17.737888547Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 29 00:33:17.738015 containerd[1619]: time="2025-10-29T00:33:17.737976953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 29 00:33:17.738064 containerd[1619]: time="2025-10-29T00:33:17.738040622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 29 00:33:17.738064 containerd[1619]: time="2025-10-29T00:33:17.738054729Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 29 00:33:17.738103 containerd[1619]: time="2025-10-29T00:33:17.738070308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 29 00:33:17.740483 containerd[1619]: time="2025-10-29T00:33:17.738150448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 29 00:33:17.740483 containerd[1619]: time="2025-10-29T00:33:17.738173571Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 29 00:33:17.740483 containerd[1619]: time="2025-10-29T00:33:17.738262298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 29 00:33:17.740483 containerd[1619]: time="2025-10-29T00:33:17.738275613Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 29 00:33:17.740483 containerd[1619]: time="2025-10-29T00:33:17.738285972Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 29 00:33:17.740483 containerd[1619]: time="2025-10-29T00:33:17.739593104Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 00:33:17.740483 containerd[1619]: time="2025-10-29T00:33:17.739743025Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 00:33:17.740483 containerd[1619]: time="2025-10-29T00:33:17.739754456Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 00:33:17.740483 containerd[1619]: time="2025-10-29T00:33:17.739763854Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 00:33:17.740483 containerd[1619]: time="2025-10-29T00:33:17.739771899Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 29 00:33:17.740483 containerd[1619]: time="2025-10-29T00:33:17.739781276Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 29 00:33:17.740483 containerd[1619]: time="2025-10-29T00:33:17.739817164Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 29 00:33:17.740483 containerd[1619]: 
time="2025-10-29T00:33:17.739861156Z" level=info msg="runtime interface created" Oct 29 00:33:17.740483 containerd[1619]: time="2025-10-29T00:33:17.739884520Z" level=info msg="created NRI interface" Oct 29 00:33:17.740483 containerd[1619]: time="2025-10-29T00:33:17.739893156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 29 00:33:17.740870 containerd[1619]: time="2025-10-29T00:33:17.739909307Z" level=info msg="Connect containerd service" Oct 29 00:33:17.740870 containerd[1619]: time="2025-10-29T00:33:17.739939874Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 29 00:33:17.741339 containerd[1619]: time="2025-10-29T00:33:17.741305485Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 00:33:17.742862 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 29 00:33:17.891611 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 29 00:33:17.974210 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 29 00:33:17.981396 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 29 00:33:17.984960 systemd[1]: Reached target getty.target - Login Prompts. Oct 29 00:33:18.078184 containerd[1619]: time="2025-10-29T00:33:18.078091299Z" level=info msg="Start subscribing containerd event" Oct 29 00:33:18.078380 containerd[1619]: time="2025-10-29T00:33:18.078213939Z" level=info msg="Start recovering state" Oct 29 00:33:18.078510 containerd[1619]: time="2025-10-29T00:33:18.078444731Z" level=info msg="Start event monitor" Oct 29 00:33:18.078510 containerd[1619]: time="2025-10-29T00:33:18.078478234Z" level=info msg="Start cni network conf syncer for default" Oct 29 00:33:18.078510 containerd[1619]: time="2025-10-29T00:33:18.078500346Z" level=info msg="Start streaming server" Oct 29 00:33:18.078605 containerd[1619]: time="2025-10-29T00:33:18.078524531Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 29 00:33:18.078605 containerd[1619]: time="2025-10-29T00:33:18.078542805Z" level=info msg="runtime interface starting up..." Oct 29 00:33:18.078605 containerd[1619]: time="2025-10-29T00:33:18.078553285Z" level=info msg="starting plugins..." Oct 29 00:33:18.078605 containerd[1619]: time="2025-10-29T00:33:18.078591136Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 29 00:33:18.082314 containerd[1619]: time="2025-10-29T00:33:18.082099285Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 29 00:33:18.082314 containerd[1619]: time="2025-10-29T00:33:18.082196708Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 29 00:33:18.084129 systemd[1]: Started containerd.service - containerd container runtime. Oct 29 00:33:18.084538 containerd[1619]: time="2025-10-29T00:33:18.084484288Z" level=info msg="containerd successfully booted in 0.377033s" Oct 29 00:33:18.091960 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 43976 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:33:18.095423 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:33:18.105057 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Oct 29 00:33:18.121927 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 29 00:33:18.136053 systemd-logind[1587]: New session 1 of user core. Oct 29 00:33:18.152417 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 29 00:33:18.167778 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 29 00:33:18.187573 (systemd)[1710]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:33:18.191759 systemd-logind[1587]: New session c1 of user core. Oct 29 00:33:18.235224 tar[1600]: linux-amd64/README.md Oct 29 00:33:18.373153 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 29 00:33:18.523889 systemd[1710]: Queued start job for default target default.target. Oct 29 00:33:18.700434 systemd[1710]: Created slice app.slice - User Application Slice. Oct 29 00:33:18.700467 systemd[1710]: Reached target paths.target - Paths. Oct 29 00:33:18.700515 systemd[1710]: Reached target timers.target - Timers. Oct 29 00:33:18.702490 systemd[1710]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 29 00:33:18.719027 systemd[1710]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 29 00:33:18.719187 systemd[1710]: Reached target sockets.target - Sockets. Oct 29 00:33:18.719246 systemd[1710]: Reached target basic.target - Basic System. Oct 29 00:33:18.719294 systemd[1710]: Reached target default.target - Main User Target. Oct 29 00:33:18.719335 systemd[1710]: Startup finished in 510ms. Oct 29 00:33:18.719683 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 29 00:33:18.723321 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 29 00:33:18.792178 systemd[1]: Started sshd@1-10.0.0.10:22-10.0.0.1:43982.service - OpenSSH per-connection server daemon (10.0.0.1:43982). Oct 29 00:33:18.850351 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 43982 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:33:18.852291 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:33:18.858192 systemd-logind[1587]: New session 2 of user core. Oct 29 00:33:18.872889 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 29 00:33:19.117847 sshd[1727]: Connection closed by 10.0.0.1 port 43982 Oct 29 00:33:19.119809 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Oct 29 00:33:19.129881 systemd[1]: sshd@1-10.0.0.10:22-10.0.0.1:43982.service: Deactivated successfully. Oct 29 00:33:19.132424 systemd[1]: session-2.scope: Deactivated successfully. Oct 29 00:33:19.133269 systemd-logind[1587]: Session 2 logged out. Waiting for processes to exit. Oct 29 00:33:19.137252 systemd[1]: Started sshd@2-10.0.0.10:22-10.0.0.1:43986.service - OpenSSH per-connection server daemon (10.0.0.1:43986). Oct 29 00:33:19.140495 systemd-logind[1587]: Removed session 2. Oct 29 00:33:19.210904 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 43986 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:33:19.213014 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:33:19.218966 systemd-logind[1587]: New session 3 of user core. Oct 29 00:33:19.225874 systemd[1]: Started session-3.scope - Session 3 of User core. 
Oct 29 00:33:19.325853 sshd[1736]: Connection closed by 10.0.0.1 port 43986 Oct 29 00:33:19.326228 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Oct 29 00:33:19.335100 systemd[1]: sshd@2-10.0.0.10:22-10.0.0.1:43986.service: Deactivated successfully. Oct 29 00:33:19.338756 systemd[1]: session-3.scope: Deactivated successfully. Oct 29 00:33:19.340046 systemd-logind[1587]: Session 3 logged out. Waiting for processes to exit. Oct 29 00:33:19.342189 systemd-logind[1587]: Removed session 3. Oct 29 00:33:19.813063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:33:19.815871 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 29 00:33:19.817830 systemd[1]: Startup finished in 3.288s (kernel) + 7.877s (initrd) + 6.768s (userspace) = 17.935s. Oct 29 00:33:19.827152 (kubelet)[1746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 00:33:20.813766 kubelet[1746]: E1029 00:33:20.813647 1746 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 00:33:20.818171 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 00:33:20.818401 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 00:33:20.818924 systemd[1]: kubelet.service: Consumed 2.873s CPU time, 268.9M memory peak. Oct 29 00:33:29.338853 systemd[1]: Started sshd@3-10.0.0.10:22-10.0.0.1:42662.service - OpenSSH per-connection server daemon (10.0.0.1:42662). Oct 29 00:33:29.398758 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 42662 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:33:29.400683 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:33:29.406177 systemd-logind[1587]: New session 4 of user core. Oct 29 00:33:29.415802 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 29 00:33:29.470519 sshd[1762]: Connection closed by 10.0.0.1 port 42662 Oct 29 00:33:29.470914 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Oct 29 00:33:29.486051 systemd[1]: sshd@3-10.0.0.10:22-10.0.0.1:42662.service: Deactivated successfully. Oct 29 00:33:29.488047 systemd[1]: session-4.scope: Deactivated successfully. Oct 29 00:33:29.489068 systemd-logind[1587]: Session 4 logged out. Waiting for processes to exit. Oct 29 00:33:29.491920 systemd[1]: Started sshd@4-10.0.0.10:22-10.0.0.1:42674.service - OpenSSH per-connection server daemon (10.0.0.1:42674). Oct 29 00:33:29.492469 systemd-logind[1587]: Removed session 4. Oct 29 00:33:29.557433 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 42674 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:33:29.559162 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:33:29.564340 systemd-logind[1587]: New session 5 of user core. Oct 29 00:33:29.574815 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 29 00:33:29.626885 sshd[1771]: Connection closed by 10.0.0.1 port 42674 Oct 29 00:33:29.627256 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Oct 29 00:33:29.640357 systemd[1]: sshd@4-10.0.0.10:22-10.0.0.1:42674.service: Deactivated successfully. Oct 29 00:33:29.642264 systemd[1]: session-5.scope: Deactivated successfully. Oct 29 00:33:29.643127 systemd-logind[1587]: Session 5 logged out. Waiting for processes to exit. Oct 29 00:33:29.646026 systemd[1]: Started sshd@5-10.0.0.10:22-10.0.0.1:42682.service - OpenSSH per-connection server daemon (10.0.0.1:42682). Oct 29 00:33:29.646652 systemd-logind[1587]: Removed session 5. Oct 29 00:33:29.696525 sshd[1777]: Accepted publickey for core from 10.0.0.1 port 42682 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:33:29.698056 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:33:29.703096 systemd-logind[1587]: New session 6 of user core. Oct 29 00:33:29.712757 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 29 00:33:29.769188 sshd[1780]: Connection closed by 10.0.0.1 port 42682 Oct 29 00:33:29.769673 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Oct 29 00:33:29.787560 systemd[1]: sshd@5-10.0.0.10:22-10.0.0.1:42682.service: Deactivated successfully. Oct 29 00:33:29.789615 systemd[1]: session-6.scope: Deactivated successfully. Oct 29 00:33:29.790540 systemd-logind[1587]: Session 6 logged out. Waiting for processes to exit. Oct 29 00:33:29.793575 systemd[1]: Started sshd@6-10.0.0.10:22-10.0.0.1:42696.service - OpenSSH per-connection server daemon (10.0.0.1:42696). Oct 29 00:33:29.794499 systemd-logind[1587]: Removed session 6. Oct 29 00:33:29.861789 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 42696 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:33:29.863415 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:33:29.868476 systemd-logind[1587]: New session 7 of user core. Oct 29 00:33:29.883795 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 29 00:33:29.950247 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 29 00:33:29.950576 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 00:33:29.972119 sudo[1791]: pam_unix(sudo:session): session closed for user root Oct 29 00:33:29.974708 sshd[1790]: Connection closed by 10.0.0.1 port 42696 Oct 29 00:33:29.975196 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Oct 29 00:33:29.989371 systemd[1]: sshd@6-10.0.0.10:22-10.0.0.1:42696.service: Deactivated successfully. Oct 29 00:33:29.991565 systemd[1]: session-7.scope: Deactivated successfully. Oct 29 00:33:29.992705 systemd-logind[1587]: Session 7 logged out. Waiting for processes to exit. Oct 29 00:33:29.996662 systemd[1]: Started sshd@7-10.0.0.10:22-10.0.0.1:42704.service - OpenSSH per-connection server daemon (10.0.0.1:42704). Oct 29 00:33:29.997343 systemd-logind[1587]: Removed session 7. Oct 29 00:33:30.059386 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 42704 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:33:30.060942 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:33:30.066090 systemd-logind[1587]: New session 8 of user core. 
Oct 29 00:33:30.075839 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 29 00:33:30.132864 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 29 00:33:30.133208 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 00:33:30.141261 sudo[1802]: pam_unix(sudo:session): session closed for user root Oct 29 00:33:30.150721 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 29 00:33:30.151078 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 00:33:30.164348 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 00:33:30.211734 augenrules[1824]: No rules Oct 29 00:33:30.213931 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 00:33:30.214329 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 00:33:30.215608 sudo[1801]: pam_unix(sudo:session): session closed for user root Oct 29 00:33:30.217558 sshd[1800]: Connection closed by 10.0.0.1 port 42704 Oct 29 00:33:30.217995 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Oct 29 00:33:30.227681 systemd[1]: sshd@7-10.0.0.10:22-10.0.0.1:42704.service: Deactivated successfully. Oct 29 00:33:30.229879 systemd[1]: session-8.scope: Deactivated successfully. Oct 29 00:33:30.230925 systemd-logind[1587]: Session 8 logged out. Waiting for processes to exit. Oct 29 00:33:30.234428 systemd[1]: Started sshd@8-10.0.0.10:22-10.0.0.1:42706.service - OpenSSH per-connection server daemon (10.0.0.1:42706). Oct 29 00:33:30.235203 systemd-logind[1587]: Removed session 8. Oct 29 00:33:30.287274 sshd[1833]: Accepted publickey for core from 10.0.0.1 port 42706 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:33:30.289114 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:33:30.294340 systemd-logind[1587]: New session 9 of user core. Oct 29 00:33:30.303865 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 29 00:33:30.358912 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 29 00:33:30.359231 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 00:33:31.069188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 29 00:33:31.071535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:33:31.119423 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 29 00:33:31.139215 (dockerd)[1860]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 29 00:33:31.378705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 29 00:33:31.441277 (kubelet)[1872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 00:33:31.525311 kubelet[1872]: E1029 00:33:31.525179 1872 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 00:33:31.534039 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 00:33:31.534262 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 00:33:31.534716 systemd[1]: kubelet.service: Consumed 349ms CPU time, 110.4M memory peak. Oct 29 00:33:31.666217 dockerd[1860]: time="2025-10-29T00:33:31.666006279Z" level=info msg="Starting up" Oct 29 00:33:31.667158 dockerd[1860]: time="2025-10-29T00:33:31.667089982Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 29 00:33:31.707129 dockerd[1860]: time="2025-10-29T00:33:31.707066802Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 29 00:33:32.107104 dockerd[1860]: time="2025-10-29T00:33:32.106911484Z" level=info msg="Loading containers: start." Oct 29 00:33:32.121675 kernel: Initializing XFRM netlink socket Oct 29 00:33:32.439612 systemd-networkd[1518]: docker0: Link UP Oct 29 00:33:32.446297 dockerd[1860]: time="2025-10-29T00:33:32.446233674Z" level=info msg="Loading containers: done." Oct 29 00:33:32.554626 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck519453456-merged.mount: Deactivated successfully. Oct 29 00:33:32.557709 dockerd[1860]: time="2025-10-29T00:33:32.557617048Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 29 00:33:32.557823 dockerd[1860]: time="2025-10-29T00:33:32.557788730Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 29 00:33:32.557946 dockerd[1860]: time="2025-10-29T00:33:32.557915007Z" level=info msg="Initializing buildkit" Oct 29 00:33:32.596047 dockerd[1860]: time="2025-10-29T00:33:32.595990963Z" level=info msg="Completed buildkit initialization" Oct 29 00:33:32.602765 dockerd[1860]: time="2025-10-29T00:33:32.602722096Z" level=info msg="Daemon has completed initialization" Oct 29 00:33:32.602892 dockerd[1860]: time="2025-10-29T00:33:32.602797327Z" level=info msg="API listen on /run/docker.sock" Oct 29 00:33:32.603121 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 29 00:33:33.612539 containerd[1619]: time="2025-10-29T00:33:33.612422615Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 29 00:33:36.022832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount138137366.mount: Deactivated successfully. 
Oct 29 00:33:37.729460 containerd[1619]: time="2025-10-29T00:33:37.729360102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:37.730210 containerd[1619]: time="2025-10-29T00:33:37.730140906Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Oct 29 00:33:37.731482 containerd[1619]: time="2025-10-29T00:33:37.731425946Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:37.734412 containerd[1619]: time="2025-10-29T00:33:37.734361041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:37.735451 containerd[1619]: time="2025-10-29T00:33:37.735389259Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 4.12284169s" Oct 29 00:33:37.735451 containerd[1619]: time="2025-10-29T00:33:37.735448280Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Oct 29 00:33:37.736564 containerd[1619]: time="2025-10-29T00:33:37.736377723Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 29 00:33:39.705846 containerd[1619]: time="2025-10-29T00:33:39.705746432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:39.706545 containerd[1619]: time="2025-10-29T00:33:39.706490347Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Oct 29 00:33:39.707659 containerd[1619]: time="2025-10-29T00:33:39.707610979Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:39.710277 containerd[1619]: time="2025-10-29T00:33:39.710245519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:39.711773 containerd[1619]: time="2025-10-29T00:33:39.711734231Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.975328526s" Oct 29 00:33:39.711773 containerd[1619]: time="2025-10-29T00:33:39.711765710Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Oct 29 00:33:39.712251 
containerd[1619]: time="2025-10-29T00:33:39.712224410Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 29 00:33:41.283054 containerd[1619]: time="2025-10-29T00:33:41.282948268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:41.284093 containerd[1619]: time="2025-10-29T00:33:41.284033904Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Oct 29 00:33:41.285571 containerd[1619]: time="2025-10-29T00:33:41.285528187Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:41.288465 containerd[1619]: time="2025-10-29T00:33:41.288381628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:41.289650 containerd[1619]: time="2025-10-29T00:33:41.289581298Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.577322834s" Oct 29 00:33:41.289650 containerd[1619]: time="2025-10-29T00:33:41.289628045Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Oct 29 00:33:41.290184 containerd[1619]: time="2025-10-29T00:33:41.290142049Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 29 00:33:41.576866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 29 00:33:41.578730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:33:41.840453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:33:41.867208 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 00:33:41.946833 kubelet[2165]: E1029 00:33:41.946647 2165 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 00:33:41.951892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 00:33:41.952321 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 00:33:41.953071 systemd[1]: kubelet.service: Consumed 336ms CPU time, 110.5M memory peak. Oct 29 00:33:43.652054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount16924938.mount: Deactivated successfully. 
Oct 29 00:33:44.245509 containerd[1619]: time="2025-10-29T00:33:44.245426653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:44.246308 containerd[1619]: time="2025-10-29T00:33:44.246230270Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Oct 29 00:33:44.247341 containerd[1619]: time="2025-10-29T00:33:44.247295277Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:44.249299 containerd[1619]: time="2025-10-29T00:33:44.249241717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:44.249794 containerd[1619]: time="2025-10-29T00:33:44.249727859Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.959531728s" Oct 29 00:33:44.249794 containerd[1619]: time="2025-10-29T00:33:44.249777452Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Oct 29 00:33:44.250493 containerd[1619]: time="2025-10-29T00:33:44.250297487Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 29 00:33:44.788274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount537075642.mount: Deactivated successfully. 
Oct 29 00:33:46.824039 containerd[1619]: time="2025-10-29T00:33:46.823956877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:46.824790 containerd[1619]: time="2025-10-29T00:33:46.824709128Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Oct 29 00:33:46.826214 containerd[1619]: time="2025-10-29T00:33:46.826156292Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:46.829293 containerd[1619]: time="2025-10-29T00:33:46.829238623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:46.830450 containerd[1619]: time="2025-10-29T00:33:46.830396444Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.580067007s" Oct 29 00:33:46.830450 containerd[1619]: time="2025-10-29T00:33:46.830432812Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Oct 29 00:33:46.830968 containerd[1619]: time="2025-10-29T00:33:46.830947267Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 29 00:33:47.270572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount723135822.mount: Deactivated successfully. 
Oct 29 00:33:47.278984 containerd[1619]: time="2025-10-29T00:33:47.278896361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 00:33:47.279761 containerd[1619]: time="2025-10-29T00:33:47.279734243Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 29 00:33:47.281016 containerd[1619]: time="2025-10-29T00:33:47.280983305Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 00:33:47.283715 containerd[1619]: time="2025-10-29T00:33:47.283688017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 00:33:47.284720 containerd[1619]: time="2025-10-29T00:33:47.284662866Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 453.660535ms" Oct 29 00:33:47.284773 containerd[1619]: time="2025-10-29T00:33:47.284724932Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 29 00:33:47.285327 containerd[1619]: time="2025-10-29T00:33:47.285296694Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 29 00:33:47.814319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4106795838.mount: Deactivated successfully. 
Oct 29 00:33:50.910811 containerd[1619]: time="2025-10-29T00:33:50.910744776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:50.911807 containerd[1619]: time="2025-10-29T00:33:50.911760836Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Oct 29 00:33:50.913387 containerd[1619]: time="2025-10-29T00:33:50.913326669Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:50.916343 containerd[1619]: time="2025-10-29T00:33:50.916288631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:33:50.917559 containerd[1619]: time="2025-10-29T00:33:50.917522437Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.632187722s" Oct 29 00:33:50.917623 containerd[1619]: time="2025-10-29T00:33:50.917558527Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Oct 29 00:33:52.077022 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 29 00:33:52.079342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:33:52.319532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:33:52.323785 (kubelet)[2327]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 00:33:52.361649 kubelet[2327]: E1029 00:33:52.361443 2327 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 00:33:52.365836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 00:33:52.366073 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 00:33:52.366543 systemd[1]: kubelet.service: Consumed 224ms CPU time, 109.5M memory peak. Oct 29 00:33:53.766233 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:33:53.766455 systemd[1]: kubelet.service: Consumed 224ms CPU time, 109.5M memory peak. Oct 29 00:33:53.768794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:33:53.795457 systemd[1]: Reload requested from client PID 2340 ('systemctl') (unit session-9.scope)... Oct 29 00:33:53.795487 systemd[1]: Reloading... Oct 29 00:33:53.886709 zram_generator::config[2385]: No configuration found. Oct 29 00:33:54.770195 systemd[1]: Reloading finished in 974 ms. Oct 29 00:33:54.827396 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 29 00:33:54.827504 systemd[1]: kubelet.service: Failed with result 'signal'. 
Oct 29 00:33:54.827868 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:33:54.827924 systemd[1]: kubelet.service: Consumed 180ms CPU time, 98.3M memory peak. Oct 29 00:33:54.829770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:33:55.050338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:33:55.072063 (kubelet)[2433]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 00:33:55.122030 kubelet[2433]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 00:33:55.122030 kubelet[2433]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 00:33:55.122030 kubelet[2433]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 00:33:55.122520 kubelet[2433]: I1029 00:33:55.122063 2433 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 00:33:56.187828 kubelet[2433]: I1029 00:33:56.187754 2433 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 29 00:33:56.187828 kubelet[2433]: I1029 00:33:56.187792 2433 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 00:33:56.188445 kubelet[2433]: I1029 00:33:56.188010 2433 server.go:956] "Client rotation is on, will bootstrap in background" Oct 29 00:33:57.184781 kubelet[2433]: E1029 00:33:57.184667 2433 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 29 00:33:57.185171 kubelet[2433]: I1029 00:33:57.185138 2433 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 00:33:57.191911 kubelet[2433]: I1029 00:33:57.191873 2433 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 00:33:57.201106 kubelet[2433]: I1029 00:33:57.201065 2433 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 00:33:57.201397 kubelet[2433]: I1029 00:33:57.201363 2433 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 00:33:57.201548 kubelet[2433]: I1029 00:33:57.201391 2433 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 00:33:57.201721 kubelet[2433]: I1029 00:33:57.201560 2433 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 00:33:57.201721 kubelet[2433]: I1029 00:33:57.201572 2433 container_manager_linux.go:303] "Creating device plugin manager" Oct 29 00:33:57.202490 kubelet[2433]: I1029 00:33:57.202458 2433 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:33:57.204892 kubelet[2433]: I1029 00:33:57.204859 2433 kubelet.go:480] "Attempting to sync node with API server" Oct 29 00:33:57.204953 kubelet[2433]: I1029 00:33:57.204899 2433 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 00:33:57.207658 kubelet[2433]: I1029 00:33:57.206417 2433 kubelet.go:386] "Adding apiserver pod source" Oct 29 00:33:57.207658 kubelet[2433]: I1029 00:33:57.206447 2433 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 00:33:57.212033 kubelet[2433]: E1029 00:33:57.211885 2433 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 29 00:33:57.214765 kubelet[2433]: I1029 00:33:57.214520 2433 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 00:33:57.215065 kubelet[2433]: I1029 00:33:57.215031 2433 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 29 
00:33:57.215546 kubelet[2433]: E1029 00:33:57.215516 2433 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 29 00:33:57.216066 kubelet[2433]: W1029 00:33:57.216032 2433 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 29 00:33:57.218924 kubelet[2433]: I1029 00:33:57.218904 2433 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 00:33:57.218990 kubelet[2433]: I1029 00:33:57.218952 2433 server.go:1289] "Started kubelet" Oct 29 00:33:57.219123 kubelet[2433]: I1029 00:33:57.219076 2433 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 00:33:57.324027 kubelet[2433]: I1029 00:33:57.323865 2433 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 00:33:57.324027 kubelet[2433]: I1029 00:33:57.323930 2433 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 00:33:57.324936 kubelet[2433]: I1029 00:33:57.324905 2433 server.go:317] "Adding debug handlers to kubelet server" Oct 29 00:33:57.325136 kubelet[2433]: I1029 00:33:57.325000 2433 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 00:33:57.326671 kubelet[2433]: I1029 00:33:57.325803 2433 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 00:33:57.326671 kubelet[2433]: E1029 00:33:57.326499 2433 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:33:57.326671 kubelet[2433]: I1029 00:33:57.326527 2433 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 00:33:57.327521 kubelet[2433]: I1029 00:33:57.327506 2433 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 00:33:57.327681 kubelet[2433]: I1029 00:33:57.327668 2433 reconciler.go:26] "Reconciler: start to sync state" Oct 29 00:33:57.328180 kubelet[2433]: E1029 00:33:57.328109 2433 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 00:33:57.328180 kubelet[2433]: E1029 00:33:57.326804 2433 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872cf07edefcafa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-29 00:33:57.21892121 +0000 UTC m=+2.141844390,LastTimestamp:2025-10-29 00:33:57.21892121 +0000 UTC m=+2.141844390,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 29 00:33:57.328588 kubelet[2433]: E1029 00:33:57.328474 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="200ms" Oct 29 00:33:57.328736 kubelet[2433]: I1029 00:33:57.328703 2433 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 00:33:57.328946 kubelet[2433]: E1029 00:33:57.328911 2433 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 29 00:33:57.329674 kubelet[2433]: I1029 00:33:57.329623 2433 factory.go:223] Registration of the containerd container factory successfully Oct 29 00:33:57.329674 kubelet[2433]: I1029 00:33:57.329666 2433 factory.go:223] Registration of the systemd container factory successfully Oct 29 00:33:57.350345 kubelet[2433]: I1029 00:33:57.350294 2433 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 00:33:57.350345 kubelet[2433]: I1029 00:33:57.350328 2433 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 00:33:57.350345 kubelet[2433]: I1029 00:33:57.350344 2433 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:33:57.383152 kubelet[2433]: E1029 00:33:57.382972 2433 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872cf07edefcafa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-29 00:33:57.21892121 +0000 UTC m=+2.141844390,LastTimestamp:2025-10-29 00:33:57.21892121 +0000 UTC m=+2.141844390,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 29 00:33:57.416998 kubelet[2433]: I1029 00:33:57.416911 2433 kubelet_network_linux.go:49] 
"Initialized iptables rules." protocol="IPv4" Oct 29 00:33:57.420230 kubelet[2433]: I1029 00:33:57.418928 2433 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 29 00:33:57.420230 kubelet[2433]: I1029 00:33:57.418958 2433 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 29 00:33:57.420230 kubelet[2433]: I1029 00:33:57.418988 2433 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 29 00:33:57.420230 kubelet[2433]: I1029 00:33:57.418999 2433 kubelet.go:2436] "Starting kubelet main sync loop" Oct 29 00:33:57.420230 kubelet[2433]: E1029 00:33:57.419060 2433 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 00:33:57.420230 kubelet[2433]: E1029 00:33:57.419704 2433 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 29 00:33:57.427255 kubelet[2433]: E1029 00:33:57.427199 2433 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:33:57.436728 kubelet[2433]: I1029 00:33:57.436580 2433 policy_none.go:49] "None policy: Start" Oct 29 00:33:57.436728 kubelet[2433]: I1029 00:33:57.436719 2433 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 00:33:57.436836 kubelet[2433]: I1029 00:33:57.436744 2433 state_mem.go:35] "Initializing new in-memory state store" Oct 29 00:33:57.446395 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 29 00:33:57.461103 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 29 00:33:57.465548 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 29 00:33:57.477236 kubelet[2433]: E1029 00:33:57.477171 2433 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 00:33:57.477573 kubelet[2433]: I1029 00:33:57.477547 2433 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 00:33:57.477670 kubelet[2433]: I1029 00:33:57.477569 2433 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 00:33:57.478078 kubelet[2433]: I1029 00:33:57.477985 2433 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 00:33:57.479391 kubelet[2433]: E1029 00:33:57.479354 2433 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 29 00:33:57.479472 kubelet[2433]: E1029 00:33:57.479414 2433 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 29 00:33:57.529189 kubelet[2433]: I1029 00:33:57.529110 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf62dcbcc1549b97832d179b981cc7ea-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cf62dcbcc1549b97832d179b981cc7ea\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:33:57.529189 kubelet[2433]: I1029 00:33:57.529151 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:33:57.529382 kubelet[2433]: I1029 00:33:57.529213 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:33:57.529382 kubelet[2433]: I1029 00:33:57.529240 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:33:57.529382 kubelet[2433]: I1029 00:33:57.529267 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:33:57.529382 kubelet[2433]: I1029 00:33:57.529290 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:33:57.529382 kubelet[2433]: I1029 00:33:57.529320 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf62dcbcc1549b97832d179b981cc7ea-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf62dcbcc1549b97832d179b981cc7ea\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:33:57.529601 kubelet[2433]: I1029 00:33:57.529342 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf62dcbcc1549b97832d179b981cc7ea-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf62dcbcc1549b97832d179b981cc7ea\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:33:57.529964 kubelet[2433]: E1029 00:33:57.529865 2433 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="400ms" Oct 29 00:33:57.532983 systemd[1]: Created slice kubepods-burstable-podcf62dcbcc1549b97832d179b981cc7ea.slice - libcontainer container kubepods-burstable-podcf62dcbcc1549b97832d179b981cc7ea.slice. Oct 29 00:33:57.541912 kubelet[2433]: E1029 00:33:57.541854 2433 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:33:57.545826 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Oct 29 00:33:57.557394 kubelet[2433]: E1029 00:33:57.557349 2433 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:33:57.562083 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. Oct 29 00:33:57.565839 kubelet[2433]: E1029 00:33:57.565771 2433 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:33:57.579487 kubelet[2433]: I1029 00:33:57.579415 2433 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:33:57.579898 kubelet[2433]: E1029 00:33:57.579831 2433 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Oct 29 00:33:57.630505 kubelet[2433]: I1029 00:33:57.630423 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 29 00:33:57.782220 kubelet[2433]: I1029 00:33:57.782066 2433 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:33:57.782680 kubelet[2433]: E1029 00:33:57.782507 2433 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Oct 29 00:33:57.842895 kubelet[2433]: E1029 00:33:57.842820 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:33:57.843789 containerd[1619]: time="2025-10-29T00:33:57.843714670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cf62dcbcc1549b97832d179b981cc7ea,Namespace:kube-system,Attempt:0,}" Oct 29 00:33:57.858486 kubelet[2433]: E1029 00:33:57.858087 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:33:57.858983 containerd[1619]: time="2025-10-29T00:33:57.858929177Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Oct 29 00:33:57.867127 kubelet[2433]: E1029 00:33:57.867068 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:33:57.868098 containerd[1619]: time="2025-10-29T00:33:57.868017147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Oct 29 00:33:57.897057 containerd[1619]: time="2025-10-29T00:33:57.896971497Z" level=info msg="connecting to shim 600c79e7f5d681d8513d32b5b5b9907efdb4dfd4790acd4fda6a875ca3822b79" address="unix:///run/containerd/s/6b9dc248d4fd467836b6ea16106e20698a3f479644a08e648e295f40e775b0ee" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:33:57.906794 containerd[1619]: time="2025-10-29T00:33:57.905822476Z" level=info msg="connecting to shim ebb10ccf0f4a65f34147f9331dd598399ab70c6f61c10e1ba28215bf7ba3f11a" address="unix:///run/containerd/s/a38d61e7150c18b52db7bad0f01810c01a5309f1231d7e5a8c5a81f9a41c39d1" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:33:57.930104 containerd[1619]: time="2025-10-29T00:33:57.930036786Z" level=info msg="connecting to shim dda86f059b5151c166c4acf94ee73d26db91f529582105e330b67c07cff7ce1b" address="unix:///run/containerd/s/35063bcd3ad2f9d0d95475f4bb355558000dcedeb6d67ff2b972ddada97832ef" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:33:57.931209 kubelet[2433]: E1029 00:33:57.931122 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="800ms" Oct 29 00:33:57.946822 systemd[1]: Started cri-containerd-600c79e7f5d681d8513d32b5b5b9907efdb4dfd4790acd4fda6a875ca3822b79.scope - libcontainer container 600c79e7f5d681d8513d32b5b5b9907efdb4dfd4790acd4fda6a875ca3822b79. Oct 29 00:33:57.951895 systemd[1]: Started cri-containerd-ebb10ccf0f4a65f34147f9331dd598399ab70c6f61c10e1ba28215bf7ba3f11a.scope - libcontainer container ebb10ccf0f4a65f34147f9331dd598399ab70c6f61c10e1ba28215bf7ba3f11a. Oct 29 00:33:58.007809 systemd[1]: Started cri-containerd-dda86f059b5151c166c4acf94ee73d26db91f529582105e330b67c07cff7ce1b.scope - libcontainer container dda86f059b5151c166c4acf94ee73d26db91f529582105e330b67c07cff7ce1b. 
Oct 29 00:33:58.035516 containerd[1619]: time="2025-10-29T00:33:58.035317816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cf62dcbcc1549b97832d179b981cc7ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"600c79e7f5d681d8513d32b5b5b9907efdb4dfd4790acd4fda6a875ca3822b79\"" Oct 29 00:33:58.038041 kubelet[2433]: E1029 00:33:58.038017 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:33:58.046091 containerd[1619]: time="2025-10-29T00:33:58.045946871Z" level=info msg="CreateContainer within sandbox \"600c79e7f5d681d8513d32b5b5b9907efdb4dfd4790acd4fda6a875ca3822b79\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 29 00:33:58.058661 containerd[1619]: time="2025-10-29T00:33:58.057746832Z" level=info msg="Container 86967909faac2f69a36b52d77ec4b170cd9787ec285a9820bbd3bceb0d4986f3: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:33:58.059529 kubelet[2433]: E1029 00:33:58.059483 2433 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 29 00:33:58.061151 containerd[1619]: time="2025-10-29T00:33:58.061091791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebb10ccf0f4a65f34147f9331dd598399ab70c6f61c10e1ba28215bf7ba3f11a\"" Oct 29 00:33:58.062609 kubelet[2433]: E1029 00:33:58.062580 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:33:58.070152 containerd[1619]: time="2025-10-29T00:33:58.070090376Z" level=info msg="CreateContainer within sandbox \"ebb10ccf0f4a65f34147f9331dd598399ab70c6f61c10e1ba28215bf7ba3f11a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 29 00:33:58.070466 containerd[1619]: time="2025-10-29T00:33:58.070400145Z" level=info msg="CreateContainer within sandbox \"600c79e7f5d681d8513d32b5b5b9907efdb4dfd4790acd4fda6a875ca3822b79\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"86967909faac2f69a36b52d77ec4b170cd9787ec285a9820bbd3bceb0d4986f3\"" Oct 29 00:33:58.071259 containerd[1619]: time="2025-10-29T00:33:58.071227146Z" level=info msg="StartContainer for \"86967909faac2f69a36b52d77ec4b170cd9787ec285a9820bbd3bceb0d4986f3\"" Oct 29 00:33:58.072791 containerd[1619]: time="2025-10-29T00:33:58.072757045Z" level=info msg="connecting to shim 86967909faac2f69a36b52d77ec4b170cd9787ec285a9820bbd3bceb0d4986f3" address="unix:///run/containerd/s/6b9dc248d4fd467836b6ea16106e20698a3f479644a08e648e295f40e775b0ee" protocol=ttrpc version=3 Oct 29 00:33:58.079629 containerd[1619]: time="2025-10-29T00:33:58.079587819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"dda86f059b5151c166c4acf94ee73d26db91f529582105e330b67c07cff7ce1b\"" Oct 29 00:33:58.080432 kubelet[2433]: E1029 00:33:58.080402 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:33:58.084184 containerd[1619]: time="2025-10-29T00:33:58.084126388Z" level=info msg="Container 495b2a6e46b25186f4b8f40dd34740b44e1db62afbec2d61b298d763e893d736: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:33:58.086033 containerd[1619]: time="2025-10-29T00:33:58.084919415Z" level=info msg="CreateContainer within sandbox \"dda86f059b5151c166c4acf94ee73d26db91f529582105e330b67c07cff7ce1b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 29 00:33:58.101136 containerd[1619]: time="2025-10-29T00:33:58.101081046Z" level=info msg="CreateContainer within sandbox \"ebb10ccf0f4a65f34147f9331dd598399ab70c6f61c10e1ba28215bf7ba3f11a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"495b2a6e46b25186f4b8f40dd34740b44e1db62afbec2d61b298d763e893d736\"" Oct 29 00:33:58.101998 systemd[1]: Started cri-containerd-86967909faac2f69a36b52d77ec4b170cd9787ec285a9820bbd3bceb0d4986f3.scope - libcontainer container 86967909faac2f69a36b52d77ec4b170cd9787ec285a9820bbd3bceb0d4986f3. Oct 29 00:33:58.102544 containerd[1619]: time="2025-10-29T00:33:58.102480837Z" level=info msg="StartContainer for \"495b2a6e46b25186f4b8f40dd34740b44e1db62afbec2d61b298d763e893d736\"" Oct 29 00:33:58.103909 containerd[1619]: time="2025-10-29T00:33:58.103859228Z" level=info msg="connecting to shim 495b2a6e46b25186f4b8f40dd34740b44e1db62afbec2d61b298d763e893d736" address="unix:///run/containerd/s/a38d61e7150c18b52db7bad0f01810c01a5309f1231d7e5a8c5a81f9a41c39d1" protocol=ttrpc version=3 Oct 29 00:33:58.112971 containerd[1619]: time="2025-10-29T00:33:58.112890525Z" level=info msg="Container f0bdb361c279362b9fe66cb790502eb572de356217e354c2d2a8da29cf2dd363: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:33:58.125570 containerd[1619]: time="2025-10-29T00:33:58.125514793Z" level=info msg="CreateContainer within sandbox \"dda86f059b5151c166c4acf94ee73d26db91f529582105e330b67c07cff7ce1b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f0bdb361c279362b9fe66cb790502eb572de356217e354c2d2a8da29cf2dd363\"" Oct 29 00:33:58.126709 containerd[1619]: time="2025-10-29T00:33:58.126682232Z" level=info msg="StartContainer for \"f0bdb361c279362b9fe66cb790502eb572de356217e354c2d2a8da29cf2dd363\"" Oct 29 00:33:58.127610 containerd[1619]: time="2025-10-29T00:33:58.127571573Z" level=info msg="connecting to shim f0bdb361c279362b9fe66cb790502eb572de356217e354c2d2a8da29cf2dd363" address="unix:///run/containerd/s/35063bcd3ad2f9d0d95475f4bb355558000dcedeb6d67ff2b972ddada97832ef" protocol=ttrpc version=3 Oct 29 00:33:58.131834 systemd[1]: Started cri-containerd-495b2a6e46b25186f4b8f40dd34740b44e1db62afbec2d61b298d763e893d736.scope - libcontainer container 495b2a6e46b25186f4b8f40dd34740b44e1db62afbec2d61b298d763e893d736. Oct 29 00:33:58.167895 systemd[1]: Started cri-containerd-f0bdb361c279362b9fe66cb790502eb572de356217e354c2d2a8da29cf2dd363.scope - libcontainer container f0bdb361c279362b9fe66cb790502eb572de356217e354c2d2a8da29cf2dd363. 
Oct 29 00:33:58.186547 kubelet[2433]: I1029 00:33:58.186495 2433 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:33:58.186871 kubelet[2433]: E1029 00:33:58.186841 2433 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Oct 29 00:33:58.187276 containerd[1619]: time="2025-10-29T00:33:58.187196855Z" level=info msg="StartContainer for \"86967909faac2f69a36b52d77ec4b170cd9787ec285a9820bbd3bceb0d4986f3\" returns successfully" Oct 29 00:33:58.227941 containerd[1619]: time="2025-10-29T00:33:58.227873968Z" level=info msg="StartContainer for \"495b2a6e46b25186f4b8f40dd34740b44e1db62afbec2d61b298d763e893d736\" returns successfully" Oct 29 00:33:58.260125 containerd[1619]: time="2025-10-29T00:33:58.260027498Z" level=info msg="StartContainer for \"f0bdb361c279362b9fe66cb790502eb572de356217e354c2d2a8da29cf2dd363\" returns successfully" Oct 29 00:33:58.444022 kubelet[2433]: E1029 00:33:58.443965 2433 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:33:58.445627 kubelet[2433]: E1029 00:33:58.445601 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:33:58.446177 kubelet[2433]: E1029 00:33:58.446147 2433 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:33:58.446346 kubelet[2433]: E1029 00:33:58.446324 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:33:58.452110 kubelet[2433]: E1029 00:33:58.452085 2433 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:33:58.452298 kubelet[2433]: E1029 00:33:58.452275 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:33:58.990011 kubelet[2433]: I1029 00:33:58.989953 2433 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:33:59.456676 kubelet[2433]: E1029 00:33:59.456042 2433 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:33:59.456676 kubelet[2433]: E1029 00:33:59.456219 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:33:59.460259 kubelet[2433]: E1029 00:33:59.460219 2433 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:33:59.460423 kubelet[2433]: E1029 00:33:59.460396 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:33:59.462187 kubelet[2433]: E1029 00:33:59.462155 2433 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info 
from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:33:59.462581 kubelet[2433]: E1029 00:33:59.462530 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:33:59.988922 kubelet[2433]: E1029 00:33:59.988863 2433 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 29 00:34:00.143750 kubelet[2433]: I1029 00:34:00.143686 2433 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 00:34:00.143750 kubelet[2433]: E1029 00:34:00.143731 2433 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 29 00:34:00.208459 kubelet[2433]: I1029 00:34:00.208410 2433 apiserver.go:52] "Watching apiserver" Oct 29 00:34:00.228036 kubelet[2433]: I1029 00:34:00.227984 2433 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 00:34:00.229113 kubelet[2433]: I1029 00:34:00.229068 2433 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:34:00.312154 kubelet[2433]: E1029 00:34:00.312011 2433 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 29 00:34:00.312154 kubelet[2433]: I1029 00:34:00.312049 2433 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:34:00.313965 kubelet[2433]: E1029 00:34:00.313933 2433 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:34:00.313965 kubelet[2433]: I1029 00:34:00.313963 2433 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:34:00.315108 kubelet[2433]: E1029 00:34:00.315086 2433 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 29 00:34:01.883032 update_engine[1589]: I20251029 00:34:01.882906 1589 update_attempter.cc:509] Updating boot flags... Oct 29 00:34:02.519272 systemd[1]: Reload requested from client PID 2739 ('systemctl') (unit session-9.scope)... Oct 29 00:34:02.519303 systemd[1]: Reloading... Oct 29 00:34:02.665688 zram_generator::config[2783]: No configuration found. Oct 29 00:34:03.017827 systemd[1]: Reloading finished in 497 ms. Oct 29 00:34:03.050222 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:34:03.071299 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 00:34:03.071726 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:34:03.071798 systemd[1]: kubelet.service: Consumed 2.013s CPU time, 131.6M memory peak. Oct 29 00:34:03.074946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:34:03.343850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 29 00:34:03.354099 (kubelet)[2828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 00:34:03.409195 kubelet[2828]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 00:34:03.409195 kubelet[2828]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 00:34:03.409195 kubelet[2828]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 00:34:03.409700 kubelet[2828]: I1029 00:34:03.409253 2828 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 00:34:03.419180 kubelet[2828]: I1029 00:34:03.419105 2828 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 29 00:34:03.419180 kubelet[2828]: I1029 00:34:03.419139 2828 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 00:34:03.419457 kubelet[2828]: I1029 00:34:03.419437 2828 server.go:956] "Client rotation is on, will bootstrap in background" Oct 29 00:34:03.420917 kubelet[2828]: I1029 00:34:03.420886 2828 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 29 00:34:03.423418 kubelet[2828]: I1029 00:34:03.423373 2828 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 00:34:03.427762 kubelet[2828]: I1029 00:34:03.427733 2828 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 00:34:03.433173 kubelet[2828]: I1029 00:34:03.433136 2828 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 00:34:03.433381 kubelet[2828]: I1029 00:34:03.433349 2828 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 00:34:03.433540 kubelet[2828]: I1029 00:34:03.433371 2828 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 00:34:03.433677 kubelet[2828]: I1029 00:34:03.433546 2828 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 00:34:03.433677 kubelet[2828]: I1029 00:34:03.433554 2828 container_manager_linux.go:303] "Creating device plugin manager" Oct 29 00:34:03.433677 kubelet[2828]: I1029 00:34:03.433622 2828 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:34:03.433845 kubelet[2828]: I1029 00:34:03.433818 2828 kubelet.go:480] "Attempting to sync node with API server" Oct 29 00:34:03.433845 kubelet[2828]: I1029 00:34:03.433833 2828 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 00:34:03.433924 kubelet[2828]: I1029 00:34:03.433875 2828 kubelet.go:386] "Adding apiserver pod source" Oct 29 00:34:03.433924 kubelet[2828]: I1029 00:34:03.433890 2828 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 00:34:03.437660 kubelet[2828]: I1029 00:34:03.435917 2828 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 00:34:03.437660 kubelet[2828]: I1029 00:34:03.436604 2828 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 29 00:34:03.601286 kubelet[2828]: I1029 00:34:03.599758 2828 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 00:34:03.601286 kubelet[2828]: I1029 00:34:03.599844 2828 server.go:1289] "Started kubelet" Oct 29 00:34:03.601459 kubelet[2828]: I1029 00:34:03.601301 2828 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 00:34:03.601579 
kubelet[2828]: I1029 00:34:03.601521 2828 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 00:34:03.601982 kubelet[2828]: I1029 00:34:03.601665 2828 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 00:34:03.601982 kubelet[2828]: I1029 00:34:03.601716 2828 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 00:34:03.603400 kubelet[2828]: I1029 00:34:03.603191 2828 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 00:34:03.604250 kubelet[2828]: I1029 00:34:03.604229 2828 server.go:317] "Adding debug handlers to kubelet server" Oct 29 00:34:03.605208 kubelet[2828]: I1029 00:34:03.605029 2828 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 00:34:03.605254 kubelet[2828]: I1029 00:34:03.605216 2828 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 00:34:03.605453 kubelet[2828]: I1029 00:34:03.605415 2828 reconciler.go:26] "Reconciler: start to sync state" Oct 29 00:34:03.606523 kubelet[2828]: I1029 00:34:03.606477 2828 factory.go:223] Registration of the systemd container factory successfully Oct 29 00:34:03.606691 kubelet[2828]: I1029 00:34:03.606596 2828 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 00:34:03.609062 kubelet[2828]: E1029 00:34:03.609038 2828 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 00:34:03.610113 kubelet[2828]: I1029 00:34:03.610056 2828 factory.go:223] Registration of the containerd container factory successfully Oct 29 00:34:03.619158 kubelet[2828]: I1029 00:34:03.619087 2828 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 29 00:34:03.628066 kubelet[2828]: I1029 00:34:03.628033 2828 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 29 00:34:03.628144 kubelet[2828]: I1029 00:34:03.628076 2828 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 29 00:34:03.628144 kubelet[2828]: I1029 00:34:03.628098 2828 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 29 00:34:03.628144 kubelet[2828]: I1029 00:34:03.628110 2828 kubelet.go:2436] "Starting kubelet main sync loop" Oct 29 00:34:03.628239 kubelet[2828]: E1029 00:34:03.628170 2828 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 00:34:03.660691 kubelet[2828]: I1029 00:34:03.660625 2828 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 00:34:03.660691 kubelet[2828]: I1029 00:34:03.660676 2828 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 00:34:03.660850 kubelet[2828]: I1029 00:34:03.660711 2828 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:34:03.660932 kubelet[2828]: I1029 00:34:03.660908 2828 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 29 00:34:03.660960 kubelet[2828]: I1029 00:34:03.660931 2828 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 29 00:34:03.660960 kubelet[2828]: I1029 00:34:03.660956 2828 policy_none.go:49] "None policy: Start" Oct 29 00:34:03.661028 kubelet[2828]: I1029 00:34:03.660968 2828 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 00:34:03.661028 kubelet[2828]: I1029 00:34:03.660983 2828 state_mem.go:35] "Initializing new in-memory state store" Oct 29 00:34:03.661168 kubelet[2828]: I1029 00:34:03.661136 2828 state_mem.go:75] "Updated machine memory state" Oct 29 00:34:03.667759 kubelet[2828]: E1029 00:34:03.667718 2828 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 00:34:03.668023 kubelet[2828]: I1029 00:34:03.667995 2828 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 00:34:03.668055 kubelet[2828]: I1029 00:34:03.668014 2828 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 00:34:03.668568 kubelet[2828]: I1029 00:34:03.668537 2828 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 00:34:03.670006 kubelet[2828]: E1029 00:34:03.669859 2828 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 29 00:34:03.730673 kubelet[2828]: I1029 00:34:03.729934 2828 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:34:03.731708 kubelet[2828]: I1029 00:34:03.730056 2828 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:34:03.731787 kubelet[2828]: I1029 00:34:03.730238 2828 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:34:03.777267 kubelet[2828]: I1029 00:34:03.776736 2828 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:34:03.786103 kubelet[2828]: I1029 00:34:03.786053 2828 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 29 00:34:03.786280 kubelet[2828]: I1029 00:34:03.786163 2828 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 00:34:03.806716 kubelet[2828]: I1029 00:34:03.806623 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf62dcbcc1549b97832d179b981cc7ea-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf62dcbcc1549b97832d179b981cc7ea\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:34:03.806716 kubelet[2828]: I1029 00:34:03.806711 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf62dcbcc1549b97832d179b981cc7ea-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cf62dcbcc1549b97832d179b981cc7ea\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:34:03.806716 kubelet[2828]: I1029 00:34:03.806737 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:34:03.806716 kubelet[2828]: I1029 00:34:03.806762 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:34:03.806995 kubelet[2828]: I1029 00:34:03.806782 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:34:03.806995 kubelet[2828]: I1029 00:34:03.806806 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:34:03.806995 kubelet[2828]: I1029 00:34:03.806833 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 29 00:34:03.806995 kubelet[2828]: I1029 00:34:03.806856 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf62dcbcc1549b97832d179b981cc7ea-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf62dcbcc1549b97832d179b981cc7ea\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:34:03.806995 kubelet[2828]: I1029 00:34:03.806879 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:34:04.037563 kubelet[2828]: E1029 00:34:04.037392 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:04.041587 kubelet[2828]: E1029 00:34:04.041535 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:04.043746 kubelet[2828]: E1029 00:34:04.043705 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:04.435598 kubelet[2828]: I1029 00:34:04.435518 2828 apiserver.go:52] "Watching apiserver" Oct 29 00:34:04.489267 kubelet[2828]: I1029 00:34:04.488990 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.488970825 podStartE2EDuration="1.488970825s" podCreationTimestamp="2025-10-29 00:34:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:34:04.487807202 +0000 UTC m=+1.126803797" watchObservedRunningTime="2025-10-29 00:34:04.488970825 +0000 UTC m=+1.127967420" Oct 29 00:34:04.506667 kubelet[2828]: I1029 00:34:04.505574 2828 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 00:34:04.514445 kubelet[2828]: I1029 00:34:04.514337 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.514219513 podStartE2EDuration="1.514219513s" podCreationTimestamp="2025-10-29 00:34:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:34:04.514266682 +0000 UTC m=+1.153263268" watchObservedRunningTime="2025-10-29 00:34:04.514219513 +0000 UTC m=+1.153216108" Oct 29 00:34:04.514801 kubelet[2828]: I1029 00:34:04.514749 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5147431839999999 podStartE2EDuration="1.514743184s" podCreationTimestamp="2025-10-29 00:34:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 
00:34:04.501573397 +0000 UTC m=+1.140569983" watchObservedRunningTime="2025-10-29 00:34:04.514743184 +0000 UTC m=+1.153739779" Oct 29 00:34:04.646841 kubelet[2828]: E1029 00:34:04.646656 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:04.647854 kubelet[2828]: E1029 00:34:04.647832 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:04.648181 kubelet[2828]: E1029 00:34:04.648125 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:05.647680 kubelet[2828]: E1029 00:34:05.647571 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:05.648467 kubelet[2828]: E1029 00:34:05.647814 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:06.198246 kubelet[2828]: E1029 00:34:06.198199 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:07.391772 kubelet[2828]: I1029 00:34:07.391727 2828 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 29 00:34:07.392322 kubelet[2828]: I1029 00:34:07.392216 2828 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 29 00:34:07.392391 containerd[1619]: time="2025-10-29T00:34:07.392024041Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 29 00:34:08.354339 systemd[1]: Created slice kubepods-besteffort-pod4a1be4e0_ef4f_41d9_bfdc_9b281fccb143.slice - libcontainer container kubepods-besteffort-pod4a1be4e0_ef4f_41d9_bfdc_9b281fccb143.slice. 
Oct 29 00:34:08.433769 kubelet[2828]: I1029 00:34:08.433693 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a1be4e0-ef4f-41d9-bfdc-9b281fccb143-kube-proxy\") pod \"kube-proxy-2tvnv\" (UID: \"4a1be4e0-ef4f-41d9-bfdc-9b281fccb143\") " pod="kube-system/kube-proxy-2tvnv" Oct 29 00:34:08.434436 kubelet[2828]: I1029 00:34:08.434390 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a1be4e0-ef4f-41d9-bfdc-9b281fccb143-xtables-lock\") pod \"kube-proxy-2tvnv\" (UID: \"4a1be4e0-ef4f-41d9-bfdc-9b281fccb143\") " pod="kube-system/kube-proxy-2tvnv" Oct 29 00:34:08.434670 kubelet[2828]: I1029 00:34:08.434482 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a1be4e0-ef4f-41d9-bfdc-9b281fccb143-lib-modules\") pod \"kube-proxy-2tvnv\" (UID: \"4a1be4e0-ef4f-41d9-bfdc-9b281fccb143\") " pod="kube-system/kube-proxy-2tvnv" Oct 29 00:34:08.434670 kubelet[2828]: I1029 00:34:08.434505 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q46kl\" (UniqueName: \"kubernetes.io/projected/4a1be4e0-ef4f-41d9-bfdc-9b281fccb143-kube-api-access-q46kl\") pod \"kube-proxy-2tvnv\" (UID: \"4a1be4e0-ef4f-41d9-bfdc-9b281fccb143\") " pod="kube-system/kube-proxy-2tvnv" Oct 29 00:34:08.608412 systemd[1]: Created slice kubepods-besteffort-pod456ef65a_f4db_4b6c_b03f_9eeb815612b0.slice - libcontainer container kubepods-besteffort-pod456ef65a_f4db_4b6c_b03f_9eeb815612b0.slice. Oct 29 00:34:08.636258 kubelet[2828]: I1029 00:34:08.636207 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhfl5\" (UniqueName: \"kubernetes.io/projected/456ef65a-f4db-4b6c-b03f-9eeb815612b0-kube-api-access-fhfl5\") pod \"tigera-operator-7dcd859c48-8xddh\" (UID: \"456ef65a-f4db-4b6c-b03f-9eeb815612b0\") " pod="tigera-operator/tigera-operator-7dcd859c48-8xddh" Oct 29 00:34:08.636258 kubelet[2828]: I1029 00:34:08.636250 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/456ef65a-f4db-4b6c-b03f-9eeb815612b0-var-lib-calico\") pod \"tigera-operator-7dcd859c48-8xddh\" (UID: \"456ef65a-f4db-4b6c-b03f-9eeb815612b0\") " pod="tigera-operator/tigera-operator-7dcd859c48-8xddh" Oct 29 00:34:08.674801 kubelet[2828]: E1029 00:34:08.674749 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:08.675478 containerd[1619]: time="2025-10-29T00:34:08.675438552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2tvnv,Uid:4a1be4e0-ef4f-41d9-bfdc-9b281fccb143,Namespace:kube-system,Attempt:0,}" Oct 29 00:34:08.714249 containerd[1619]: time="2025-10-29T00:34:08.714167612Z" level=info msg="connecting to shim 13a379028f10aa1ce298292c72b7782a784dbce74b4e841dee7a5d2857d5de94" address="unix:///run/containerd/s/9953df2659ecaadcfeeb8824a7e139e12b0e1e59fdfd6162ea88a04a20fa9ba4" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:34:08.793019 systemd[1]: Started cri-containerd-13a379028f10aa1ce298292c72b7782a784dbce74b4e841dee7a5d2857d5de94.scope - libcontainer container 
13a379028f10aa1ce298292c72b7782a784dbce74b4e841dee7a5d2857d5de94. Oct 29 00:34:08.825609 containerd[1619]: time="2025-10-29T00:34:08.825536711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2tvnv,Uid:4a1be4e0-ef4f-41d9-bfdc-9b281fccb143,Namespace:kube-system,Attempt:0,} returns sandbox id \"13a379028f10aa1ce298292c72b7782a784dbce74b4e841dee7a5d2857d5de94\"" Oct 29 00:34:08.826433 kubelet[2828]: E1029 00:34:08.826394 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:08.832846 containerd[1619]: time="2025-10-29T00:34:08.832777407Z" level=info msg="CreateContainer within sandbox \"13a379028f10aa1ce298292c72b7782a784dbce74b4e841dee7a5d2857d5de94\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 29 00:34:08.847581 containerd[1619]: time="2025-10-29T00:34:08.847496718Z" level=info msg="Container 97751157543596f7429b02e05fba8284909c2c7410013a97aca84fef2c1acc43: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:34:08.858932 containerd[1619]: time="2025-10-29T00:34:08.858746827Z" level=info msg="CreateContainer within sandbox \"13a379028f10aa1ce298292c72b7782a784dbce74b4e841dee7a5d2857d5de94\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"97751157543596f7429b02e05fba8284909c2c7410013a97aca84fef2c1acc43\"" Oct 29 00:34:08.861133 containerd[1619]: time="2025-10-29T00:34:08.859624575Z" level=info msg="StartContainer for \"97751157543596f7429b02e05fba8284909c2c7410013a97aca84fef2c1acc43\"" Oct 29 00:34:08.861133 containerd[1619]: time="2025-10-29T00:34:08.861065807Z" level=info msg="connecting to shim 97751157543596f7429b02e05fba8284909c2c7410013a97aca84fef2c1acc43" address="unix:///run/containerd/s/9953df2659ecaadcfeeb8824a7e139e12b0e1e59fdfd6162ea88a04a20fa9ba4" protocol=ttrpc version=3 Oct 29 00:34:08.895030 systemd[1]: Started cri-containerd-97751157543596f7429b02e05fba8284909c2c7410013a97aca84fef2c1acc43.scope - libcontainer container 97751157543596f7429b02e05fba8284909c2c7410013a97aca84fef2c1acc43. Oct 29 00:34:08.912212 containerd[1619]: time="2025-10-29T00:34:08.912136517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8xddh,Uid:456ef65a-f4db-4b6c-b03f-9eeb815612b0,Namespace:tigera-operator,Attempt:0,}" Oct 29 00:34:08.936905 containerd[1619]: time="2025-10-29T00:34:08.936785463Z" level=info msg="connecting to shim aa0bf2075484210854314065c0540ba5605f5bac74633f399a5ef21f4742a85d" address="unix:///run/containerd/s/a8571dfc223f5edfaff75effc1667bc85ad2c5ba768697c229fff60cce53023c" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:34:08.957240 containerd[1619]: time="2025-10-29T00:34:08.957153424Z" level=info msg="StartContainer for \"97751157543596f7429b02e05fba8284909c2c7410013a97aca84fef2c1acc43\" returns successfully" Oct 29 00:34:08.979897 systemd[1]: Started cri-containerd-aa0bf2075484210854314065c0540ba5605f5bac74633f399a5ef21f4742a85d.scope - libcontainer container aa0bf2075484210854314065c0540ba5605f5bac74633f399a5ef21f4742a85d. 
Oct 29 00:34:09.063864 containerd[1619]: time="2025-10-29T00:34:09.063790914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8xddh,Uid:456ef65a-f4db-4b6c-b03f-9eeb815612b0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"aa0bf2075484210854314065c0540ba5605f5bac74633f399a5ef21f4742a85d\"" Oct 29 00:34:09.065991 containerd[1619]: time="2025-10-29T00:34:09.065731287Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 29 00:34:09.094388 kubelet[2828]: E1029 00:34:09.094321 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:09.553853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2924057833.mount: Deactivated successfully. Oct 29 00:34:09.656440 kubelet[2828]: E1029 00:34:09.656357 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:09.656994 kubelet[2828]: E1029 00:34:09.656742 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:09.675750 kubelet[2828]: I1029 00:34:09.675664 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2tvnv" podStartSLOduration=1.6756276730000002 podStartE2EDuration="1.675627673s" podCreationTimestamp="2025-10-29 00:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:34:09.675523386 +0000 UTC m=+6.314519981" watchObservedRunningTime="2025-10-29 00:34:09.675627673 +0000 UTC m=+6.314624268" Oct 29 00:34:10.586617 kubelet[2828]: E1029 00:34:10.586488 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:10.657747 kubelet[2828]: E1029 00:34:10.657704 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:11.659644 kubelet[2828]: E1029 00:34:11.659572 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:11.992410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2422448942.mount: Deactivated successfully. 
Oct 29 00:34:13.083522 containerd[1619]: time="2025-10-29T00:34:13.083447255Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:34:13.084305 containerd[1619]: time="2025-10-29T00:34:13.084266890Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 29 00:34:13.085682 containerd[1619]: time="2025-10-29T00:34:13.085607508Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:34:13.088041 containerd[1619]: time="2025-10-29T00:34:13.088000618Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:34:13.088578 containerd[1619]: time="2025-10-29T00:34:13.088544765Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.022778061s" Oct 29 00:34:13.088578 containerd[1619]: time="2025-10-29T00:34:13.088584990Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 29 00:34:13.093966 containerd[1619]: time="2025-10-29T00:34:13.093929876Z" level=info msg="CreateContainer within sandbox \"aa0bf2075484210854314065c0540ba5605f5bac74633f399a5ef21f4742a85d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 29 00:34:13.103537 containerd[1619]: time="2025-10-29T00:34:13.103475420Z" level=info msg="Container 92b8b23b0228f54027566853d760be94a0efbc8170b13300eb370dc91e5e7843: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:34:13.112103 containerd[1619]: time="2025-10-29T00:34:13.112023544Z" level=info msg="CreateContainer within sandbox \"aa0bf2075484210854314065c0540ba5605f5bac74633f399a5ef21f4742a85d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"92b8b23b0228f54027566853d760be94a0efbc8170b13300eb370dc91e5e7843\"" Oct 29 00:34:13.112788 containerd[1619]: time="2025-10-29T00:34:13.112731648Z" level=info msg="StartContainer for \"92b8b23b0228f54027566853d760be94a0efbc8170b13300eb370dc91e5e7843\"" Oct 29 00:34:13.113945 containerd[1619]: time="2025-10-29T00:34:13.113913145Z" level=info msg="connecting to shim 92b8b23b0228f54027566853d760be94a0efbc8170b13300eb370dc91e5e7843" address="unix:///run/containerd/s/a8571dfc223f5edfaff75effc1667bc85ad2c5ba768697c229fff60cce53023c" protocol=ttrpc version=3 Oct 29 00:34:13.157903 systemd[1]: Started cri-containerd-92b8b23b0228f54027566853d760be94a0efbc8170b13300eb370dc91e5e7843.scope - libcontainer container 92b8b23b0228f54027566853d760be94a0efbc8170b13300eb370dc91e5e7843. 
Oct 29 00:34:13.194564 containerd[1619]: time="2025-10-29T00:34:13.194514394Z" level=info msg="StartContainer for \"92b8b23b0228f54027566853d760be94a0efbc8170b13300eb370dc91e5e7843\" returns successfully" Oct 29 00:34:16.205668 kubelet[2828]: E1029 00:34:16.204132 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:16.222483 kubelet[2828]: I1029 00:34:16.222398 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-8xddh" podStartSLOduration=4.198259258 podStartE2EDuration="8.222379719s" podCreationTimestamp="2025-10-29 00:34:08 +0000 UTC" firstStartedPulling="2025-10-29 00:34:09.065337755 +0000 UTC m=+5.704334350" lastFinishedPulling="2025-10-29 00:34:13.089458216 +0000 UTC m=+9.728454811" observedRunningTime="2025-10-29 00:34:13.673338126 +0000 UTC m=+10.312334722" watchObservedRunningTime="2025-10-29 00:34:16.222379719 +0000 UTC m=+12.861376314" Oct 29 00:34:18.573260 sudo[1837]: pam_unix(sudo:session): session closed for user root Oct 29 00:34:18.575588 sshd[1836]: Connection closed by 10.0.0.1 port 42706 Oct 29 00:34:18.576786 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Oct 29 00:34:18.582542 systemd-logind[1587]: Session 9 logged out. Waiting for processes to exit. Oct 29 00:34:18.583570 systemd[1]: sshd@8-10.0.0.10:22-10.0.0.1:42706.service: Deactivated successfully. Oct 29 00:34:18.588523 systemd[1]: session-9.scope: Deactivated successfully. Oct 29 00:34:18.588870 systemd[1]: session-9.scope: Consumed 5.794s CPU time, 213.5M memory peak. Oct 29 00:34:18.593079 systemd-logind[1587]: Removed session 9. Oct 29 00:34:22.983473 systemd[1]: Created slice kubepods-besteffort-pod1ba2fa6d_4710_44df_8724_3677d7f06fe0.slice - libcontainer container kubepods-besteffort-pod1ba2fa6d_4710_44df_8724_3677d7f06fe0.slice. Oct 29 00:34:23.033204 kubelet[2828]: I1029 00:34:23.033085 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1ba2fa6d-4710-44df-8724-3677d7f06fe0-typha-certs\") pod \"calico-typha-77b7b85c64-5ls94\" (UID: \"1ba2fa6d-4710-44df-8724-3677d7f06fe0\") " pod="calico-system/calico-typha-77b7b85c64-5ls94" Oct 29 00:34:23.033204 kubelet[2828]: I1029 00:34:23.033181 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ba2fa6d-4710-44df-8724-3677d7f06fe0-tigera-ca-bundle\") pod \"calico-typha-77b7b85c64-5ls94\" (UID: \"1ba2fa6d-4710-44df-8724-3677d7f06fe0\") " pod="calico-system/calico-typha-77b7b85c64-5ls94" Oct 29 00:34:23.033765 kubelet[2828]: I1029 00:34:23.033368 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7ghv\" (UniqueName: \"kubernetes.io/projected/1ba2fa6d-4710-44df-8724-3677d7f06fe0-kube-api-access-s7ghv\") pod \"calico-typha-77b7b85c64-5ls94\" (UID: \"1ba2fa6d-4710-44df-8724-3677d7f06fe0\") " pod="calico-system/calico-typha-77b7b85c64-5ls94" Oct 29 00:34:23.102088 systemd[1]: Created slice kubepods-besteffort-pod6e37dff8_e7a5_412b_b826_9db4c38362ef.slice - libcontainer container kubepods-besteffort-pod6e37dff8_e7a5_412b_b826_9db4c38362ef.slice. 
Oct 29 00:34:23.134039 kubelet[2828]: I1029 00:34:23.133973 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e37dff8-e7a5-412b-b826-9db4c38362ef-xtables-lock\") pod \"calico-node-gt4r5\" (UID: \"6e37dff8-e7a5-412b-b826-9db4c38362ef\") " pod="calico-system/calico-node-gt4r5" Oct 29 00:34:23.134039 kubelet[2828]: I1029 00:34:23.134033 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6e37dff8-e7a5-412b-b826-9db4c38362ef-cni-bin-dir\") pod \"calico-node-gt4r5\" (UID: \"6e37dff8-e7a5-412b-b826-9db4c38362ef\") " pod="calico-system/calico-node-gt4r5" Oct 29 00:34:23.134292 kubelet[2828]: I1029 00:34:23.134059 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6e37dff8-e7a5-412b-b826-9db4c38362ef-var-lib-calico\") pod \"calico-node-gt4r5\" (UID: \"6e37dff8-e7a5-412b-b826-9db4c38362ef\") " pod="calico-system/calico-node-gt4r5" Oct 29 00:34:23.134292 kubelet[2828]: I1029 00:34:23.134096 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6e37dff8-e7a5-412b-b826-9db4c38362ef-cni-log-dir\") pod \"calico-node-gt4r5\" (UID: \"6e37dff8-e7a5-412b-b826-9db4c38362ef\") " pod="calico-system/calico-node-gt4r5" Oct 29 00:34:23.134292 kubelet[2828]: I1029 00:34:23.134128 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e37dff8-e7a5-412b-b826-9db4c38362ef-lib-modules\") pod \"calico-node-gt4r5\" (UID: \"6e37dff8-e7a5-412b-b826-9db4c38362ef\") " pod="calico-system/calico-node-gt4r5" Oct 29 00:34:23.134292 kubelet[2828]: I1029 00:34:23.134146 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6e37dff8-e7a5-412b-b826-9db4c38362ef-policysync\") pod \"calico-node-gt4r5\" (UID: \"6e37dff8-e7a5-412b-b826-9db4c38362ef\") " pod="calico-system/calico-node-gt4r5" Oct 29 00:34:23.134292 kubelet[2828]: I1029 00:34:23.134164 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7svwg\" (UniqueName: \"kubernetes.io/projected/6e37dff8-e7a5-412b-b826-9db4c38362ef-kube-api-access-7svwg\") pod \"calico-node-gt4r5\" (UID: \"6e37dff8-e7a5-412b-b826-9db4c38362ef\") " pod="calico-system/calico-node-gt4r5" Oct 29 00:34:23.134474 kubelet[2828]: I1029 00:34:23.134186 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6e37dff8-e7a5-412b-b826-9db4c38362ef-var-run-calico\") pod \"calico-node-gt4r5\" (UID: \"6e37dff8-e7a5-412b-b826-9db4c38362ef\") " pod="calico-system/calico-node-gt4r5" Oct 29 00:34:23.134474 kubelet[2828]: I1029 00:34:23.134209 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6e37dff8-e7a5-412b-b826-9db4c38362ef-cni-net-dir\") pod \"calico-node-gt4r5\" (UID: \"6e37dff8-e7a5-412b-b826-9db4c38362ef\") " pod="calico-system/calico-node-gt4r5" Oct 29 00:34:23.134474 kubelet[2828]: I1029 00:34:23.134225 2828 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e37dff8-e7a5-412b-b826-9db4c38362ef-tigera-ca-bundle\") pod \"calico-node-gt4r5\" (UID: \"6e37dff8-e7a5-412b-b826-9db4c38362ef\") " pod="calico-system/calico-node-gt4r5" Oct 29 00:34:23.134474 kubelet[2828]: I1029 00:34:23.134260 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6e37dff8-e7a5-412b-b826-9db4c38362ef-flexvol-driver-host\") pod \"calico-node-gt4r5\" (UID: \"6e37dff8-e7a5-412b-b826-9db4c38362ef\") " pod="calico-system/calico-node-gt4r5" Oct 29 00:34:23.134474 kubelet[2828]: I1029 00:34:23.134283 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6e37dff8-e7a5-412b-b826-9db4c38362ef-node-certs\") pod \"calico-node-gt4r5\" (UID: \"6e37dff8-e7a5-412b-b826-9db4c38362ef\") " pod="calico-system/calico-node-gt4r5" Oct 29 00:34:23.235708 kubelet[2828]: E1029 00:34:23.235491 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.235708 kubelet[2828]: W1029 00:34:23.235518 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.236764 kubelet[2828]: E1029 00:34:23.236740 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.237260 kubelet[2828]: E1029 00:34:23.237233 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.237260 kubelet[2828]: W1029 00:34:23.237254 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.237345 kubelet[2828]: E1029 00:34:23.237270 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.238999 kubelet[2828]: E1029 00:34:23.238851 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.238999 kubelet[2828]: W1029 00:34:23.238870 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.238999 kubelet[2828]: E1029 00:34:23.238895 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.239269 kubelet[2828]: E1029 00:34:23.239254 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.239346 kubelet[2828]: W1029 00:34:23.239328 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.239422 kubelet[2828]: E1029 00:34:23.239406 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.241818 kubelet[2828]: E1029 00:34:23.241790 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.241818 kubelet[2828]: W1029 00:34:23.241812 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.241910 kubelet[2828]: E1029 00:34:23.241851 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.242724 kubelet[2828]: E1029 00:34:23.242445 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.242724 kubelet[2828]: W1029 00:34:23.242462 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.242724 kubelet[2828]: E1029 00:34:23.242474 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.242830 kubelet[2828]: E1029 00:34:23.242812 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.242857 kubelet[2828]: W1029 00:34:23.242829 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.242857 kubelet[2828]: E1029 00:34:23.242840 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.243118 kubelet[2828]: E1029 00:34:23.243101 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.243118 kubelet[2828]: W1029 00:34:23.243114 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.243188 kubelet[2828]: E1029 00:34:23.243125 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.243363 kubelet[2828]: E1029 00:34:23.243348 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.243363 kubelet[2828]: W1029 00:34:23.243361 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.243417 kubelet[2828]: E1029 00:34:23.243373 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.243646 kubelet[2828]: E1029 00:34:23.243618 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.243688 kubelet[2828]: W1029 00:34:23.243631 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.243688 kubelet[2828]: E1029 00:34:23.243659 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.243884 kubelet[2828]: E1029 00:34:23.243861 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.243920 kubelet[2828]: W1029 00:34:23.243884 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.243920 kubelet[2828]: E1029 00:34:23.243897 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.244163 kubelet[2828]: E1029 00:34:23.244148 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.244163 kubelet[2828]: W1029 00:34:23.244161 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.244224 kubelet[2828]: E1029 00:34:23.244172 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.244382 kubelet[2828]: E1029 00:34:23.244367 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.244382 kubelet[2828]: W1029 00:34:23.244380 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.244435 kubelet[2828]: E1029 00:34:23.244391 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.244698 kubelet[2828]: E1029 00:34:23.244682 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.244698 kubelet[2828]: W1029 00:34:23.244697 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.244772 kubelet[2828]: E1029 00:34:23.244708 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.244962 kubelet[2828]: E1029 00:34:23.244947 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.244962 kubelet[2828]: W1029 00:34:23.244960 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.245036 kubelet[2828]: E1029 00:34:23.244971 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.245209 kubelet[2828]: E1029 00:34:23.245194 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.245209 kubelet[2828]: W1029 00:34:23.245206 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.245274 kubelet[2828]: E1029 00:34:23.245216 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.245746 kubelet[2828]: E1029 00:34:23.245481 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.245746 kubelet[2828]: W1029 00:34:23.245495 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.245746 kubelet[2828]: E1029 00:34:23.245506 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.245846 kubelet[2828]: E1029 00:34:23.245792 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.245846 kubelet[2828]: W1029 00:34:23.245803 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.245846 kubelet[2828]: E1029 00:34:23.245814 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.246062 kubelet[2828]: E1029 00:34:23.246047 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.246098 kubelet[2828]: W1029 00:34:23.246063 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.246098 kubelet[2828]: E1029 00:34:23.246074 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.246301 kubelet[2828]: E1029 00:34:23.246284 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.246301 kubelet[2828]: W1029 00:34:23.246297 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.246375 kubelet[2828]: E1029 00:34:23.246307 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.246566 kubelet[2828]: E1029 00:34:23.246551 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.246566 kubelet[2828]: W1029 00:34:23.246563 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.246623 kubelet[2828]: E1029 00:34:23.246574 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.246820 kubelet[2828]: E1029 00:34:23.246807 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.246820 kubelet[2828]: W1029 00:34:23.246819 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.246888 kubelet[2828]: E1029 00:34:23.246829 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.247134 kubelet[2828]: E1029 00:34:23.247119 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.247134 kubelet[2828]: W1029 00:34:23.247132 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.247205 kubelet[2828]: E1029 00:34:23.247144 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.247379 kubelet[2828]: E1029 00:34:23.247366 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.247379 kubelet[2828]: W1029 00:34:23.247377 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.247465 kubelet[2828]: E1029 00:34:23.247397 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.247720 kubelet[2828]: E1029 00:34:23.247705 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.247720 kubelet[2828]: W1029 00:34:23.247719 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.247774 kubelet[2828]: E1029 00:34:23.247730 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.248058 kubelet[2828]: E1029 00:34:23.248044 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.248058 kubelet[2828]: W1029 00:34:23.248056 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.248120 kubelet[2828]: E1029 00:34:23.248066 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.248288 kubelet[2828]: E1029 00:34:23.248272 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.248288 kubelet[2828]: W1029 00:34:23.248284 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.248343 kubelet[2828]: E1029 00:34:23.248294 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.248562 kubelet[2828]: E1029 00:34:23.248547 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.248562 kubelet[2828]: W1029 00:34:23.248560 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.248624 kubelet[2828]: E1029 00:34:23.248571 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.250038 kubelet[2828]: E1029 00:34:23.250005 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.250038 kubelet[2828]: W1029 00:34:23.250020 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.250038 kubelet[2828]: E1029 00:34:23.250032 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.250307 kubelet[2828]: E1029 00:34:23.250287 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.250348 kubelet[2828]: W1029 00:34:23.250307 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.250348 kubelet[2828]: E1029 00:34:23.250321 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.250557 kubelet[2828]: E1029 00:34:23.250534 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.250557 kubelet[2828]: W1029 00:34:23.250549 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.250557 kubelet[2828]: E1029 00:34:23.250560 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.250891 kubelet[2828]: E1029 00:34:23.250859 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.250891 kubelet[2828]: W1029 00:34:23.250890 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.250973 kubelet[2828]: E1029 00:34:23.250904 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.287715 kubelet[2828]: E1029 00:34:23.287548 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:23.289577 containerd[1619]: time="2025-10-29T00:34:23.288615476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77b7b85c64-5ls94,Uid:1ba2fa6d-4710-44df-8724-3677d7f06fe0,Namespace:calico-system,Attempt:0,}" Oct 29 00:34:23.296071 kubelet[2828]: E1029 00:34:23.296020 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqtll" podUID="31820ff6-c2b5-4f1e-b097-0b66b5dd1baa" Oct 29 00:34:23.316859 kubelet[2828]: E1029 00:34:23.316775 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.316859 kubelet[2828]: W1029 00:34:23.316802 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.316859 kubelet[2828]: E1029 00:34:23.316822 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.317533 kubelet[2828]: E1029 00:34:23.317160 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.317533 kubelet[2828]: W1029 00:34:23.317200 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.317533 kubelet[2828]: E1029 00:34:23.317210 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.317533 kubelet[2828]: E1029 00:34:23.317534 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.317761 kubelet[2828]: W1029 00:34:23.317545 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.317761 kubelet[2828]: E1029 00:34:23.317555 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.317974 kubelet[2828]: E1029 00:34:23.317951 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.317974 kubelet[2828]: W1029 00:34:23.317967 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.318032 kubelet[2828]: E1029 00:34:23.317997 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.318260 kubelet[2828]: E1029 00:34:23.318238 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.318260 kubelet[2828]: W1029 00:34:23.318253 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.318260 kubelet[2828]: E1029 00:34:23.318263 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.318506 kubelet[2828]: E1029 00:34:23.318479 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.318506 kubelet[2828]: W1029 00:34:23.318494 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.318506 kubelet[2828]: E1029 00:34:23.318503 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.318975 kubelet[2828]: E1029 00:34:23.318949 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.318975 kubelet[2828]: W1029 00:34:23.318964 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.318975 kubelet[2828]: E1029 00:34:23.318974 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.319237 kubelet[2828]: E1029 00:34:23.319216 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.319237 kubelet[2828]: W1029 00:34:23.319229 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.319237 kubelet[2828]: E1029 00:34:23.319238 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.319528 kubelet[2828]: E1029 00:34:23.319500 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.319567 kubelet[2828]: W1029 00:34:23.319515 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.319567 kubelet[2828]: E1029 00:34:23.319548 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.319814 kubelet[2828]: E1029 00:34:23.319792 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.319865 kubelet[2828]: W1029 00:34:23.319826 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.319865 kubelet[2828]: E1029 00:34:23.319838 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.320285 kubelet[2828]: E1029 00:34:23.320099 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.320285 kubelet[2828]: W1029 00:34:23.320112 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.320285 kubelet[2828]: E1029 00:34:23.320132 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.320370 kubelet[2828]: E1029 00:34:23.320361 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.320403 kubelet[2828]: W1029 00:34:23.320370 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.320403 kubelet[2828]: E1029 00:34:23.320379 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.320606 kubelet[2828]: E1029 00:34:23.320584 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.320606 kubelet[2828]: W1029 00:34:23.320598 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.320679 kubelet[2828]: E1029 00:34:23.320607 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.320838 kubelet[2828]: E1029 00:34:23.320817 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.320838 kubelet[2828]: W1029 00:34:23.320834 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.320913 kubelet[2828]: E1029 00:34:23.320845 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.321127 kubelet[2828]: E1029 00:34:23.321100 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.321127 kubelet[2828]: W1029 00:34:23.321115 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.321127 kubelet[2828]: E1029 00:34:23.321123 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.321349 kubelet[2828]: E1029 00:34:23.321330 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.321349 kubelet[2828]: W1029 00:34:23.321343 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.321406 kubelet[2828]: E1029 00:34:23.321352 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.321560 kubelet[2828]: E1029 00:34:23.321542 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.321560 kubelet[2828]: W1029 00:34:23.321554 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.321560 kubelet[2828]: E1029 00:34:23.321562 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.321755 containerd[1619]: time="2025-10-29T00:34:23.321671837Z" level=info msg="connecting to shim 21ff93399ccd0968ccdcaaf3336ae04884852c69a406191c0d789a2c550e3556" address="unix:///run/containerd/s/49173533e8568510a11857ed1b08f1642a22116b38871675c64a390c236940fa" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:34:23.321810 kubelet[2828]: E1029 00:34:23.321788 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.321810 kubelet[2828]: W1029 00:34:23.321797 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.321810 kubelet[2828]: E1029 00:34:23.321806 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.322024 kubelet[2828]: E1029 00:34:23.322006 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.322024 kubelet[2828]: W1029 00:34:23.322018 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.322076 kubelet[2828]: E1029 00:34:23.322027 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.322231 kubelet[2828]: E1029 00:34:23.322216 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.322231 kubelet[2828]: W1029 00:34:23.322227 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.322279 kubelet[2828]: E1029 00:34:23.322235 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.335725 kubelet[2828]: E1029 00:34:23.335700 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.335725 kubelet[2828]: W1029 00:34:23.335719 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.335802 kubelet[2828]: E1029 00:34:23.335732 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.335802 kubelet[2828]: I1029 00:34:23.335769 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/31820ff6-c2b5-4f1e-b097-0b66b5dd1baa-registration-dir\") pod \"csi-node-driver-qqtll\" (UID: \"31820ff6-c2b5-4f1e-b097-0b66b5dd1baa\") " pod="calico-system/csi-node-driver-qqtll" Oct 29 00:34:23.336029 kubelet[2828]: E1029 00:34:23.336009 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.336029 kubelet[2828]: W1029 00:34:23.336023 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.336087 kubelet[2828]: E1029 00:34:23.336032 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.336087 kubelet[2828]: I1029 00:34:23.336053 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz22c\" (UniqueName: \"kubernetes.io/projected/31820ff6-c2b5-4f1e-b097-0b66b5dd1baa-kube-api-access-bz22c\") pod \"csi-node-driver-qqtll\" (UID: \"31820ff6-c2b5-4f1e-b097-0b66b5dd1baa\") " pod="calico-system/csi-node-driver-qqtll" Oct 29 00:34:23.336320 kubelet[2828]: E1029 00:34:23.336301 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.336320 kubelet[2828]: W1029 00:34:23.336315 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.336381 kubelet[2828]: E1029 00:34:23.336325 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.336381 kubelet[2828]: I1029 00:34:23.336353 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/31820ff6-c2b5-4f1e-b097-0b66b5dd1baa-socket-dir\") pod \"csi-node-driver-qqtll\" (UID: \"31820ff6-c2b5-4f1e-b097-0b66b5dd1baa\") " pod="calico-system/csi-node-driver-qqtll" Oct 29 00:34:23.336705 kubelet[2828]: E1029 00:34:23.336683 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.336705 kubelet[2828]: W1029 00:34:23.336699 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.336787 kubelet[2828]: E1029 00:34:23.336710 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.336787 kubelet[2828]: I1029 00:34:23.336727 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/31820ff6-c2b5-4f1e-b097-0b66b5dd1baa-varrun\") pod \"csi-node-driver-qqtll\" (UID: \"31820ff6-c2b5-4f1e-b097-0b66b5dd1baa\") " pod="calico-system/csi-node-driver-qqtll" Oct 29 00:34:23.337064 kubelet[2828]: E1029 00:34:23.337030 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.337104 kubelet[2828]: W1029 00:34:23.337061 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.337104 kubelet[2828]: E1029 00:34:23.337086 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.337299 kubelet[2828]: E1029 00:34:23.337284 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.337299 kubelet[2828]: W1029 00:34:23.337295 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.337349 kubelet[2828]: E1029 00:34:23.337309 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.337519 kubelet[2828]: E1029 00:34:23.337502 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.337519 kubelet[2828]: W1029 00:34:23.337513 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.337566 kubelet[2828]: E1029 00:34:23.337524 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.337732 kubelet[2828]: E1029 00:34:23.337716 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.337732 kubelet[2828]: W1029 00:34:23.337727 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.337791 kubelet[2828]: E1029 00:34:23.337740 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.337949 kubelet[2828]: E1029 00:34:23.337933 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.337949 kubelet[2828]: W1029 00:34:23.337943 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.337993 kubelet[2828]: E1029 00:34:23.337951 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.338127 kubelet[2828]: E1029 00:34:23.338112 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.338127 kubelet[2828]: W1029 00:34:23.338123 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.338186 kubelet[2828]: E1029 00:34:23.338130 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.338186 kubelet[2828]: I1029 00:34:23.338161 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31820ff6-c2b5-4f1e-b097-0b66b5dd1baa-kubelet-dir\") pod \"csi-node-driver-qqtll\" (UID: \"31820ff6-c2b5-4f1e-b097-0b66b5dd1baa\") " pod="calico-system/csi-node-driver-qqtll" Oct 29 00:34:23.338370 kubelet[2828]: E1029 00:34:23.338351 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.338370 kubelet[2828]: W1029 00:34:23.338363 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.338438 kubelet[2828]: E1029 00:34:23.338374 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.338606 kubelet[2828]: E1029 00:34:23.338586 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.338606 kubelet[2828]: W1029 00:34:23.338597 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.338606 kubelet[2828]: E1029 00:34:23.338605 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.338821 kubelet[2828]: E1029 00:34:23.338802 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.338821 kubelet[2828]: W1029 00:34:23.338813 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.338821 kubelet[2828]: E1029 00:34:23.338821 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.339034 kubelet[2828]: E1029 00:34:23.339013 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.339034 kubelet[2828]: W1029 00:34:23.339024 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.339034 kubelet[2828]: E1029 00:34:23.339034 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.339219 kubelet[2828]: E1029 00:34:23.339201 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.339219 kubelet[2828]: W1029 00:34:23.339212 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.339219 kubelet[2828]: E1029 00:34:23.339219 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.356906 systemd[1]: Started cri-containerd-21ff93399ccd0968ccdcaaf3336ae04884852c69a406191c0d789a2c550e3556.scope - libcontainer container 21ff93399ccd0968ccdcaaf3336ae04884852c69a406191c0d789a2c550e3556. 
Oct 29 00:34:23.407690 kubelet[2828]: E1029 00:34:23.406962 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:23.408454 containerd[1619]: time="2025-10-29T00:34:23.408405284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gt4r5,Uid:6e37dff8-e7a5-412b-b826-9db4c38362ef,Namespace:calico-system,Attempt:0,}" Oct 29 00:34:23.432327 containerd[1619]: time="2025-10-29T00:34:23.432262426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77b7b85c64-5ls94,Uid:1ba2fa6d-4710-44df-8724-3677d7f06fe0,Namespace:calico-system,Attempt:0,} returns sandbox id \"21ff93399ccd0968ccdcaaf3336ae04884852c69a406191c0d789a2c550e3556\"" Oct 29 00:34:23.433298 kubelet[2828]: E1029 00:34:23.433254 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:23.434255 containerd[1619]: time="2025-10-29T00:34:23.434223003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 29 00:34:23.439415 kubelet[2828]: E1029 00:34:23.439363 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.439415 kubelet[2828]: W1029 00:34:23.439388 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.439415 kubelet[2828]: E1029 00:34:23.439409 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.439702 kubelet[2828]: E1029 00:34:23.439680 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.439702 kubelet[2828]: W1029 00:34:23.439693 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.439702 kubelet[2828]: E1029 00:34:23.439703 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.439971 kubelet[2828]: E1029 00:34:23.439936 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.439971 kubelet[2828]: W1029 00:34:23.439954 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.439971 kubelet[2828]: E1029 00:34:23.439966 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.440264 kubelet[2828]: E1029 00:34:23.440230 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.440264 kubelet[2828]: W1029 00:34:23.440256 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.440369 kubelet[2828]: E1029 00:34:23.440281 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.440701 kubelet[2828]: E1029 00:34:23.440670 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.440701 kubelet[2828]: W1029 00:34:23.440684 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.440701 kubelet[2828]: E1029 00:34:23.440694 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.441088 kubelet[2828]: E1029 00:34:23.441069 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.441088 kubelet[2828]: W1029 00:34:23.441082 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.441088 kubelet[2828]: E1029 00:34:23.441093 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.441689 kubelet[2828]: E1029 00:34:23.441666 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.441689 kubelet[2828]: W1029 00:34:23.441679 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.441689 kubelet[2828]: E1029 00:34:23.441688 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.442179 kubelet[2828]: E1029 00:34:23.442157 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.442179 kubelet[2828]: W1029 00:34:23.442169 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.442179 kubelet[2828]: E1029 00:34:23.442179 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.443251 kubelet[2828]: E1029 00:34:23.443229 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.443251 kubelet[2828]: W1029 00:34:23.443243 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.443251 kubelet[2828]: E1029 00:34:23.443254 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.443494 kubelet[2828]: E1029 00:34:23.443478 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.443494 kubelet[2828]: W1029 00:34:23.443490 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.443575 kubelet[2828]: E1029 00:34:23.443499 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.443815 kubelet[2828]: E1029 00:34:23.443790 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.443815 kubelet[2828]: W1029 00:34:23.443803 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.443815 kubelet[2828]: E1029 00:34:23.443811 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.444113 kubelet[2828]: E1029 00:34:23.444065 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.444113 kubelet[2828]: W1029 00:34:23.444080 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.444113 kubelet[2828]: E1029 00:34:23.444090 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.444372 kubelet[2828]: E1029 00:34:23.444354 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.444372 kubelet[2828]: W1029 00:34:23.444365 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.444418 kubelet[2828]: E1029 00:34:23.444375 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.444887 kubelet[2828]: E1029 00:34:23.444860 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.444887 kubelet[2828]: W1029 00:34:23.444884 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.444953 kubelet[2828]: E1029 00:34:23.444897 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.445729 kubelet[2828]: E1029 00:34:23.445709 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.445729 kubelet[2828]: W1029 00:34:23.445723 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.445804 kubelet[2828]: E1029 00:34:23.445734 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.446007 kubelet[2828]: E1029 00:34:23.445989 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.446042 kubelet[2828]: W1029 00:34:23.446003 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.446042 kubelet[2828]: E1029 00:34:23.446021 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.446940 kubelet[2828]: E1029 00:34:23.446895 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.446940 kubelet[2828]: W1029 00:34:23.446914 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.446940 kubelet[2828]: E1029 00:34:23.446926 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.447278 kubelet[2828]: E1029 00:34:23.447249 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.447278 kubelet[2828]: W1029 00:34:23.447263 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.447278 kubelet[2828]: E1029 00:34:23.447273 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.447518 kubelet[2828]: E1029 00:34:23.447489 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.447518 kubelet[2828]: W1029 00:34:23.447502 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.447709 kubelet[2828]: E1029 00:34:23.447547 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.448142 kubelet[2828]: E1029 00:34:23.448118 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.448238 kubelet[2828]: W1029 00:34:23.448216 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.448357 kubelet[2828]: E1029 00:34:23.448334 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.448916 kubelet[2828]: E1029 00:34:23.448819 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.449005 kubelet[2828]: W1029 00:34:23.448985 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.449099 kubelet[2828]: E1029 00:34:23.449077 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.449589 kubelet[2828]: E1029 00:34:23.449431 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.449589 kubelet[2828]: W1029 00:34:23.449448 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.449589 kubelet[2828]: E1029 00:34:23.449464 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.449845 kubelet[2828]: E1029 00:34:23.449826 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.449931 kubelet[2828]: W1029 00:34:23.449916 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.450000 kubelet[2828]: E1029 00:34:23.449986 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:23.450440 kubelet[2828]: E1029 00:34:23.450423 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.450538 kubelet[2828]: W1029 00:34:23.450521 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.450806 kubelet[2828]: E1029 00:34:23.450712 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.451198 kubelet[2828]: E1029 00:34:23.451152 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.451198 kubelet[2828]: W1029 00:34:23.451166 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.451198 kubelet[2828]: E1029 00:34:23.451177 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.454496 kubelet[2828]: E1029 00:34:23.454474 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:23.454496 kubelet[2828]: W1029 00:34:23.454491 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:23.454597 kubelet[2828]: E1029 00:34:23.454505 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:23.456549 containerd[1619]: time="2025-10-29T00:34:23.456092308Z" level=info msg="connecting to shim 75838c85d6e5c6583e85b5ae0059f2aa01308aad85988f57b318a66c2f995cfe" address="unix:///run/containerd/s/e3f7914dbad9881641361b58d60e49a11e0d7117dc7053071a9a3838595ffa6e" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:34:23.490934 systemd[1]: Started cri-containerd-75838c85d6e5c6583e85b5ae0059f2aa01308aad85988f57b318a66c2f995cfe.scope - libcontainer container 75838c85d6e5c6583e85b5ae0059f2aa01308aad85988f57b318a66c2f995cfe. 
Oct 29 00:34:23.530159 containerd[1619]: time="2025-10-29T00:34:23.530091419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gt4r5,Uid:6e37dff8-e7a5-412b-b826-9db4c38362ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"75838c85d6e5c6583e85b5ae0059f2aa01308aad85988f57b318a66c2f995cfe\"" Oct 29 00:34:23.531183 kubelet[2828]: E1029 00:34:23.531153 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:24.629127 kubelet[2828]: E1029 00:34:24.629058 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqtll" podUID="31820ff6-c2b5-4f1e-b097-0b66b5dd1baa" Oct 29 00:34:24.909541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3006066641.mount: Deactivated successfully. Oct 29 00:34:25.846942 containerd[1619]: time="2025-10-29T00:34:25.846867755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:34:25.847858 containerd[1619]: time="2025-10-29T00:34:25.847778437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 29 00:34:25.849134 containerd[1619]: time="2025-10-29T00:34:25.849085574Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:34:25.851423 containerd[1619]: time="2025-10-29T00:34:25.851365209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:34:25.852044 containerd[1619]: time="2025-10-29T00:34:25.851996416Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.417741553s" Oct 29 00:34:25.852044 containerd[1619]: time="2025-10-29T00:34:25.852035449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 29 00:34:25.853407 containerd[1619]: time="2025-10-29T00:34:25.853248629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 29 00:34:25.868514 containerd[1619]: time="2025-10-29T00:34:25.868463311Z" level=info msg="CreateContainer within sandbox \"21ff93399ccd0968ccdcaaf3336ae04884852c69a406191c0d789a2c550e3556\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 29 00:34:25.877528 containerd[1619]: time="2025-10-29T00:34:25.877477906Z" level=info msg="Container c3c359db25e17befc6b9715eca95a5f26ace71afdf2c5d0a76ae7086d05ec012: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:34:25.885116 containerd[1619]: time="2025-10-29T00:34:25.885067624Z" level=info msg="CreateContainer within sandbox \"21ff93399ccd0968ccdcaaf3336ae04884852c69a406191c0d789a2c550e3556\" for 
&ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c3c359db25e17befc6b9715eca95a5f26ace71afdf2c5d0a76ae7086d05ec012\"" Oct 29 00:34:25.885652 containerd[1619]: time="2025-10-29T00:34:25.885605134Z" level=info msg="StartContainer for \"c3c359db25e17befc6b9715eca95a5f26ace71afdf2c5d0a76ae7086d05ec012\"" Oct 29 00:34:25.887010 containerd[1619]: time="2025-10-29T00:34:25.886960682Z" level=info msg="connecting to shim c3c359db25e17befc6b9715eca95a5f26ace71afdf2c5d0a76ae7086d05ec012" address="unix:///run/containerd/s/49173533e8568510a11857ed1b08f1642a22116b38871675c64a390c236940fa" protocol=ttrpc version=3 Oct 29 00:34:25.910857 systemd[1]: Started cri-containerd-c3c359db25e17befc6b9715eca95a5f26ace71afdf2c5d0a76ae7086d05ec012.scope - libcontainer container c3c359db25e17befc6b9715eca95a5f26ace71afdf2c5d0a76ae7086d05ec012. Oct 29 00:34:25.978064 containerd[1619]: time="2025-10-29T00:34:25.977991685Z" level=info msg="StartContainer for \"c3c359db25e17befc6b9715eca95a5f26ace71afdf2c5d0a76ae7086d05ec012\" returns successfully" Oct 29 00:34:26.629433 kubelet[2828]: E1029 00:34:26.629369 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqtll" podUID="31820ff6-c2b5-4f1e-b097-0b66b5dd1baa" Oct 29 00:34:26.700971 kubelet[2828]: E1029 00:34:26.700926 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:26.745072 kubelet[2828]: E1029 00:34:26.745007 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.745072 kubelet[2828]: W1029 00:34:26.745035 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.745072 kubelet[2828]: E1029 00:34:26.745061 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.745381 kubelet[2828]: E1029 00:34:26.745353 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.745381 kubelet[2828]: W1029 00:34:26.745367 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.745381 kubelet[2828]: E1029 00:34:26.745379 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:26.745617 kubelet[2828]: E1029 00:34:26.745588 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.745617 kubelet[2828]: W1029 00:34:26.745601 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.745617 kubelet[2828]: E1029 00:34:26.745612 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.745948 kubelet[2828]: E1029 00:34:26.745917 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.745948 kubelet[2828]: W1029 00:34:26.745930 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.745948 kubelet[2828]: E1029 00:34:26.745941 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.746192 kubelet[2828]: E1029 00:34:26.746164 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.746192 kubelet[2828]: W1029 00:34:26.746176 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.746192 kubelet[2828]: E1029 00:34:26.746190 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.746454 kubelet[2828]: E1029 00:34:26.746416 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.746454 kubelet[2828]: W1029 00:34:26.746433 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.746454 kubelet[2828]: E1029 00:34:26.746449 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.746715 kubelet[2828]: E1029 00:34:26.746691 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.746715 kubelet[2828]: W1029 00:34:26.746704 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.746715 kubelet[2828]: E1029 00:34:26.746715 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:26.746971 kubelet[2828]: E1029 00:34:26.746946 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.746971 kubelet[2828]: W1029 00:34:26.746958 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.746971 kubelet[2828]: E1029 00:34:26.746969 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.747205 kubelet[2828]: E1029 00:34:26.747181 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.747205 kubelet[2828]: W1029 00:34:26.747192 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.747205 kubelet[2828]: E1029 00:34:26.747203 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.747464 kubelet[2828]: E1029 00:34:26.747440 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.747464 kubelet[2828]: W1029 00:34:26.747451 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.747464 kubelet[2828]: E1029 00:34:26.747462 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.747704 kubelet[2828]: E1029 00:34:26.747684 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.747704 kubelet[2828]: W1029 00:34:26.747696 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.747862 kubelet[2828]: E1029 00:34:26.747706 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.747955 kubelet[2828]: E1029 00:34:26.747934 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.747955 kubelet[2828]: W1029 00:34:26.747948 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.748029 kubelet[2828]: E1029 00:34:26.747960 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:26.748212 kubelet[2828]: E1029 00:34:26.748192 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.748212 kubelet[2828]: W1029 00:34:26.748204 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.748212 kubelet[2828]: E1029 00:34:26.748215 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.748451 kubelet[2828]: E1029 00:34:26.748425 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.748451 kubelet[2828]: W1029 00:34:26.748438 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.748451 kubelet[2828]: E1029 00:34:26.748450 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.748688 kubelet[2828]: E1029 00:34:26.748667 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.748688 kubelet[2828]: W1029 00:34:26.748679 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.748688 kubelet[2828]: E1029 00:34:26.748689 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.768213 kubelet[2828]: E1029 00:34:26.768161 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.768213 kubelet[2828]: W1029 00:34:26.768187 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.768213 kubelet[2828]: E1029 00:34:26.768209 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.768584 kubelet[2828]: E1029 00:34:26.768526 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.768584 kubelet[2828]: W1029 00:34:26.768568 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.768683 kubelet[2828]: E1029 00:34:26.768603 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:26.769081 kubelet[2828]: E1029 00:34:26.769052 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.769081 kubelet[2828]: W1029 00:34:26.769068 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.769081 kubelet[2828]: E1029 00:34:26.769079 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.769421 kubelet[2828]: E1029 00:34:26.769388 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.769421 kubelet[2828]: W1029 00:34:26.769407 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.769421 kubelet[2828]: E1029 00:34:26.769420 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.769692 kubelet[2828]: E1029 00:34:26.769677 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.769692 kubelet[2828]: W1029 00:34:26.769689 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.769779 kubelet[2828]: E1029 00:34:26.769700 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.769957 kubelet[2828]: E1029 00:34:26.769931 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.769957 kubelet[2828]: W1029 00:34:26.769946 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.770059 kubelet[2828]: E1029 00:34:26.769959 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.770282 kubelet[2828]: E1029 00:34:26.770258 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.770282 kubelet[2828]: W1029 00:34:26.770270 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.770282 kubelet[2828]: E1029 00:34:26.770282 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:26.770545 kubelet[2828]: E1029 00:34:26.770515 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.770545 kubelet[2828]: W1029 00:34:26.770529 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.770545 kubelet[2828]: E1029 00:34:26.770542 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.770822 kubelet[2828]: E1029 00:34:26.770789 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.770822 kubelet[2828]: W1029 00:34:26.770814 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.770904 kubelet[2828]: E1029 00:34:26.770829 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.771060 kubelet[2828]: E1029 00:34:26.771039 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.771060 kubelet[2828]: W1029 00:34:26.771051 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.771144 kubelet[2828]: E1029 00:34:26.771062 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.771286 kubelet[2828]: E1029 00:34:26.771269 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.771286 kubelet[2828]: W1029 00:34:26.771283 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.771357 kubelet[2828]: E1029 00:34:26.771295 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.771534 kubelet[2828]: E1029 00:34:26.771514 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.771534 kubelet[2828]: W1029 00:34:26.771526 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.771611 kubelet[2828]: E1029 00:34:26.771537 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:26.771950 kubelet[2828]: E1029 00:34:26.771886 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.771950 kubelet[2828]: W1029 00:34:26.771898 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.771950 kubelet[2828]: E1029 00:34:26.771910 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.772172 kubelet[2828]: E1029 00:34:26.772152 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.772172 kubelet[2828]: W1029 00:34:26.772164 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.772249 kubelet[2828]: E1029 00:34:26.772175 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.772394 kubelet[2828]: E1029 00:34:26.772375 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.772394 kubelet[2828]: W1029 00:34:26.772387 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.772471 kubelet[2828]: E1029 00:34:26.772397 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.772675 kubelet[2828]: E1029 00:34:26.772655 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.772675 kubelet[2828]: W1029 00:34:26.772668 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.772769 kubelet[2828]: E1029 00:34:26.772679 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:26.772987 kubelet[2828]: E1029 00:34:26.772967 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.772987 kubelet[2828]: W1029 00:34:26.772983 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.773065 kubelet[2828]: E1029 00:34:26.772995 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:34:26.773266 kubelet[2828]: E1029 00:34:26.773246 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:34:26.773266 kubelet[2828]: W1029 00:34:26.773258 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:34:26.773355 kubelet[2828]: E1029 00:34:26.773269 2828 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:34:27.532491 containerd[1619]: time="2025-10-29T00:34:27.532424409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:34:27.533119 containerd[1619]: time="2025-10-29T00:34:27.533093477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 29 00:34:27.534342 containerd[1619]: time="2025-10-29T00:34:27.534299783Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:34:27.536475 containerd[1619]: time="2025-10-29T00:34:27.536431159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:34:27.537035 containerd[1619]: time="2025-10-29T00:34:27.536992524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.683708819s" Oct 29 00:34:27.537035 containerd[1619]: time="2025-10-29T00:34:27.537022250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 29 00:34:27.545611 containerd[1619]: time="2025-10-29T00:34:27.545533295Z" level=info msg="CreateContainer within sandbox \"75838c85d6e5c6583e85b5ae0059f2aa01308aad85988f57b318a66c2f995cfe\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 29 00:34:27.555059 containerd[1619]: time="2025-10-29T00:34:27.554991990Z" level=info msg="Container 45d445f0bd77ca001919f9887ec968aa8b21c7a4b73feac7d6ee1049acb0d97b: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:34:27.563902 containerd[1619]: time="2025-10-29T00:34:27.563851590Z" level=info msg="CreateContainer within sandbox \"75838c85d6e5c6583e85b5ae0059f2aa01308aad85988f57b318a66c2f995cfe\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"45d445f0bd77ca001919f9887ec968aa8b21c7a4b73feac7d6ee1049acb0d97b\"" Oct 29 00:34:27.564653 containerd[1619]: time="2025-10-29T00:34:27.564573677Z" level=info msg="StartContainer for \"45d445f0bd77ca001919f9887ec968aa8b21c7a4b73feac7d6ee1049acb0d97b\"" Oct 29 00:34:27.566261 containerd[1619]: time="2025-10-29T00:34:27.566234478Z" level=info msg="connecting to 
shim 45d445f0bd77ca001919f9887ec968aa8b21c7a4b73feac7d6ee1049acb0d97b" address="unix:///run/containerd/s/e3f7914dbad9881641361b58d60e49a11e0d7117dc7053071a9a3838595ffa6e" protocol=ttrpc version=3 Oct 29 00:34:27.593885 systemd[1]: Started cri-containerd-45d445f0bd77ca001919f9887ec968aa8b21c7a4b73feac7d6ee1049acb0d97b.scope - libcontainer container 45d445f0bd77ca001919f9887ec968aa8b21c7a4b73feac7d6ee1049acb0d97b. Oct 29 00:34:27.645652 containerd[1619]: time="2025-10-29T00:34:27.645543894Z" level=info msg="StartContainer for \"45d445f0bd77ca001919f9887ec968aa8b21c7a4b73feac7d6ee1049acb0d97b\" returns successfully" Oct 29 00:34:27.659960 systemd[1]: cri-containerd-45d445f0bd77ca001919f9887ec968aa8b21c7a4b73feac7d6ee1049acb0d97b.scope: Deactivated successfully. Oct 29 00:34:27.663875 containerd[1619]: time="2025-10-29T00:34:27.663817004Z" level=info msg="received exit event container_id:\"45d445f0bd77ca001919f9887ec968aa8b21c7a4b73feac7d6ee1049acb0d97b\" id:\"45d445f0bd77ca001919f9887ec968aa8b21c7a4b73feac7d6ee1049acb0d97b\" pid:3547 exited_at:{seconds:1761698067 nanos:663254827}" Oct 29 00:34:27.664348 containerd[1619]: time="2025-10-29T00:34:27.664154638Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45d445f0bd77ca001919f9887ec968aa8b21c7a4b73feac7d6ee1049acb0d97b\" id:\"45d445f0bd77ca001919f9887ec968aa8b21c7a4b73feac7d6ee1049acb0d97b\" pid:3547 exited_at:{seconds:1761698067 nanos:663254827}" Oct 29 00:34:27.690370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45d445f0bd77ca001919f9887ec968aa8b21c7a4b73feac7d6ee1049acb0d97b-rootfs.mount: Deactivated successfully. Oct 29 00:34:27.706241 kubelet[2828]: I1029 00:34:27.706173 2828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 00:34:27.723261 kubelet[2828]: E1029 00:34:27.706530 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:27.723261 kubelet[2828]: E1029 00:34:27.706829 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:27.723261 kubelet[2828]: I1029 00:34:27.721319 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77b7b85c64-5ls94" podStartSLOduration=3.302182592 podStartE2EDuration="5.721304239s" podCreationTimestamp="2025-10-29 00:34:22 +0000 UTC" firstStartedPulling="2025-10-29 00:34:23.433864749 +0000 UTC m=+20.072861344" lastFinishedPulling="2025-10-29 00:34:25.852986396 +0000 UTC m=+22.491982991" observedRunningTime="2025-10-29 00:34:26.712457899 +0000 UTC m=+23.351454504" watchObservedRunningTime="2025-10-29 00:34:27.721304239 +0000 UTC m=+24.360300824" Oct 29 00:34:28.628854 kubelet[2828]: E1029 00:34:28.628754 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqtll" podUID="31820ff6-c2b5-4f1e-b097-0b66b5dd1baa" Oct 29 00:34:28.711417 kubelet[2828]: E1029 00:34:28.710711 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:28.714211 containerd[1619]: 
time="2025-10-29T00:34:28.711390582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 29 00:34:30.629197 kubelet[2828]: E1029 00:34:30.629126 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqtll" podUID="31820ff6-c2b5-4f1e-b097-0b66b5dd1baa" Oct 29 00:34:32.629024 kubelet[2828]: E1029 00:34:32.628959 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqtll" podUID="31820ff6-c2b5-4f1e-b097-0b66b5dd1baa" Oct 29 00:34:33.422452 containerd[1619]: time="2025-10-29T00:34:33.422384401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:34:33.436707 containerd[1619]: time="2025-10-29T00:34:33.436612873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 29 00:34:33.440305 containerd[1619]: time="2025-10-29T00:34:33.440255334Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:34:33.442732 containerd[1619]: time="2025-10-29T00:34:33.442691610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:34:33.443493 containerd[1619]: time="2025-10-29T00:34:33.443451436Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.73201612s" Oct 29 00:34:33.443493 containerd[1619]: time="2025-10-29T00:34:33.443487183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 29 00:34:33.448249 containerd[1619]: time="2025-10-29T00:34:33.448196607Z" level=info msg="CreateContainer within sandbox \"75838c85d6e5c6583e85b5ae0059f2aa01308aad85988f57b318a66c2f995cfe\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 29 00:34:33.459379 containerd[1619]: time="2025-10-29T00:34:33.459336148Z" level=info msg="Container 98ade82186af69282f2945b320a645b24a7d11ae7bc216f383a104fa16112dee: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:34:33.471222 containerd[1619]: time="2025-10-29T00:34:33.471157158Z" level=info msg="CreateContainer within sandbox \"75838c85d6e5c6583e85b5ae0059f2aa01308aad85988f57b318a66c2f995cfe\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"98ade82186af69282f2945b320a645b24a7d11ae7bc216f383a104fa16112dee\"" Oct 29 00:34:33.472670 containerd[1619]: time="2025-10-29T00:34:33.471689057Z" level=info msg="StartContainer for \"98ade82186af69282f2945b320a645b24a7d11ae7bc216f383a104fa16112dee\"" Oct 29 00:34:33.473436 containerd[1619]: 
time="2025-10-29T00:34:33.473387046Z" level=info msg="connecting to shim 98ade82186af69282f2945b320a645b24a7d11ae7bc216f383a104fa16112dee" address="unix:///run/containerd/s/e3f7914dbad9881641361b58d60e49a11e0d7117dc7053071a9a3838595ffa6e" protocol=ttrpc version=3 Oct 29 00:34:33.499818 systemd[1]: Started cri-containerd-98ade82186af69282f2945b320a645b24a7d11ae7bc216f383a104fa16112dee.scope - libcontainer container 98ade82186af69282f2945b320a645b24a7d11ae7bc216f383a104fa16112dee. Oct 29 00:34:33.546180 containerd[1619]: time="2025-10-29T00:34:33.546131391Z" level=info msg="StartContainer for \"98ade82186af69282f2945b320a645b24a7d11ae7bc216f383a104fa16112dee\" returns successfully" Oct 29 00:34:33.741697 kubelet[2828]: E1029 00:34:33.741405 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:34.628893 kubelet[2828]: E1029 00:34:34.628834 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqtll" podUID="31820ff6-c2b5-4f1e-b097-0b66b5dd1baa" Oct 29 00:34:34.723969 kubelet[2828]: E1029 00:34:34.723920 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:35.200991 systemd[1]: cri-containerd-98ade82186af69282f2945b320a645b24a7d11ae7bc216f383a104fa16112dee.scope: Deactivated successfully. Oct 29 00:34:35.201763 systemd[1]: cri-containerd-98ade82186af69282f2945b320a645b24a7d11ae7bc216f383a104fa16112dee.scope: Consumed 659ms CPU time, 178.5M memory peak, 3.4M read from disk, 171.3M written to disk. Oct 29 00:34:35.202570 containerd[1619]: time="2025-10-29T00:34:35.202046043Z" level=info msg="received exit event container_id:\"98ade82186af69282f2945b320a645b24a7d11ae7bc216f383a104fa16112dee\" id:\"98ade82186af69282f2945b320a645b24a7d11ae7bc216f383a104fa16112dee\" pid:3605 exited_at:{seconds:1761698075 nanos:201814728}" Oct 29 00:34:35.202570 containerd[1619]: time="2025-10-29T00:34:35.202194671Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98ade82186af69282f2945b320a645b24a7d11ae7bc216f383a104fa16112dee\" id:\"98ade82186af69282f2945b320a645b24a7d11ae7bc216f383a104fa16112dee\" pid:3605 exited_at:{seconds:1761698075 nanos:201814728}" Oct 29 00:34:35.268770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98ade82186af69282f2945b320a645b24a7d11ae7bc216f383a104fa16112dee-rootfs.mount: Deactivated successfully. Oct 29 00:34:35.336469 kubelet[2828]: I1029 00:34:35.336426 2828 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 29 00:34:35.682283 systemd[1]: Created slice kubepods-besteffort-pod4c9c1840_5938_4d20_aaba_3f102838a251.slice - libcontainer container kubepods-besteffort-pod4c9c1840_5938_4d20_aaba_3f102838a251.slice. Oct 29 00:34:35.691278 systemd[1]: Created slice kubepods-besteffort-pode6d89d0d_8eff_4088_87f1_00579f9e5f1f.slice - libcontainer container kubepods-besteffort-pode6d89d0d_8eff_4088_87f1_00579f9e5f1f.slice. Oct 29 00:34:35.699449 systemd[1]: Created slice kubepods-burstable-pod7410a0be_1e47_4bd5_ad08_82bed9e122fc.slice - libcontainer container kubepods-burstable-pod7410a0be_1e47_4bd5_ad08_82bed9e122fc.slice. 
Oct 29 00:34:35.710210 systemd[1]: Created slice kubepods-besteffort-pod226993fc_2dd7_48d2_9d26_aaf9fe3f09e4.slice - libcontainer container kubepods-besteffort-pod226993fc_2dd7_48d2_9d26_aaf9fe3f09e4.slice. Oct 29 00:34:35.719815 systemd[1]: Created slice kubepods-burstable-podc54f9d37_9b37_41e0_86e7_2635e688cb85.slice - libcontainer container kubepods-burstable-podc54f9d37_9b37_41e0_86e7_2635e688cb85.slice. Oct 29 00:34:35.728098 systemd[1]: Created slice kubepods-besteffort-pod3ed7287b_bf13_4e24_bffe_065ed4e362df.slice - libcontainer container kubepods-besteffort-pod3ed7287b_bf13_4e24_bffe_065ed4e362df.slice. Oct 29 00:34:35.739347 systemd[1]: Created slice kubepods-besteffort-podc2fb43e4_9dd6_4f00_992a_4f7339772bdb.slice - libcontainer container kubepods-besteffort-podc2fb43e4_9dd6_4f00_992a_4f7339772bdb.slice. Oct 29 00:34:35.741555 kubelet[2828]: E1029 00:34:35.741502 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:35.742382 containerd[1619]: time="2025-10-29T00:34:35.742298286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 29 00:34:35.764871 kubelet[2828]: I1029 00:34:35.764810 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gz4w\" (UniqueName: \"kubernetes.io/projected/e6d89d0d-8eff-4088-87f1-00579f9e5f1f-kube-api-access-6gz4w\") pod \"calico-apiserver-99fd685fb-9ss4w\" (UID: \"e6d89d0d-8eff-4088-87f1-00579f9e5f1f\") " pod="calico-apiserver/calico-apiserver-99fd685fb-9ss4w" Oct 29 00:34:35.764871 kubelet[2828]: I1029 00:34:35.764852 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3ed7287b-bf13-4e24-bffe-065ed4e362df-whisker-backend-key-pair\") pod \"whisker-7cc84748df-nfzmw\" (UID: \"3ed7287b-bf13-4e24-bffe-065ed4e362df\") " pod="calico-system/whisker-7cc84748df-nfzmw" Oct 29 00:34:35.764871 kubelet[2828]: I1029 00:34:35.764869 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c2fb43e4-9dd6-4f00-992a-4f7339772bdb-goldmane-key-pair\") pod \"goldmane-666569f655-c97gh\" (UID: \"c2fb43e4-9dd6-4f00-992a-4f7339772bdb\") " pod="calico-system/goldmane-666569f655-c97gh" Oct 29 00:34:35.764871 kubelet[2828]: I1029 00:34:35.764888 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7410a0be-1e47-4bd5-ad08-82bed9e122fc-config-volume\") pod \"coredns-674b8bbfcf-744kg\" (UID: \"7410a0be-1e47-4bd5-ad08-82bed9e122fc\") " pod="kube-system/coredns-674b8bbfcf-744kg" Oct 29 00:34:35.765185 kubelet[2828]: I1029 00:34:35.764919 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e6d89d0d-8eff-4088-87f1-00579f9e5f1f-calico-apiserver-certs\") pod \"calico-apiserver-99fd685fb-9ss4w\" (UID: \"e6d89d0d-8eff-4088-87f1-00579f9e5f1f\") " pod="calico-apiserver/calico-apiserver-99fd685fb-9ss4w" Oct 29 00:34:35.765185 kubelet[2828]: I1029 00:34:35.764934 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klhvn\" (UniqueName: 
\"kubernetes.io/projected/3ed7287b-bf13-4e24-bffe-065ed4e362df-kube-api-access-klhvn\") pod \"whisker-7cc84748df-nfzmw\" (UID: \"3ed7287b-bf13-4e24-bffe-065ed4e362df\") " pod="calico-system/whisker-7cc84748df-nfzmw" Oct 29 00:34:35.765185 kubelet[2828]: I1029 00:34:35.764949 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/226993fc-2dd7-48d2-9d26-aaf9fe3f09e4-calico-apiserver-certs\") pod \"calico-apiserver-99fd685fb-vhm8w\" (UID: \"226993fc-2dd7-48d2-9d26-aaf9fe3f09e4\") " pod="calico-apiserver/calico-apiserver-99fd685fb-vhm8w" Oct 29 00:34:35.765185 kubelet[2828]: I1029 00:34:35.764971 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxxpm\" (UniqueName: \"kubernetes.io/projected/226993fc-2dd7-48d2-9d26-aaf9fe3f09e4-kube-api-access-dxxpm\") pod \"calico-apiserver-99fd685fb-vhm8w\" (UID: \"226993fc-2dd7-48d2-9d26-aaf9fe3f09e4\") " pod="calico-apiserver/calico-apiserver-99fd685fb-vhm8w" Oct 29 00:34:35.765185 kubelet[2828]: I1029 00:34:35.764985 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzp8r\" (UniqueName: \"kubernetes.io/projected/c54f9d37-9b37-41e0-86e7-2635e688cb85-kube-api-access-dzp8r\") pod \"coredns-674b8bbfcf-zmz5w\" (UID: \"c54f9d37-9b37-41e0-86e7-2635e688cb85\") " pod="kube-system/coredns-674b8bbfcf-zmz5w" Oct 29 00:34:35.765367 kubelet[2828]: I1029 00:34:35.765002 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c9c1840-5938-4d20-aaba-3f102838a251-tigera-ca-bundle\") pod \"calico-kube-controllers-67654757f7-t64h5\" (UID: \"4c9c1840-5938-4d20-aaba-3f102838a251\") " pod="calico-system/calico-kube-controllers-67654757f7-t64h5" Oct 29 00:34:35.765367 kubelet[2828]: I1029 00:34:35.765037 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ed7287b-bf13-4e24-bffe-065ed4e362df-whisker-ca-bundle\") pod \"whisker-7cc84748df-nfzmw\" (UID: \"3ed7287b-bf13-4e24-bffe-065ed4e362df\") " pod="calico-system/whisker-7cc84748df-nfzmw" Oct 29 00:34:35.765367 kubelet[2828]: I1029 00:34:35.765052 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c54f9d37-9b37-41e0-86e7-2635e688cb85-config-volume\") pod \"coredns-674b8bbfcf-zmz5w\" (UID: \"c54f9d37-9b37-41e0-86e7-2635e688cb85\") " pod="kube-system/coredns-674b8bbfcf-zmz5w" Oct 29 00:34:35.765367 kubelet[2828]: I1029 00:34:35.765065 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2fb43e4-9dd6-4f00-992a-4f7339772bdb-goldmane-ca-bundle\") pod \"goldmane-666569f655-c97gh\" (UID: \"c2fb43e4-9dd6-4f00-992a-4f7339772bdb\") " pod="calico-system/goldmane-666569f655-c97gh" Oct 29 00:34:35.765367 kubelet[2828]: I1029 00:34:35.765080 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2fb43e4-9dd6-4f00-992a-4f7339772bdb-config\") pod \"goldmane-666569f655-c97gh\" (UID: \"c2fb43e4-9dd6-4f00-992a-4f7339772bdb\") " pod="calico-system/goldmane-666569f655-c97gh" Oct 29 
00:34:35.765542 kubelet[2828]: I1029 00:34:35.765123 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nthc\" (UniqueName: \"kubernetes.io/projected/c2fb43e4-9dd6-4f00-992a-4f7339772bdb-kube-api-access-2nthc\") pod \"goldmane-666569f655-c97gh\" (UID: \"c2fb43e4-9dd6-4f00-992a-4f7339772bdb\") " pod="calico-system/goldmane-666569f655-c97gh" Oct 29 00:34:35.765542 kubelet[2828]: I1029 00:34:35.765150 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kh2x\" (UniqueName: \"kubernetes.io/projected/4c9c1840-5938-4d20-aaba-3f102838a251-kube-api-access-8kh2x\") pod \"calico-kube-controllers-67654757f7-t64h5\" (UID: \"4c9c1840-5938-4d20-aaba-3f102838a251\") " pod="calico-system/calico-kube-controllers-67654757f7-t64h5" Oct 29 00:34:35.765542 kubelet[2828]: I1029 00:34:35.765171 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbtm4\" (UniqueName: \"kubernetes.io/projected/7410a0be-1e47-4bd5-ad08-82bed9e122fc-kube-api-access-jbtm4\") pod \"coredns-674b8bbfcf-744kg\" (UID: \"7410a0be-1e47-4bd5-ad08-82bed9e122fc\") " pod="kube-system/coredns-674b8bbfcf-744kg" Oct 29 00:34:35.987683 containerd[1619]: time="2025-10-29T00:34:35.987502156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67654757f7-t64h5,Uid:4c9c1840-5938-4d20-aaba-3f102838a251,Namespace:calico-system,Attempt:0,}" Oct 29 00:34:35.997192 containerd[1619]: time="2025-10-29T00:34:35.997155132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99fd685fb-9ss4w,Uid:e6d89d0d-8eff-4088-87f1-00579f9e5f1f,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:34:36.005620 kubelet[2828]: E1029 00:34:36.005533 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:36.014676 containerd[1619]: time="2025-10-29T00:34:36.014512081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-744kg,Uid:7410a0be-1e47-4bd5-ad08-82bed9e122fc,Namespace:kube-system,Attempt:0,}" Oct 29 00:34:36.016584 containerd[1619]: time="2025-10-29T00:34:36.016531522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99fd685fb-vhm8w,Uid:226993fc-2dd7-48d2-9d26-aaf9fe3f09e4,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:34:36.024002 kubelet[2828]: E1029 00:34:36.023895 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:36.024492 containerd[1619]: time="2025-10-29T00:34:36.024374968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zmz5w,Uid:c54f9d37-9b37-41e0-86e7-2635e688cb85,Namespace:kube-system,Attempt:0,}" Oct 29 00:34:36.041807 containerd[1619]: time="2025-10-29T00:34:36.041344140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cc84748df-nfzmw,Uid:3ed7287b-bf13-4e24-bffe-065ed4e362df,Namespace:calico-system,Attempt:0,}" Oct 29 00:34:36.045216 containerd[1619]: time="2025-10-29T00:34:36.045014200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c97gh,Uid:c2fb43e4-9dd6-4f00-992a-4f7339772bdb,Namespace:calico-system,Attempt:0,}" Oct 29 00:34:36.144180 containerd[1619]: time="2025-10-29T00:34:36.143984973Z" 
level=error msg="Failed to destroy network for sandbox \"c9a529062c0345c5e914d38b6cb27eac59bce4ceb62f1571988e3b469ddd2b1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.147084 containerd[1619]: time="2025-10-29T00:34:36.147040310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99fd685fb-9ss4w,Uid:e6d89d0d-8eff-4088-87f1-00579f9e5f1f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9a529062c0345c5e914d38b6cb27eac59bce4ceb62f1571988e3b469ddd2b1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.147505 kubelet[2828]: E1029 00:34:36.147467 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9a529062c0345c5e914d38b6cb27eac59bce4ceb62f1571988e3b469ddd2b1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.148675 kubelet[2828]: E1029 00:34:36.147626 2828 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9a529062c0345c5e914d38b6cb27eac59bce4ceb62f1571988e3b469ddd2b1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99fd685fb-9ss4w" Oct 29 00:34:36.148675 kubelet[2828]: E1029 00:34:36.147847 2828 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9a529062c0345c5e914d38b6cb27eac59bce4ceb62f1571988e3b469ddd2b1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99fd685fb-9ss4w" Oct 29 00:34:36.148675 kubelet[2828]: E1029 00:34:36.147934 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-99fd685fb-9ss4w_calico-apiserver(e6d89d0d-8eff-4088-87f1-00579f9e5f1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-99fd685fb-9ss4w_calico-apiserver(e6d89d0d-8eff-4088-87f1-00579f9e5f1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9a529062c0345c5e914d38b6cb27eac59bce4ceb62f1571988e3b469ddd2b1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-99fd685fb-9ss4w" podUID="e6d89d0d-8eff-4088-87f1-00579f9e5f1f" Oct 29 00:34:36.160229 containerd[1619]: time="2025-10-29T00:34:36.160183908Z" level=error msg="Failed to destroy network for sandbox \"61c3c638c2fde5f762e9b52a651e6662ddf5a45aad8eef4c183d5c94d3d40635\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Oct 29 00:34:36.162860 containerd[1619]: time="2025-10-29T00:34:36.162790872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99fd685fb-vhm8w,Uid:226993fc-2dd7-48d2-9d26-aaf9fe3f09e4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c3c638c2fde5f762e9b52a651e6662ddf5a45aad8eef4c183d5c94d3d40635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.163144 kubelet[2828]: E1029 00:34:36.163096 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c3c638c2fde5f762e9b52a651e6662ddf5a45aad8eef4c183d5c94d3d40635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.164274 kubelet[2828]: E1029 00:34:36.163354 2828 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c3c638c2fde5f762e9b52a651e6662ddf5a45aad8eef4c183d5c94d3d40635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99fd685fb-vhm8w" Oct 29 00:34:36.164274 kubelet[2828]: E1029 00:34:36.163386 2828 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c3c638c2fde5f762e9b52a651e6662ddf5a45aad8eef4c183d5c94d3d40635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99fd685fb-vhm8w" Oct 29 00:34:36.164274 kubelet[2828]: E1029 00:34:36.163454 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-99fd685fb-vhm8w_calico-apiserver(226993fc-2dd7-48d2-9d26-aaf9fe3f09e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-99fd685fb-vhm8w_calico-apiserver(226993fc-2dd7-48d2-9d26-aaf9fe3f09e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61c3c638c2fde5f762e9b52a651e6662ddf5a45aad8eef4c183d5c94d3d40635\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-99fd685fb-vhm8w" podUID="226993fc-2dd7-48d2-9d26-aaf9fe3f09e4" Oct 29 00:34:36.171915 containerd[1619]: time="2025-10-29T00:34:36.171849731Z" level=error msg="Failed to destroy network for sandbox \"a5ab09fac7fe33d1e6831a978999278548bd2d961991dce4478c94d1afe3a469\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.172715 containerd[1619]: time="2025-10-29T00:34:36.172675912Z" level=error msg="Failed to destroy network for sandbox \"5da9b6109a400555aa29501aff404d178ea617a9450aec356494d2b022d8e62a\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.173307 containerd[1619]: time="2025-10-29T00:34:36.173265218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67654757f7-t64h5,Uid:4c9c1840-5938-4d20-aaba-3f102838a251,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5ab09fac7fe33d1e6831a978999278548bd2d961991dce4478c94d1afe3a469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.173823 kubelet[2828]: E1029 00:34:36.173784 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5ab09fac7fe33d1e6831a978999278548bd2d961991dce4478c94d1afe3a469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.173969 kubelet[2828]: E1029 00:34:36.173945 2828 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5ab09fac7fe33d1e6831a978999278548bd2d961991dce4478c94d1afe3a469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67654757f7-t64h5" Oct 29 00:34:36.174053 kubelet[2828]: E1029 00:34:36.174034 2828 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5ab09fac7fe33d1e6831a978999278548bd2d961991dce4478c94d1afe3a469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67654757f7-t64h5" Oct 29 00:34:36.174321 kubelet[2828]: E1029 00:34:36.174176 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67654757f7-t64h5_calico-system(4c9c1840-5938-4d20-aaba-3f102838a251)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67654757f7-t64h5_calico-system(4c9c1840-5938-4d20-aaba-3f102838a251)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5ab09fac7fe33d1e6831a978999278548bd2d961991dce4478c94d1afe3a469\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67654757f7-t64h5" podUID="4c9c1840-5938-4d20-aaba-3f102838a251" Oct 29 00:34:36.174751 containerd[1619]: time="2025-10-29T00:34:36.174565419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cc84748df-nfzmw,Uid:3ed7287b-bf13-4e24-bffe-065ed4e362df,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5da9b6109a400555aa29501aff404d178ea617a9450aec356494d2b022d8e62a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.175473 kubelet[2828]: E1029 00:34:36.174860 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5da9b6109a400555aa29501aff404d178ea617a9450aec356494d2b022d8e62a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.175473 kubelet[2828]: E1029 00:34:36.174933 2828 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5da9b6109a400555aa29501aff404d178ea617a9450aec356494d2b022d8e62a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7cc84748df-nfzmw" Oct 29 00:34:36.175473 kubelet[2828]: E1029 00:34:36.174959 2828 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5da9b6109a400555aa29501aff404d178ea617a9450aec356494d2b022d8e62a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7cc84748df-nfzmw" Oct 29 00:34:36.175665 kubelet[2828]: E1029 00:34:36.175007 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7cc84748df-nfzmw_calico-system(3ed7287b-bf13-4e24-bffe-065ed4e362df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7cc84748df-nfzmw_calico-system(3ed7287b-bf13-4e24-bffe-065ed4e362df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5da9b6109a400555aa29501aff404d178ea617a9450aec356494d2b022d8e62a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7cc84748df-nfzmw" podUID="3ed7287b-bf13-4e24-bffe-065ed4e362df" Oct 29 00:34:36.185048 containerd[1619]: time="2025-10-29T00:34:36.184989742Z" level=error msg="Failed to destroy network for sandbox \"de95001f680c01538090ca06b6a361af6efd63c563361e46f8b73f05ca2447dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.185227 containerd[1619]: time="2025-10-29T00:34:36.185160443Z" level=error msg="Failed to destroy network for sandbox \"479f1063e071740aa8e4a3b02d8edce30474a21367a269011561255a83fd60d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.186465 containerd[1619]: time="2025-10-29T00:34:36.186418304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-744kg,Uid:7410a0be-1e47-4bd5-ad08-82bed9e122fc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de95001f680c01538090ca06b6a361af6efd63c563361e46f8b73f05ca2447dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.186708 kubelet[2828]: E1029 00:34:36.186671 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de95001f680c01538090ca06b6a361af6efd63c563361e46f8b73f05ca2447dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.186781 kubelet[2828]: E1029 00:34:36.186744 2828 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de95001f680c01538090ca06b6a361af6efd63c563361e46f8b73f05ca2447dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-744kg" Oct 29 00:34:36.186781 kubelet[2828]: E1029 00:34:36.186766 2828 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de95001f680c01538090ca06b6a361af6efd63c563361e46f8b73f05ca2447dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-744kg" Oct 29 00:34:36.186866 kubelet[2828]: E1029 00:34:36.186836 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-744kg_kube-system(7410a0be-1e47-4bd5-ad08-82bed9e122fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-744kg_kube-system(7410a0be-1e47-4bd5-ad08-82bed9e122fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de95001f680c01538090ca06b6a361af6efd63c563361e46f8b73f05ca2447dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-744kg" podUID="7410a0be-1e47-4bd5-ad08-82bed9e122fc" Oct 29 00:34:36.187547 containerd[1619]: time="2025-10-29T00:34:36.187504554Z" level=error msg="Failed to destroy network for sandbox \"c52ac48ac718298d7f70968975831eec3ba999586699b8ccbd18d78ab0afb207\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.187693 containerd[1619]: time="2025-10-29T00:34:36.187564637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c97gh,Uid:c2fb43e4-9dd6-4f00-992a-4f7339772bdb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"479f1063e071740aa8e4a3b02d8edce30474a21367a269011561255a83fd60d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.188760 kubelet[2828]: E1029 00:34:36.188716 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"479f1063e071740aa8e4a3b02d8edce30474a21367a269011561255a83fd60d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.188892 containerd[1619]: time="2025-10-29T00:34:36.188853136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zmz5w,Uid:c54f9d37-9b37-41e0-86e7-2635e688cb85,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c52ac48ac718298d7f70968975831eec3ba999586699b8ccbd18d78ab0afb207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.189533 kubelet[2828]: E1029 00:34:36.189060 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c52ac48ac718298d7f70968975831eec3ba999586699b8ccbd18d78ab0afb207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.189533 kubelet[2828]: E1029 00:34:36.189112 2828 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c52ac48ac718298d7f70968975831eec3ba999586699b8ccbd18d78ab0afb207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zmz5w" Oct 29 00:34:36.189533 kubelet[2828]: E1029 00:34:36.189134 2828 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c52ac48ac718298d7f70968975831eec3ba999586699b8ccbd18d78ab0afb207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zmz5w" Oct 29 00:34:36.189678 kubelet[2828]: E1029 00:34:36.189192 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zmz5w_kube-system(c54f9d37-9b37-41e0-86e7-2635e688cb85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zmz5w_kube-system(c54f9d37-9b37-41e0-86e7-2635e688cb85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c52ac48ac718298d7f70968975831eec3ba999586699b8ccbd18d78ab0afb207\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zmz5w" podUID="c54f9d37-9b37-41e0-86e7-2635e688cb85" Oct 29 00:34:36.189678 kubelet[2828]: E1029 00:34:36.189324 2828 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"479f1063e071740aa8e4a3b02d8edce30474a21367a269011561255a83fd60d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-c97gh" Oct 29 
00:34:36.189678 kubelet[2828]: E1029 00:34:36.189378 2828 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"479f1063e071740aa8e4a3b02d8edce30474a21367a269011561255a83fd60d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-c97gh" Oct 29 00:34:36.189772 kubelet[2828]: E1029 00:34:36.189429 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-c97gh_calico-system(c2fb43e4-9dd6-4f00-992a-4f7339772bdb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-c97gh_calico-system(c2fb43e4-9dd6-4f00-992a-4f7339772bdb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"479f1063e071740aa8e4a3b02d8edce30474a21367a269011561255a83fd60d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-c97gh" podUID="c2fb43e4-9dd6-4f00-992a-4f7339772bdb" Oct 29 00:34:36.637499 systemd[1]: Created slice kubepods-besteffort-pod31820ff6_c2b5_4f1e_b097_0b66b5dd1baa.slice - libcontainer container kubepods-besteffort-pod31820ff6_c2b5_4f1e_b097_0b66b5dd1baa.slice. Oct 29 00:34:36.640495 containerd[1619]: time="2025-10-29T00:34:36.640450856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqtll,Uid:31820ff6-c2b5-4f1e-b097-0b66b5dd1baa,Namespace:calico-system,Attempt:0,}" Oct 29 00:34:36.696683 containerd[1619]: time="2025-10-29T00:34:36.696575963Z" level=error msg="Failed to destroy network for sandbox \"54738631b23ce190d5528499d228d4df2781dfa719f29a0a75c66f851e3fc486\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.698390 containerd[1619]: time="2025-10-29T00:34:36.698328985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqtll,Uid:31820ff6-c2b5-4f1e-b097-0b66b5dd1baa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54738631b23ce190d5528499d228d4df2781dfa719f29a0a75c66f851e3fc486\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.698710 kubelet[2828]: E1029 00:34:36.698657 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54738631b23ce190d5528499d228d4df2781dfa719f29a0a75c66f851e3fc486\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:34:36.699074 kubelet[2828]: E1029 00:34:36.698730 2828 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54738631b23ce190d5528499d228d4df2781dfa719f29a0a75c66f851e3fc486\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqtll" Oct 29 00:34:36.699074 kubelet[2828]: E1029 00:34:36.698758 2828 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54738631b23ce190d5528499d228d4df2781dfa719f29a0a75c66f851e3fc486\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqtll" Oct 29 00:34:36.699074 kubelet[2828]: E1029 00:34:36.698817 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qqtll_calico-system(31820ff6-c2b5-4f1e-b097-0b66b5dd1baa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qqtll_calico-system(31820ff6-c2b5-4f1e-b097-0b66b5dd1baa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54738631b23ce190d5528499d228d4df2781dfa719f29a0a75c66f851e3fc486\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qqtll" podUID="31820ff6-c2b5-4f1e-b097-0b66b5dd1baa" Oct 29 00:34:36.699711 systemd[1]: run-netns-cni\x2dd917a6ba\x2d1b3d\x2d5745\x2d7fbe\x2d89b2130a7f07.mount: Deactivated successfully. Oct 29 00:34:42.521969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4226648747.mount: Deactivated successfully. Oct 29 00:34:43.779908 containerd[1619]: time="2025-10-29T00:34:43.779828006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:34:43.780742 containerd[1619]: time="2025-10-29T00:34:43.780705302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 29 00:34:43.783987 containerd[1619]: time="2025-10-29T00:34:43.783938260Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:34:43.789189 containerd[1619]: time="2025-10-29T00:34:43.789122470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:34:43.790050 containerd[1619]: time="2025-10-29T00:34:43.789977885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.047623384s" Oct 29 00:34:43.790050 containerd[1619]: time="2025-10-29T00:34:43.790041614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 29 00:34:43.818253 containerd[1619]: time="2025-10-29T00:34:43.818199941Z" level=info msg="CreateContainer within sandbox \"75838c85d6e5c6583e85b5ae0059f2aa01308aad85988f57b318a66c2f995cfe\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 29 00:34:43.830414 
containerd[1619]: time="2025-10-29T00:34:43.830354783Z" level=info msg="Container c1a7ee94ba46a7f0e5e04c3e91fb566ff78e45022f6034d61fcc29cc6175d707: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:34:43.844994 containerd[1619]: time="2025-10-29T00:34:43.844922773Z" level=info msg="CreateContainer within sandbox \"75838c85d6e5c6583e85b5ae0059f2aa01308aad85988f57b318a66c2f995cfe\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c1a7ee94ba46a7f0e5e04c3e91fb566ff78e45022f6034d61fcc29cc6175d707\"" Oct 29 00:34:43.845873 containerd[1619]: time="2025-10-29T00:34:43.845821460Z" level=info msg="StartContainer for \"c1a7ee94ba46a7f0e5e04c3e91fb566ff78e45022f6034d61fcc29cc6175d707\"" Oct 29 00:34:43.847804 containerd[1619]: time="2025-10-29T00:34:43.847768764Z" level=info msg="connecting to shim c1a7ee94ba46a7f0e5e04c3e91fb566ff78e45022f6034d61fcc29cc6175d707" address="unix:///run/containerd/s/e3f7914dbad9881641361b58d60e49a11e0d7117dc7053071a9a3838595ffa6e" protocol=ttrpc version=3 Oct 29 00:34:43.881937 systemd[1]: Started cri-containerd-c1a7ee94ba46a7f0e5e04c3e91fb566ff78e45022f6034d61fcc29cc6175d707.scope - libcontainer container c1a7ee94ba46a7f0e5e04c3e91fb566ff78e45022f6034d61fcc29cc6175d707. Oct 29 00:34:44.015883 containerd[1619]: time="2025-10-29T00:34:44.015823642Z" level=info msg="StartContainer for \"c1a7ee94ba46a7f0e5e04c3e91fb566ff78e45022f6034d61fcc29cc6175d707\" returns successfully" Oct 29 00:34:44.104192 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 29 00:34:44.105337 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 29 00:34:44.231626 kubelet[2828]: I1029 00:34:44.230854 2828 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3ed7287b-bf13-4e24-bffe-065ed4e362df-whisker-backend-key-pair\") pod \"3ed7287b-bf13-4e24-bffe-065ed4e362df\" (UID: \"3ed7287b-bf13-4e24-bffe-065ed4e362df\") " Oct 29 00:34:44.231626 kubelet[2828]: I1029 00:34:44.230910 2828 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ed7287b-bf13-4e24-bffe-065ed4e362df-whisker-ca-bundle\") pod \"3ed7287b-bf13-4e24-bffe-065ed4e362df\" (UID: \"3ed7287b-bf13-4e24-bffe-065ed4e362df\") " Oct 29 00:34:44.231626 kubelet[2828]: I1029 00:34:44.230929 2828 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klhvn\" (UniqueName: \"kubernetes.io/projected/3ed7287b-bf13-4e24-bffe-065ed4e362df-kube-api-access-klhvn\") pod \"3ed7287b-bf13-4e24-bffe-065ed4e362df\" (UID: \"3ed7287b-bf13-4e24-bffe-065ed4e362df\") " Oct 29 00:34:44.232105 kubelet[2828]: I1029 00:34:44.231751 2828 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed7287b-bf13-4e24-bffe-065ed4e362df-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3ed7287b-bf13-4e24-bffe-065ed4e362df" (UID: "3ed7287b-bf13-4e24-bffe-065ed4e362df"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 00:34:44.235292 kubelet[2828]: I1029 00:34:44.235230 2828 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ed7287b-bf13-4e24-bffe-065ed4e362df-kube-api-access-klhvn" (OuterVolumeSpecName: "kube-api-access-klhvn") pod "3ed7287b-bf13-4e24-bffe-065ed4e362df" (UID: "3ed7287b-bf13-4e24-bffe-065ed4e362df"). 
InnerVolumeSpecName "kube-api-access-klhvn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 00:34:44.235583 kubelet[2828]: I1029 00:34:44.235560 2828 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed7287b-bf13-4e24-bffe-065ed4e362df-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3ed7287b-bf13-4e24-bffe-065ed4e362df" (UID: "3ed7287b-bf13-4e24-bffe-065ed4e362df"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 00:34:44.332043 kubelet[2828]: I1029 00:34:44.331988 2828 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3ed7287b-bf13-4e24-bffe-065ed4e362df-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 29 00:34:44.332043 kubelet[2828]: I1029 00:34:44.332026 2828 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ed7287b-bf13-4e24-bffe-065ed4e362df-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 29 00:34:44.332043 kubelet[2828]: I1029 00:34:44.332035 2828 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-klhvn\" (UniqueName: \"kubernetes.io/projected/3ed7287b-bf13-4e24-bffe-065ed4e362df-kube-api-access-klhvn\") on node \"localhost\" DevicePath \"\"" Oct 29 00:34:44.512823 systemd[1]: Started sshd@9-10.0.0.10:22-10.0.0.1:54128.service - OpenSSH per-connection server daemon (10.0.0.1:54128). Oct 29 00:34:44.603081 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 54128 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:34:44.604914 sshd-session[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:34:44.610039 systemd-logind[1587]: New session 10 of user core. Oct 29 00:34:44.617796 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 29 00:34:44.766454 sshd[3990]: Connection closed by 10.0.0.1 port 54128 Oct 29 00:34:44.767838 sshd-session[3987]: pam_unix(sshd:session): session closed for user core Oct 29 00:34:44.768217 kubelet[2828]: E1029 00:34:44.768179 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:44.772462 systemd[1]: sshd@9-10.0.0.10:22-10.0.0.1:54128.service: Deactivated successfully. Oct 29 00:34:44.775483 systemd[1]: session-10.scope: Deactivated successfully. Oct 29 00:34:44.776554 systemd-logind[1587]: Session 10 logged out. Waiting for processes to exit. Oct 29 00:34:44.780001 systemd[1]: Removed slice kubepods-besteffort-pod3ed7287b_bf13_4e24_bffe_065ed4e362df.slice - libcontainer container kubepods-besteffort-pod3ed7287b_bf13_4e24_bffe_065ed4e362df.slice. Oct 29 00:34:44.780704 systemd-logind[1587]: Removed session 10. 
Oct 29 00:34:44.788284 kubelet[2828]: I1029 00:34:44.788217 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gt4r5" podStartSLOduration=1.529331889 podStartE2EDuration="21.788199356s" podCreationTimestamp="2025-10-29 00:34:23 +0000 UTC" firstStartedPulling="2025-10-29 00:34:23.532022701 +0000 UTC m=+20.171019296" lastFinishedPulling="2025-10-29 00:34:43.790890168 +0000 UTC m=+40.429886763" observedRunningTime="2025-10-29 00:34:44.787949336 +0000 UTC m=+41.426945931" watchObservedRunningTime="2025-10-29 00:34:44.788199356 +0000 UTC m=+41.427195951" Oct 29 00:34:44.798558 systemd[1]: var-lib-kubelet-pods-3ed7287b\x2dbf13\x2d4e24\x2dbffe\x2d065ed4e362df-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dklhvn.mount: Deactivated successfully. Oct 29 00:34:44.798721 systemd[1]: var-lib-kubelet-pods-3ed7287b\x2dbf13\x2d4e24\x2dbffe\x2d065ed4e362df-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 29 00:34:44.841133 systemd[1]: Created slice kubepods-besteffort-pod4100b610_f524_41bd_8d21_e97b360c25bf.slice - libcontainer container kubepods-besteffort-pod4100b610_f524_41bd_8d21_e97b360c25bf.slice. Oct 29 00:34:44.937542 kubelet[2828]: I1029 00:34:44.937476 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4100b610-f524-41bd-8d21-e97b360c25bf-whisker-ca-bundle\") pod \"whisker-589b55fc85-qrw8q\" (UID: \"4100b610-f524-41bd-8d21-e97b360c25bf\") " pod="calico-system/whisker-589b55fc85-qrw8q" Oct 29 00:34:44.937542 kubelet[2828]: I1029 00:34:44.937531 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4100b610-f524-41bd-8d21-e97b360c25bf-whisker-backend-key-pair\") pod \"whisker-589b55fc85-qrw8q\" (UID: \"4100b610-f524-41bd-8d21-e97b360c25bf\") " pod="calico-system/whisker-589b55fc85-qrw8q" Oct 29 00:34:44.937542 kubelet[2828]: I1029 00:34:44.937549 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz2l9\" (UniqueName: \"kubernetes.io/projected/4100b610-f524-41bd-8d21-e97b360c25bf-kube-api-access-sz2l9\") pod \"whisker-589b55fc85-qrw8q\" (UID: \"4100b610-f524-41bd-8d21-e97b360c25bf\") " pod="calico-system/whisker-589b55fc85-qrw8q" Oct 29 00:34:45.146000 containerd[1619]: time="2025-10-29T00:34:45.145911671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-589b55fc85-qrw8q,Uid:4100b610-f524-41bd-8d21-e97b360c25bf,Namespace:calico-system,Attempt:0,}" Oct 29 00:34:45.530142 systemd-networkd[1518]: calied35c972cf9: Link UP Oct 29 00:34:45.531849 systemd-networkd[1518]: calied35c972cf9: Gained carrier Oct 29 00:34:45.559748 containerd[1619]: 2025-10-29 00:34:45.174 [INFO][4006] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:34:45.559748 containerd[1619]: 2025-10-29 00:34:45.206 [INFO][4006] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--589b55fc85--qrw8q-eth0 whisker-589b55fc85- calico-system 4100b610-f524-41bd-8d21-e97b360c25bf 951 0 2025-10-29 00:34:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:589b55fc85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 
localhost whisker-589b55fc85-qrw8q eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calied35c972cf9 [] [] }} ContainerID="f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" Namespace="calico-system" Pod="whisker-589b55fc85-qrw8q" WorkloadEndpoint="localhost-k8s-whisker--589b55fc85--qrw8q-" Oct 29 00:34:45.559748 containerd[1619]: 2025-10-29 00:34:45.207 [INFO][4006] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" Namespace="calico-system" Pod="whisker-589b55fc85-qrw8q" WorkloadEndpoint="localhost-k8s-whisker--589b55fc85--qrw8q-eth0" Oct 29 00:34:45.559748 containerd[1619]: 2025-10-29 00:34:45.434 [INFO][4020] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" HandleID="k8s-pod-network.f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" Workload="localhost-k8s-whisker--589b55fc85--qrw8q-eth0" Oct 29 00:34:45.560364 containerd[1619]: 2025-10-29 00:34:45.438 [INFO][4020] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" HandleID="k8s-pod-network.f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" Workload="localhost-k8s-whisker--589b55fc85--qrw8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000505430), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-589b55fc85-qrw8q", "timestamp":"2025-10-29 00:34:45.434544457 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:34:45.560364 containerd[1619]: 2025-10-29 00:34:45.438 [INFO][4020] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:34:45.560364 containerd[1619]: 2025-10-29 00:34:45.438 [INFO][4020] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
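[Editor's note] Unit names such as run-netns-cni\x2dd917a6ba… and var-lib-kubelet-pods-…kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dklhvn.mount in the entries above look mangled, but they are just systemd's path escaping: '/' becomes '-', while '-' and '~' (and other unsafe bytes) are hex-escaped as \x2d and \x7e. A small approximation of that encoding (assumed to behave like `systemd-escape --path`; sketch only):

```python
def systemd_escape_path(path: str) -> str:
    """Approximate systemd path escaping: '/' -> '-', other unsafe bytes -> \\xNN."""
    allowed = set("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_.")
    out = []
    for i, byte in enumerate(path.strip("/").encode()):
        ch = chr(byte)
        if ch == "/":
            out.append("-")                 # path separator becomes the unit-name dash
        elif ch in allowed and not (i == 0 and ch == "."):
            out.append(ch)                  # safe characters pass through unchanged
        else:
            out.append(f"\\x{byte:02x}")    # '-' -> \x2d, '~' -> \x7e, etc.
    return "".join(out)

# The netns and kubelet-volume mount units seen in the log above:
print(systemd_escape_path("/run/netns/cni-d917a6ba-1b3d-5745-7fbe-89b2130a7f07") + ".mount")
print(systemd_escape_path(
    "/var/lib/kubelet/pods/3ed7287b-bf13-4e24-bffe-065ed4e362df"
    "/volumes/kubernetes.io~projected/kube-api-access-klhvn") + ".mount")
```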
Oct 29 00:34:45.560364 containerd[1619]: 2025-10-29 00:34:45.438 [INFO][4020] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:34:45.560364 containerd[1619]: 2025-10-29 00:34:45.459 [INFO][4020] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" host="localhost" Oct 29 00:34:45.560364 containerd[1619]: 2025-10-29 00:34:45.474 [INFO][4020] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:34:45.560364 containerd[1619]: 2025-10-29 00:34:45.479 [INFO][4020] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:34:45.560364 containerd[1619]: 2025-10-29 00:34:45.482 [INFO][4020] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:34:45.560364 containerd[1619]: 2025-10-29 00:34:45.484 [INFO][4020] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:34:45.560364 containerd[1619]: 2025-10-29 00:34:45.484 [INFO][4020] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" host="localhost" Oct 29 00:34:45.560594 containerd[1619]: 2025-10-29 00:34:45.486 [INFO][4020] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9 Oct 29 00:34:45.560594 containerd[1619]: 2025-10-29 00:34:45.493 [INFO][4020] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" host="localhost" Oct 29 00:34:45.560594 containerd[1619]: 2025-10-29 00:34:45.503 [INFO][4020] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" host="localhost" Oct 29 00:34:45.560594 containerd[1619]: 2025-10-29 00:34:45.503 [INFO][4020] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" host="localhost" Oct 29 00:34:45.560594 containerd[1619]: 2025-10-29 00:34:45.504 [INFO][4020] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 00:34:45.560594 containerd[1619]: 2025-10-29 00:34:45.504 [INFO][4020] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" HandleID="k8s-pod-network.f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" Workload="localhost-k8s-whisker--589b55fc85--qrw8q-eth0" Oct 29 00:34:45.561176 containerd[1619]: 2025-10-29 00:34:45.510 [INFO][4006] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" Namespace="calico-system" Pod="whisker-589b55fc85-qrw8q" WorkloadEndpoint="localhost-k8s-whisker--589b55fc85--qrw8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--589b55fc85--qrw8q-eth0", GenerateName:"whisker-589b55fc85-", Namespace:"calico-system", SelfLink:"", UID:"4100b610-f524-41bd-8d21-e97b360c25bf", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"589b55fc85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-589b55fc85-qrw8q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calied35c972cf9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:34:45.561176 containerd[1619]: 2025-10-29 00:34:45.511 [INFO][4006] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" Namespace="calico-system" Pod="whisker-589b55fc85-qrw8q" WorkloadEndpoint="localhost-k8s-whisker--589b55fc85--qrw8q-eth0" Oct 29 00:34:45.561497 containerd[1619]: 2025-10-29 00:34:45.511 [INFO][4006] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied35c972cf9 ContainerID="f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" Namespace="calico-system" Pod="whisker-589b55fc85-qrw8q" WorkloadEndpoint="localhost-k8s-whisker--589b55fc85--qrw8q-eth0" Oct 29 00:34:45.561497 containerd[1619]: 2025-10-29 00:34:45.533 [INFO][4006] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" Namespace="calico-system" Pod="whisker-589b55fc85-qrw8q" WorkloadEndpoint="localhost-k8s-whisker--589b55fc85--qrw8q-eth0" Oct 29 00:34:45.561541 containerd[1619]: 2025-10-29 00:34:45.533 [INFO][4006] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" Namespace="calico-system" Pod="whisker-589b55fc85-qrw8q" WorkloadEndpoint="localhost-k8s-whisker--589b55fc85--qrw8q-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--589b55fc85--qrw8q-eth0", GenerateName:"whisker-589b55fc85-", Namespace:"calico-system", SelfLink:"", UID:"4100b610-f524-41bd-8d21-e97b360c25bf", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"589b55fc85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9", Pod:"whisker-589b55fc85-qrw8q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calied35c972cf9", MAC:"96:f8:fb:22:cb:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:34:45.561598 containerd[1619]: 2025-10-29 00:34:45.550 [INFO][4006] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" Namespace="calico-system" Pod="whisker-589b55fc85-qrw8q" WorkloadEndpoint="localhost-k8s-whisker--589b55fc85--qrw8q-eth0" Oct 29 00:34:45.885580 kubelet[2828]: I1029 00:34:45.885520 2828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ed7287b-bf13-4e24-bffe-065ed4e362df" path="/var/lib/kubelet/pods/3ed7287b-bf13-4e24-bffe-065ed4e362df/volumes" Oct 29 00:34:45.964422 containerd[1619]: time="2025-10-29T00:34:45.964343561Z" level=info msg="connecting to shim f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9" address="unix:///run/containerd/s/079fc381c49de90df83d15edfcd2b5aade54c5de809052b288db20af57f9956f" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:34:46.028853 systemd[1]: Started cri-containerd-f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9.scope - libcontainer container f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9. 
Oct 29 00:34:46.043853 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:34:46.205213 containerd[1619]: time="2025-10-29T00:34:46.205028532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-589b55fc85-qrw8q,Uid:4100b610-f524-41bd-8d21-e97b360c25bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5fe929c716a50422e99ee2e718658609ff88a7cf5246dc1a8e8ca209449ccb9\"" Oct 29 00:34:46.207104 containerd[1619]: time="2025-10-29T00:34:46.207073419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 00:34:46.435998 kubelet[2828]: I1029 00:34:46.435840 2828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 00:34:46.436428 kubelet[2828]: E1029 00:34:46.436400 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:46.569158 containerd[1619]: time="2025-10-29T00:34:46.568828196Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1a7ee94ba46a7f0e5e04c3e91fb566ff78e45022f6034d61fcc29cc6175d707\" id:\"caa498b839201a792b1a876c5ac20e02cc3103ac49c0e23ea214c96ca2142164\" pid:4194 exit_status:1 exited_at:{seconds:1761698086 nanos:568388351}" Oct 29 00:34:46.611035 containerd[1619]: time="2025-10-29T00:34:46.610982980Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:34:46.612968 containerd[1619]: time="2025-10-29T00:34:46.612841899Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 00:34:46.613198 containerd[1619]: time="2025-10-29T00:34:46.613135249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 00:34:46.613701 kubelet[2828]: E1029 00:34:46.613546 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:34:46.613898 kubelet[2828]: E1029 00:34:46.613769 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:34:46.614376 kubelet[2828]: E1029 00:34:46.614206 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:207fdd95d4f244cd9bb253bbb4016093,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sz2l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589b55fc85-qrw8q_calico-system(4100b610-f524-41bd-8d21-e97b360c25bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 00:34:46.617905 containerd[1619]: time="2025-10-29T00:34:46.617810532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 00:34:46.630055 containerd[1619]: time="2025-10-29T00:34:46.629854863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99fd685fb-9ss4w,Uid:e6d89d0d-8eff-4088-87f1-00579f9e5f1f,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:34:46.695707 containerd[1619]: time="2025-10-29T00:34:46.694939936Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1a7ee94ba46a7f0e5e04c3e91fb566ff78e45022f6034d61fcc29cc6175d707\" id:\"792db011c6267ef74fdb32112c4ad1f158232ecc971e7948f3b7df3b3e63d184\" pid:4227 exit_status:1 exited_at:{seconds:1761698086 nanos:693488372}" Oct 29 00:34:46.767777 systemd-networkd[1518]: cali7cfdae5b6af: Link UP Oct 29 00:34:46.768027 systemd-networkd[1518]: cali7cfdae5b6af: Gained carrier Oct 29 00:34:46.785306 containerd[1619]: 2025-10-29 00:34:46.680 [INFO][4250] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:34:46.785306 containerd[1619]: 2025-10-29 00:34:46.695 [INFO][4250] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--99fd685fb--9ss4w-eth0 calico-apiserver-99fd685fb- calico-apiserver e6d89d0d-8eff-4088-87f1-00579f9e5f1f 849 0 2025-10-29 00:34:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:99fd685fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-99fd685fb-9ss4w eth0 
calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7cfdae5b6af [] [] }} ContainerID="f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" Namespace="calico-apiserver" Pod="calico-apiserver-99fd685fb-9ss4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--99fd685fb--9ss4w-" Oct 29 00:34:46.785306 containerd[1619]: 2025-10-29 00:34:46.696 [INFO][4250] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" Namespace="calico-apiserver" Pod="calico-apiserver-99fd685fb-9ss4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--99fd685fb--9ss4w-eth0" Oct 29 00:34:46.785306 containerd[1619]: 2025-10-29 00:34:46.732 [INFO][4269] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" HandleID="k8s-pod-network.f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" Workload="localhost-k8s-calico--apiserver--99fd685fb--9ss4w-eth0" Oct 29 00:34:46.785661 containerd[1619]: 2025-10-29 00:34:46.732 [INFO][4269] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" HandleID="k8s-pod-network.f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" Workload="localhost-k8s-calico--apiserver--99fd685fb--9ss4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034c120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-99fd685fb-9ss4w", "timestamp":"2025-10-29 00:34:46.732788078 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:34:46.785661 containerd[1619]: 2025-10-29 00:34:46.732 [INFO][4269] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:34:46.785661 containerd[1619]: 2025-10-29 00:34:46.733 [INFO][4269] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:34:46.785661 containerd[1619]: 2025-10-29 00:34:46.733 [INFO][4269] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:34:46.785661 containerd[1619]: 2025-10-29 00:34:46.739 [INFO][4269] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" host="localhost" Oct 29 00:34:46.785661 containerd[1619]: 2025-10-29 00:34:46.744 [INFO][4269] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:34:46.785661 containerd[1619]: 2025-10-29 00:34:46.748 [INFO][4269] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:34:46.785661 containerd[1619]: 2025-10-29 00:34:46.750 [INFO][4269] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:34:46.785661 containerd[1619]: 2025-10-29 00:34:46.752 [INFO][4269] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:34:46.785661 containerd[1619]: 2025-10-29 00:34:46.752 [INFO][4269] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" host="localhost" Oct 29 00:34:46.785950 containerd[1619]: 2025-10-29 00:34:46.753 [INFO][4269] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76 Oct 29 00:34:46.785950 containerd[1619]: 2025-10-29 00:34:46.757 [INFO][4269] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" host="localhost" Oct 29 00:34:46.785950 containerd[1619]: 2025-10-29 00:34:46.761 [INFO][4269] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" host="localhost" Oct 29 00:34:46.785950 containerd[1619]: 2025-10-29 00:34:46.762 [INFO][4269] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" host="localhost" Oct 29 00:34:46.785950 containerd[1619]: 2025-10-29 00:34:46.762 [INFO][4269] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
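[Editor's note] The ipam.go traces above (first for the whisker pod, then for the apiserver pod) follow one pattern: take the host-wide IPAM lock, confirm this host's affinity to the 192.168.88.128/26 block, hand out the next free address (…129, then …130), write the block back, release the lock. A toy in-memory sketch of that block-affinity allocation; Calico's real allocator works against a shared datastore, this is illustration only:

```python
import ipaddress
from threading import Lock

class BlockAllocator:
    """Toy allocator over an affine /26 block owned by one host."""
    def __init__(self, cidr: str, host: str):
        self.block = ipaddress.ip_network(cidr)   # "Trying affinity for 192.168.88.128/26"
        self.host = host
        self.assigned = {}                        # ip -> handle, the block's claimed addresses
        self.lock = Lock()                        # stand-in for the host-wide IPAM lock

    def auto_assign(self, handle: str) -> str:
        with self.lock:                           # "Acquired host-wide IPAM lock."
            for ip in self.block.hosts():         # first free host address in the block
                if ip not in self.assigned:
                    self.assigned[ip] = handle    # "Writing block in order to claim IPs"
                    return f"{ip}/26"             # "Successfully claimed IPs: [x.x.x.x/26]"
            raise RuntimeError(f"block {self.block} exhausted on host {self.host}")

alloc = BlockAllocator("192.168.88.128/26", "localhost")
print(alloc.auto_assign("whisker-589b55fc85-qrw8q"))     # 192.168.88.129/26, claimed first above
print(alloc.auto_assign("calico-apiserver-99fd685fb"))   # 192.168.88.130/26, claimed second
```

The third trace further down claims 192.168.88.131 for the kube-controllers pod in the same way.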
Oct 29 00:34:46.785950 containerd[1619]: 2025-10-29 00:34:46.762 [INFO][4269] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" HandleID="k8s-pod-network.f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" Workload="localhost-k8s-calico--apiserver--99fd685fb--9ss4w-eth0" Oct 29 00:34:46.786107 containerd[1619]: 2025-10-29 00:34:46.765 [INFO][4250] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" Namespace="calico-apiserver" Pod="calico-apiserver-99fd685fb-9ss4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--99fd685fb--9ss4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--99fd685fb--9ss4w-eth0", GenerateName:"calico-apiserver-99fd685fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6d89d0d-8eff-4088-87f1-00579f9e5f1f", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 34, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"99fd685fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-99fd685fb-9ss4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cfdae5b6af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:34:46.786172 containerd[1619]: 2025-10-29 00:34:46.765 [INFO][4250] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" Namespace="calico-apiserver" Pod="calico-apiserver-99fd685fb-9ss4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--99fd685fb--9ss4w-eth0" Oct 29 00:34:46.786172 containerd[1619]: 2025-10-29 00:34:46.765 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cfdae5b6af ContainerID="f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" Namespace="calico-apiserver" Pod="calico-apiserver-99fd685fb-9ss4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--99fd685fb--9ss4w-eth0" Oct 29 00:34:46.786172 containerd[1619]: 2025-10-29 00:34:46.767 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" Namespace="calico-apiserver" Pod="calico-apiserver-99fd685fb-9ss4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--99fd685fb--9ss4w-eth0" Oct 29 00:34:46.786243 containerd[1619]: 2025-10-29 00:34:46.768 [INFO][4250] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" Namespace="calico-apiserver" Pod="calico-apiserver-99fd685fb-9ss4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--99fd685fb--9ss4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--99fd685fb--9ss4w-eth0", GenerateName:"calico-apiserver-99fd685fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6d89d0d-8eff-4088-87f1-00579f9e5f1f", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 34, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"99fd685fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76", Pod:"calico-apiserver-99fd685fb-9ss4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cfdae5b6af", MAC:"22:9f:87:24:37:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:34:46.786299 containerd[1619]: 2025-10-29 00:34:46.781 [INFO][4250] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" Namespace="calico-apiserver" Pod="calico-apiserver-99fd685fb-9ss4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--99fd685fb--9ss4w-eth0" Oct 29 00:34:46.806428 containerd[1619]: time="2025-10-29T00:34:46.806355949Z" level=info msg="connecting to shim f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76" address="unix:///run/containerd/s/557603cb462993bc91098bff59736cf1c20f480157a139dbd9da7a8b3e111192" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:34:46.835778 systemd[1]: Started cri-containerd-f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76.scope - libcontainer container f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76. 
Oct 29 00:34:46.847911 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:34:46.880667 containerd[1619]: time="2025-10-29T00:34:46.880600280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99fd685fb-9ss4w,Uid:e6d89d0d-8eff-4088-87f1-00579f9e5f1f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f1c26a34394c10c717e18d4900ce1b8cdd2c34b62fccc857f155f7139d7d0f76\"" Oct 29 00:34:46.932937 systemd-networkd[1518]: calied35c972cf9: Gained IPv6LL Oct 29 00:34:46.978460 containerd[1619]: time="2025-10-29T00:34:46.978396585Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:34:46.979686 containerd[1619]: time="2025-10-29T00:34:46.979650268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 00:34:46.979777 containerd[1619]: time="2025-10-29T00:34:46.979728325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 00:34:46.979979 kubelet[2828]: E1029 00:34:46.979919 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:34:46.980607 kubelet[2828]: E1029 00:34:46.979984 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:34:46.980678 containerd[1619]: time="2025-10-29T00:34:46.980324353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:34:46.980716 kubelet[2828]: E1029 00:34:46.980221 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sz2l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589b55fc85-qrw8q_calico-system(4100b610-f524-41bd-8d21-e97b360c25bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 00:34:46.981612 kubelet[2828]: E1029 00:34:46.981543 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589b55fc85-qrw8q" podUID="4100b610-f524-41bd-8d21-e97b360c25bf" Oct 29 00:34:47.376692 containerd[1619]: time="2025-10-29T00:34:47.376613877Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:34:47.378869 containerd[1619]: time="2025-10-29T00:34:47.378825568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:34:47.378958 containerd[1619]: time="2025-10-29T00:34:47.378903073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:34:47.379167 kubelet[2828]: E1029 00:34:47.379117 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:34:47.379286 kubelet[2828]: E1029 00:34:47.379182 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:34:47.379462 kubelet[2828]: E1029 00:34:47.379401 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6gz4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-99fd685fb-9ss4w_calico-apiserver(e6d89d0d-8eff-4088-87f1-00579f9e5f1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:34:47.380644 kubelet[2828]: E1029 00:34:47.380597 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99fd685fb-9ss4w" podUID="e6d89d0d-8eff-4088-87f1-00579f9e5f1f" Oct 29 00:34:47.630380 containerd[1619]: time="2025-10-29T00:34:47.630228110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67654757f7-t64h5,Uid:4c9c1840-5938-4d20-aaba-3f102838a251,Namespace:calico-system,Attempt:0,}" Oct 29 00:34:47.753428 systemd-networkd[1518]: caliae35fae89c9: Link UP Oct 29 00:34:47.753968 systemd-networkd[1518]: caliae35fae89c9: Gained carrier Oct 29 00:34:47.772417 containerd[1619]: 2025-10-29 00:34:47.657 [INFO][4330] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:34:47.772417 containerd[1619]: 2025-10-29 00:34:47.671 [INFO][4330] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--67654757f7--t64h5-eth0 calico-kube-controllers-67654757f7- calico-system 4c9c1840-5938-4d20-aaba-3f102838a251 845 0 2025-10-29 00:34:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67654757f7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-67654757f7-t64h5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliae35fae89c9 [] [] }} ContainerID="3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" Namespace="calico-system" Pod="calico-kube-controllers-67654757f7-t64h5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67654757f7--t64h5-" Oct 29 00:34:47.772417 containerd[1619]: 2025-10-29 00:34:47.671 [INFO][4330] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" Namespace="calico-system" Pod="calico-kube-controllers-67654757f7-t64h5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67654757f7--t64h5-eth0" Oct 29 00:34:47.772417 containerd[1619]: 2025-10-29 00:34:47.708 [INFO][4352] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" HandleID="k8s-pod-network.3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" Workload="localhost-k8s-calico--kube--controllers--67654757f7--t64h5-eth0" Oct 29 00:34:47.772738 containerd[1619]: 2025-10-29 00:34:47.708 [INFO][4352] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" HandleID="k8s-pod-network.3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" Workload="localhost-k8s-calico--kube--controllers--67654757f7--t64h5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139460), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-67654757f7-t64h5", 
"timestamp":"2025-10-29 00:34:47.708119037 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:34:47.772738 containerd[1619]: 2025-10-29 00:34:47.708 [INFO][4352] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:34:47.772738 containerd[1619]: 2025-10-29 00:34:47.708 [INFO][4352] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 00:34:47.772738 containerd[1619]: 2025-10-29 00:34:47.708 [INFO][4352] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:34:47.772738 containerd[1619]: 2025-10-29 00:34:47.716 [INFO][4352] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" host="localhost" Oct 29 00:34:47.772738 containerd[1619]: 2025-10-29 00:34:47.720 [INFO][4352] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:34:47.772738 containerd[1619]: 2025-10-29 00:34:47.724 [INFO][4352] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:34:47.772738 containerd[1619]: 2025-10-29 00:34:47.730 [INFO][4352] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:34:47.772738 containerd[1619]: 2025-10-29 00:34:47.732 [INFO][4352] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:34:47.772738 containerd[1619]: 2025-10-29 00:34:47.732 [INFO][4352] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" host="localhost" Oct 29 00:34:47.773201 containerd[1619]: 2025-10-29 00:34:47.734 [INFO][4352] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6 Oct 29 00:34:47.773201 containerd[1619]: 2025-10-29 00:34:47.740 [INFO][4352] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" host="localhost" Oct 29 00:34:47.773201 containerd[1619]: 2025-10-29 00:34:47.746 [INFO][4352] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" host="localhost" Oct 29 00:34:47.773201 containerd[1619]: 2025-10-29 00:34:47.746 [INFO][4352] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" host="localhost" Oct 29 00:34:47.773201 containerd[1619]: 2025-10-29 00:34:47.746 [INFO][4352] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 00:34:47.773201 containerd[1619]: 2025-10-29 00:34:47.746 [INFO][4352] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" HandleID="k8s-pod-network.3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" Workload="localhost-k8s-calico--kube--controllers--67654757f7--t64h5-eth0" Oct 29 00:34:47.773374 containerd[1619]: 2025-10-29 00:34:47.750 [INFO][4330] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" Namespace="calico-system" Pod="calico-kube-controllers-67654757f7-t64h5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67654757f7--t64h5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67654757f7--t64h5-eth0", GenerateName:"calico-kube-controllers-67654757f7-", Namespace:"calico-system", SelfLink:"", UID:"4c9c1840-5938-4d20-aaba-3f102838a251", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 34, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67654757f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-67654757f7-t64h5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae35fae89c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:34:47.773443 containerd[1619]: 2025-10-29 00:34:47.750 [INFO][4330] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" Namespace="calico-system" Pod="calico-kube-controllers-67654757f7-t64h5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67654757f7--t64h5-eth0" Oct 29 00:34:47.773443 containerd[1619]: 2025-10-29 00:34:47.751 [INFO][4330] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae35fae89c9 ContainerID="3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" Namespace="calico-system" Pod="calico-kube-controllers-67654757f7-t64h5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67654757f7--t64h5-eth0" Oct 29 00:34:47.773443 containerd[1619]: 2025-10-29 00:34:47.754 [INFO][4330] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" Namespace="calico-system" Pod="calico-kube-controllers-67654757f7-t64h5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67654757f7--t64h5-eth0" Oct 29 00:34:47.773511 containerd[1619]: 2025-10-29 00:34:47.755 [INFO][4330] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" Namespace="calico-system" Pod="calico-kube-controllers-67654757f7-t64h5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67654757f7--t64h5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67654757f7--t64h5-eth0", GenerateName:"calico-kube-controllers-67654757f7-", Namespace:"calico-system", SelfLink:"", UID:"4c9c1840-5938-4d20-aaba-3f102838a251", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 34, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67654757f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6", Pod:"calico-kube-controllers-67654757f7-t64h5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae35fae89c9", MAC:"aa:f5:d5:12:85:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:34:47.773566 containerd[1619]: 2025-10-29 00:34:47.765 [INFO][4330] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" Namespace="calico-system" Pod="calico-kube-controllers-67654757f7-t64h5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67654757f7--t64h5-eth0" Oct 29 00:34:47.821734 containerd[1619]: time="2025-10-29T00:34:47.821610216Z" level=info msg="connecting to shim 3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6" address="unix:///run/containerd/s/3c4abb25f7a72e02fbec4a113908d738a7530f40ac12863d16aa611afd58cfa3" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:34:47.849951 systemd[1]: Started cri-containerd-3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6.scope - libcontainer container 3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6. 
Oct 29 00:34:47.866162 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:34:47.893282 kubelet[2828]: E1029 00:34:47.893132 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99fd685fb-9ss4w" podUID="e6d89d0d-8eff-4088-87f1-00579f9e5f1f" Oct 29 00:34:47.894096 kubelet[2828]: E1029 00:34:47.893748 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589b55fc85-qrw8q" podUID="4100b610-f524-41bd-8d21-e97b360c25bf" Oct 29 00:34:47.910953 containerd[1619]: time="2025-10-29T00:34:47.910897838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67654757f7-t64h5,Uid:4c9c1840-5938-4d20-aaba-3f102838a251,Namespace:calico-system,Attempt:0,} returns sandbox id \"3495e970aaecb9041727c8ad41247edd93b205b6f5450b29b5756615581af4d6\"" Oct 29 00:34:47.914054 containerd[1619]: time="2025-10-29T00:34:47.913996724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 00:34:47.957864 systemd-networkd[1518]: cali7cfdae5b6af: Gained IPv6LL Oct 29 00:34:48.264296 containerd[1619]: time="2025-10-29T00:34:48.264141120Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:34:48.334163 containerd[1619]: time="2025-10-29T00:34:48.334082776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 00:34:48.334290 containerd[1619]: time="2025-10-29T00:34:48.334199186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 00:34:48.334597 kubelet[2828]: E1029 00:34:48.334549 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:34:48.334973 kubelet[2828]: E1029 00:34:48.334602 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:34:48.334973 kubelet[2828]: E1029 00:34:48.334753 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kh2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67654757f7-t64h5_calico-system(4c9c1840-5938-4d20-aaba-3f102838a251): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 
00:34:48.336412 kubelet[2828]: E1029 00:34:48.336294 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67654757f7-t64h5" podUID="4c9c1840-5938-4d20-aaba-3f102838a251" Oct 29 00:34:48.629323 kubelet[2828]: E1029 00:34:48.629252 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:48.629868 containerd[1619]: time="2025-10-29T00:34:48.629828465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zmz5w,Uid:c54f9d37-9b37-41e0-86e7-2635e688cb85,Namespace:kube-system,Attempt:0,}" Oct 29 00:34:48.909043 kubelet[2828]: E1029 00:34:48.908876 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67654757f7-t64h5" podUID="4c9c1840-5938-4d20-aaba-3f102838a251" Oct 29 00:34:49.095980 systemd-networkd[1518]: caliae1d5863733: Link UP Oct 29 00:34:49.096260 systemd-networkd[1518]: caliae1d5863733: Gained carrier Oct 29 00:34:49.111437 containerd[1619]: 2025-10-29 00:34:48.885 [INFO][4445] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:34:49.111437 containerd[1619]: 2025-10-29 00:34:49.022 [INFO][4445] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--zmz5w-eth0 coredns-674b8bbfcf- kube-system c54f9d37-9b37-41e0-86e7-2635e688cb85 851 0 2025-10-29 00:34:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-zmz5w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliae1d5863733 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" Namespace="kube-system" Pod="coredns-674b8bbfcf-zmz5w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zmz5w-" Oct 29 00:34:49.111437 containerd[1619]: 2025-10-29 00:34:49.022 [INFO][4445] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" Namespace="kube-system" Pod="coredns-674b8bbfcf-zmz5w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zmz5w-eth0" Oct 29 00:34:49.111437 containerd[1619]: 2025-10-29 00:34:49.056 [INFO][4470] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" 
HandleID="k8s-pod-network.fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" Workload="localhost-k8s-coredns--674b8bbfcf--zmz5w-eth0" Oct 29 00:34:49.111763 containerd[1619]: 2025-10-29 00:34:49.057 [INFO][4470] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" HandleID="k8s-pod-network.fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" Workload="localhost-k8s-coredns--674b8bbfcf--zmz5w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-zmz5w", "timestamp":"2025-10-29 00:34:49.056888986 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:34:49.111763 containerd[1619]: 2025-10-29 00:34:49.057 [INFO][4470] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:34:49.111763 containerd[1619]: 2025-10-29 00:34:49.057 [INFO][4470] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 00:34:49.111763 containerd[1619]: 2025-10-29 00:34:49.057 [INFO][4470] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:34:49.111763 containerd[1619]: 2025-10-29 00:34:49.065 [INFO][4470] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" host="localhost" Oct 29 00:34:49.111763 containerd[1619]: 2025-10-29 00:34:49.070 [INFO][4470] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:34:49.111763 containerd[1619]: 2025-10-29 00:34:49.074 [INFO][4470] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:34:49.111763 containerd[1619]: 2025-10-29 00:34:49.076 [INFO][4470] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:34:49.111763 containerd[1619]: 2025-10-29 00:34:49.078 [INFO][4470] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:34:49.111763 containerd[1619]: 2025-10-29 00:34:49.078 [INFO][4470] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" host="localhost" Oct 29 00:34:49.111995 containerd[1619]: 2025-10-29 00:34:49.079 [INFO][4470] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636 Oct 29 00:34:49.111995 containerd[1619]: 2025-10-29 00:34:49.085 [INFO][4470] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" host="localhost" Oct 29 00:34:49.111995 containerd[1619]: 2025-10-29 00:34:49.090 [INFO][4470] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" host="localhost" Oct 29 00:34:49.111995 containerd[1619]: 2025-10-29 00:34:49.090 [INFO][4470] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" host="localhost" Oct 29 
00:34:49.111995 containerd[1619]: 2025-10-29 00:34:49.090 [INFO][4470] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:34:49.111995 containerd[1619]: 2025-10-29 00:34:49.090 [INFO][4470] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" HandleID="k8s-pod-network.fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" Workload="localhost-k8s-coredns--674b8bbfcf--zmz5w-eth0" Oct 29 00:34:49.112120 containerd[1619]: 2025-10-29 00:34:49.093 [INFO][4445] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" Namespace="kube-system" Pod="coredns-674b8bbfcf-zmz5w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zmz5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zmz5w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c54f9d37-9b37-41e0-86e7-2635e688cb85", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 34, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-zmz5w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae1d5863733", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:34:49.112190 containerd[1619]: 2025-10-29 00:34:49.094 [INFO][4445] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" Namespace="kube-system" Pod="coredns-674b8bbfcf-zmz5w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zmz5w-eth0" Oct 29 00:34:49.112190 containerd[1619]: 2025-10-29 00:34:49.094 [INFO][4445] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae1d5863733 ContainerID="fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" Namespace="kube-system" Pod="coredns-674b8bbfcf-zmz5w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zmz5w-eth0" Oct 29 00:34:49.112190 containerd[1619]: 2025-10-29 00:34:49.095 [INFO][4445] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-zmz5w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zmz5w-eth0" Oct 29 00:34:49.112257 containerd[1619]: 2025-10-29 00:34:49.097 [INFO][4445] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" Namespace="kube-system" Pod="coredns-674b8bbfcf-zmz5w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zmz5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zmz5w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c54f9d37-9b37-41e0-86e7-2635e688cb85", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 34, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636", Pod:"coredns-674b8bbfcf-zmz5w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae1d5863733", MAC:"c6:9e:10:aa:2b:c4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:34:49.112257 containerd[1619]: 2025-10-29 00:34:49.106 [INFO][4445] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" Namespace="kube-system" Pod="coredns-674b8bbfcf-zmz5w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zmz5w-eth0" Oct 29 00:34:49.143739 containerd[1619]: time="2025-10-29T00:34:49.143686575Z" level=info msg="connecting to shim fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636" address="unix:///run/containerd/s/5ea3da6af60dc910723b289ce10d3678ee9acde62437ddfd415b82dea240a172" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:34:49.172843 systemd-networkd[1518]: caliae35fae89c9: Gained IPv6LL Oct 29 00:34:49.174822 systemd[1]: Started cri-containerd-fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636.scope - libcontainer container fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636. 
Oct 29 00:34:49.190871 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:34:49.222160 containerd[1619]: time="2025-10-29T00:34:49.222120201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zmz5w,Uid:c54f9d37-9b37-41e0-86e7-2635e688cb85,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636\"" Oct 29 00:34:49.222981 kubelet[2828]: E1029 00:34:49.222780 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:49.227702 containerd[1619]: time="2025-10-29T00:34:49.227653453Z" level=info msg="CreateContainer within sandbox \"fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 00:34:49.244511 containerd[1619]: time="2025-10-29T00:34:49.244465627Z" level=info msg="Container 3edb258df873a284eefb95905832aa84ffb6696b17f02680dcd227598bdb7d07: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:34:49.247975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2127929014.mount: Deactivated successfully. Oct 29 00:34:49.250313 containerd[1619]: time="2025-10-29T00:34:49.250262694Z" level=info msg="CreateContainer within sandbox \"fd302fc2030978ad22963446cb08aba26e89d218d774e48e6f4aff00a12f1636\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3edb258df873a284eefb95905832aa84ffb6696b17f02680dcd227598bdb7d07\"" Oct 29 00:34:49.250846 containerd[1619]: time="2025-10-29T00:34:49.250818627Z" level=info msg="StartContainer for \"3edb258df873a284eefb95905832aa84ffb6696b17f02680dcd227598bdb7d07\"" Oct 29 00:34:49.251906 containerd[1619]: time="2025-10-29T00:34:49.251876633Z" level=info msg="connecting to shim 3edb258df873a284eefb95905832aa84ffb6696b17f02680dcd227598bdb7d07" address="unix:///run/containerd/s/5ea3da6af60dc910723b289ce10d3678ee9acde62437ddfd415b82dea240a172" protocol=ttrpc version=3 Oct 29 00:34:49.274835 systemd[1]: Started cri-containerd-3edb258df873a284eefb95905832aa84ffb6696b17f02680dcd227598bdb7d07.scope - libcontainer container 3edb258df873a284eefb95905832aa84ffb6696b17f02680dcd227598bdb7d07. Oct 29 00:34:49.311807 containerd[1619]: time="2025-10-29T00:34:49.311769911Z" level=info msg="StartContainer for \"3edb258df873a284eefb95905832aa84ffb6696b17f02680dcd227598bdb7d07\" returns successfully" Oct 29 00:34:49.630668 containerd[1619]: time="2025-10-29T00:34:49.630128419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqtll,Uid:31820ff6-c2b5-4f1e-b097-0b66b5dd1baa,Namespace:calico-system,Attempt:0,}" Oct 29 00:34:49.630668 containerd[1619]: time="2025-10-29T00:34:49.630302105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c97gh,Uid:c2fb43e4-9dd6-4f00-992a-4f7339772bdb,Namespace:calico-system,Attempt:0,}" Oct 29 00:34:49.631207 containerd[1619]: time="2025-10-29T00:34:49.630720330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99fd685fb-vhm8w,Uid:226993fc-2dd7-48d2-9d26-aaf9fe3f09e4,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:34:49.784441 systemd[1]: Started sshd@10-10.0.0.10:22-10.0.0.1:49136.service - OpenSSH per-connection server daemon (10.0.0.1:49136). 
Oct 29 00:34:49.930050 sshd[4566]: Accepted publickey for core from 10.0.0.1 port 49136 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:34:49.933584 kubelet[2828]: E1029 00:34:49.932377 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:49.933166 sshd-session[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:34:49.937958 kubelet[2828]: E1029 00:34:49.937909 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67654757f7-t64h5" podUID="4c9c1840-5938-4d20-aaba-3f102838a251" Oct 29 00:34:49.948077 systemd-logind[1587]: New session 11 of user core. Oct 29 00:34:49.951734 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 29 00:34:50.326139 sshd[4623]: Connection closed by 10.0.0.1 port 49136 Oct 29 00:34:50.326621 sshd-session[4566]: pam_unix(sshd:session): session closed for user core Oct 29 00:34:50.331502 systemd[1]: sshd@10-10.0.0.10:22-10.0.0.1:49136.service: Deactivated successfully. Oct 29 00:34:50.333745 systemd[1]: session-11.scope: Deactivated successfully. Oct 29 00:34:50.334672 systemd-logind[1587]: Session 11 logged out. Waiting for processes to exit. Oct 29 00:34:50.335863 systemd-logind[1587]: Removed session 11. 
Oct 29 00:34:50.368557 kubelet[2828]: I1029 00:34:50.368505 2828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 00:34:50.368970 kubelet[2828]: E1029 00:34:50.368943 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:50.386336 kubelet[2828]: I1029 00:34:50.385442 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zmz5w" podStartSLOduration=42.385426978 podStartE2EDuration="42.385426978s" podCreationTimestamp="2025-10-29 00:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:34:50.384467147 +0000 UTC m=+47.023463742" watchObservedRunningTime="2025-10-29 00:34:50.385426978 +0000 UTC m=+47.024423573" Oct 29 00:34:50.490418 systemd-networkd[1518]: cali85888e9488a: Link UP Oct 29 00:34:50.492739 systemd-networkd[1518]: cali85888e9488a: Gained carrier Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:49.862 [INFO][4585] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:49.898 [INFO][4585] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--c97gh-eth0 goldmane-666569f655- calico-system c2fb43e4-9dd6-4f00-992a-4f7339772bdb 850 0 2025-10-29 00:34:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-c97gh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali85888e9488a [] [] }} ContainerID="0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" Namespace="calico-system" Pod="goldmane-666569f655-c97gh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c97gh-" Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:49.898 [INFO][4585] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" Namespace="calico-system" Pod="goldmane-666569f655-c97gh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c97gh-eth0" Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:49.980 [INFO][4615] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" HandleID="k8s-pod-network.0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" Workload="localhost-k8s-goldmane--666569f655--c97gh-eth0" Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:49.980 [INFO][4615] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" HandleID="k8s-pod-network.0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" Workload="localhost-k8s-goldmane--666569f655--c97gh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-c97gh", "timestamp":"2025-10-29 00:34:49.980534529 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:49.981 [INFO][4615] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:49.981 [INFO][4615] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:49.981 [INFO][4615] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:50.084 [INFO][4615] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" host="localhost" Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:50.323 [INFO][4615] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:50.404 [INFO][4615] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:50.458 [INFO][4615] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:50.462 [INFO][4615] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:50.462 [INFO][4615] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" host="localhost" Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:50.466 [INFO][4615] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:50.472 [INFO][4615] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" host="localhost" Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:50.482 [INFO][4615] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" host="localhost" Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:50.482 [INFO][4615] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" host="localhost" Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:50.482 [INFO][4615] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 00:34:50.521389 containerd[1619]: 2025-10-29 00:34:50.482 [INFO][4615] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" HandleID="k8s-pod-network.0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" Workload="localhost-k8s-goldmane--666569f655--c97gh-eth0" Oct 29 00:34:50.522237 containerd[1619]: 2025-10-29 00:34:50.486 [INFO][4585] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" Namespace="calico-system" Pod="goldmane-666569f655-c97gh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c97gh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--c97gh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c2fb43e4-9dd6-4f00-992a-4f7339772bdb", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 34, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-c97gh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali85888e9488a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:34:50.522237 containerd[1619]: 2025-10-29 00:34:50.486 [INFO][4585] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" Namespace="calico-system" Pod="goldmane-666569f655-c97gh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c97gh-eth0" Oct 29 00:34:50.522237 containerd[1619]: 2025-10-29 00:34:50.486 [INFO][4585] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85888e9488a ContainerID="0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" Namespace="calico-system" Pod="goldmane-666569f655-c97gh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c97gh-eth0" Oct 29 00:34:50.522237 containerd[1619]: 2025-10-29 00:34:50.492 [INFO][4585] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" Namespace="calico-system" Pod="goldmane-666569f655-c97gh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c97gh-eth0" Oct 29 00:34:50.522237 containerd[1619]: 2025-10-29 00:34:50.495 [INFO][4585] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" Namespace="calico-system" Pod="goldmane-666569f655-c97gh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c97gh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--c97gh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c2fb43e4-9dd6-4f00-992a-4f7339772bdb", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 34, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b", Pod:"goldmane-666569f655-c97gh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali85888e9488a", MAC:"52:4f:aa:f2:66:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:34:50.522237 containerd[1619]: 2025-10-29 00:34:50.517 [INFO][4585] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" Namespace="calico-system" Pod="goldmane-666569f655-c97gh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c97gh-eth0" Oct 29 00:34:50.585965 containerd[1619]: time="2025-10-29T00:34:50.585707975Z" level=info msg="connecting to shim 0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b" address="unix:///run/containerd/s/dff1848f78f1e4a538d22cae0cd86a773819ccf38b2ee31f019fb586f2d3ef66" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:34:50.625322 systemd[1]: Started cri-containerd-0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b.scope - libcontainer container 0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b. 
Oct 29 00:34:50.631606 kubelet[2828]: E1029 00:34:50.629258 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:50.629855 systemd-networkd[1518]: cali5d6afe56ba9: Link UP Oct 29 00:34:50.631144 systemd-networkd[1518]: cali5d6afe56ba9: Gained carrier Oct 29 00:34:50.636613 containerd[1619]: time="2025-10-29T00:34:50.635738796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-744kg,Uid:7410a0be-1e47-4bd5-ad08-82bed9e122fc,Namespace:kube-system,Attempt:0,}" Oct 29 00:34:50.662390 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:49.874 [INFO][4598] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:49.947 [INFO][4598] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--99fd685fb--vhm8w-eth0 calico-apiserver-99fd685fb- calico-apiserver 226993fc-2dd7-48d2-9d26-aaf9fe3f09e4 855 0 2025-10-29 00:34:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:99fd685fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-99fd685fb-vhm8w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5d6afe56ba9 [] [] }} ContainerID="81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" Namespace="calico-apiserver" Pod="calico-apiserver-99fd685fb-vhm8w" WorkloadEndpoint="localhost-k8s-calico--apiserver--99fd685fb--vhm8w-" Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:49.947 [INFO][4598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" Namespace="calico-apiserver" Pod="calico-apiserver-99fd685fb-vhm8w" WorkloadEndpoint="localhost-k8s-calico--apiserver--99fd685fb--vhm8w-eth0" Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:50.131 [INFO][4658] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" HandleID="k8s-pod-network.81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" Workload="localhost-k8s-calico--apiserver--99fd685fb--vhm8w-eth0" Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:50.131 [INFO][4658] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" HandleID="k8s-pod-network.81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" Workload="localhost-k8s-calico--apiserver--99fd685fb--vhm8w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-99fd685fb-vhm8w", "timestamp":"2025-10-29 00:34:50.13161285 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:50.132 
[INFO][4658] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:50.482 [INFO][4658] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:50.482 [INFO][4658] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:50.500 [INFO][4658] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" host="localhost" Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:50.563 [INFO][4658] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:50.577 [INFO][4658] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:50.581 [INFO][4658] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:50.587 [INFO][4658] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:50.587 [INFO][4658] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" host="localhost" Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:50.591 [INFO][4658] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8 Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:50.598 [INFO][4658] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" host="localhost" Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:50.610 [INFO][4658] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" host="localhost" Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:50.610 [INFO][4658] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" host="localhost" Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:50.610 [INFO][4658] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 00:34:50.663156 containerd[1619]: 2025-10-29 00:34:50.610 [INFO][4658] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" HandleID="k8s-pod-network.81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" Workload="localhost-k8s-calico--apiserver--99fd685fb--vhm8w-eth0" Oct 29 00:34:50.664208 containerd[1619]: 2025-10-29 00:34:50.621 [INFO][4598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" Namespace="calico-apiserver" Pod="calico-apiserver-99fd685fb-vhm8w" WorkloadEndpoint="localhost-k8s-calico--apiserver--99fd685fb--vhm8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--99fd685fb--vhm8w-eth0", GenerateName:"calico-apiserver-99fd685fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"226993fc-2dd7-48d2-9d26-aaf9fe3f09e4", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 34, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"99fd685fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-99fd685fb-vhm8w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d6afe56ba9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:34:50.664208 containerd[1619]: 2025-10-29 00:34:50.621 [INFO][4598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" Namespace="calico-apiserver" Pod="calico-apiserver-99fd685fb-vhm8w" WorkloadEndpoint="localhost-k8s-calico--apiserver--99fd685fb--vhm8w-eth0" Oct 29 00:34:50.664208 containerd[1619]: 2025-10-29 00:34:50.621 [INFO][4598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d6afe56ba9 ContainerID="81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" Namespace="calico-apiserver" Pod="calico-apiserver-99fd685fb-vhm8w" WorkloadEndpoint="localhost-k8s-calico--apiserver--99fd685fb--vhm8w-eth0" Oct 29 00:34:50.664208 containerd[1619]: 2025-10-29 00:34:50.631 [INFO][4598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" Namespace="calico-apiserver" Pod="calico-apiserver-99fd685fb-vhm8w" WorkloadEndpoint="localhost-k8s-calico--apiserver--99fd685fb--vhm8w-eth0" Oct 29 00:34:50.664208 containerd[1619]: 2025-10-29 00:34:50.632 [INFO][4598] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" Namespace="calico-apiserver" Pod="calico-apiserver-99fd685fb-vhm8w" WorkloadEndpoint="localhost-k8s-calico--apiserver--99fd685fb--vhm8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--99fd685fb--vhm8w-eth0", GenerateName:"calico-apiserver-99fd685fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"226993fc-2dd7-48d2-9d26-aaf9fe3f09e4", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 34, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"99fd685fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8", Pod:"calico-apiserver-99fd685fb-vhm8w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d6afe56ba9", MAC:"26:dd:ff:39:b6:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:34:50.664208 containerd[1619]: 2025-10-29 00:34:50.654 [INFO][4598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" Namespace="calico-apiserver" Pod="calico-apiserver-99fd685fb-vhm8w" WorkloadEndpoint="localhost-k8s-calico--apiserver--99fd685fb--vhm8w-eth0" Oct 29 00:34:50.722667 containerd[1619]: time="2025-10-29T00:34:50.722335482Z" level=info msg="connecting to shim 81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8" address="unix:///run/containerd/s/b688f356f0b0679dad78da949e16150dd7a27302374b5c85d8b3fcb31c4efacb" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:34:50.724770 systemd-networkd[1518]: cali1147cc75242: Link UP Oct 29 00:34:50.726946 systemd-networkd[1518]: cali1147cc75242: Gained carrier Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:49.973 [INFO][4568] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.315 [INFO][4568] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qqtll-eth0 csi-node-driver- calico-system 31820ff6-c2b5-4f1e-b097-0b66b5dd1baa 729 0 2025-10-29 00:34:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qqtll eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1147cc75242 [] [] }} 
ContainerID="c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" Namespace="calico-system" Pod="csi-node-driver-qqtll" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqtll-" Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.317 [INFO][4568] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" Namespace="calico-system" Pod="csi-node-driver-qqtll" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqtll-eth0" Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.415 [INFO][4671] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" HandleID="k8s-pod-network.c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" Workload="localhost-k8s-csi--node--driver--qqtll-eth0" Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.415 [INFO][4671] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" HandleID="k8s-pod-network.c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" Workload="localhost-k8s-csi--node--driver--qqtll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138890), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qqtll", "timestamp":"2025-10-29 00:34:50.415378535 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.415 [INFO][4671] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.610 [INFO][4671] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.610 [INFO][4671] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.654 [INFO][4671] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" host="localhost" Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.662 [INFO][4671] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.674 [INFO][4671] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.678 [INFO][4671] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.682 [INFO][4671] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.682 [INFO][4671] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" host="localhost" Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.685 [INFO][4671] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2 Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.692 [INFO][4671] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" host="localhost" Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.701 [INFO][4671] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" host="localhost" Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.702 [INFO][4671] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" host="localhost" Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.702 [INFO][4671] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 00:34:50.751089 containerd[1619]: 2025-10-29 00:34:50.702 [INFO][4671] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" HandleID="k8s-pod-network.c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" Workload="localhost-k8s-csi--node--driver--qqtll-eth0" Oct 29 00:34:50.752383 containerd[1619]: 2025-10-29 00:34:50.709 [INFO][4568] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" Namespace="calico-system" Pod="csi-node-driver-qqtll" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqtll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qqtll-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31820ff6-c2b5-4f1e-b097-0b66b5dd1baa", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 34, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qqtll", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1147cc75242", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:34:50.752383 containerd[1619]: 2025-10-29 00:34:50.710 [INFO][4568] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" Namespace="calico-system" Pod="csi-node-driver-qqtll" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqtll-eth0" Oct 29 00:34:50.752383 containerd[1619]: 2025-10-29 00:34:50.710 [INFO][4568] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1147cc75242 ContainerID="c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" Namespace="calico-system" Pod="csi-node-driver-qqtll" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqtll-eth0" Oct 29 00:34:50.752383 containerd[1619]: 2025-10-29 00:34:50.726 [INFO][4568] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" Namespace="calico-system" Pod="csi-node-driver-qqtll" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqtll-eth0" Oct 29 00:34:50.752383 containerd[1619]: 2025-10-29 00:34:50.728 [INFO][4568] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" Namespace="calico-system" Pod="csi-node-driver-qqtll" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--qqtll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qqtll-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31820ff6-c2b5-4f1e-b097-0b66b5dd1baa", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 34, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2", Pod:"csi-node-driver-qqtll", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1147cc75242", MAC:"06:29:d7:bd:4a:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:34:50.752383 containerd[1619]: 2025-10-29 00:34:50.744 [INFO][4568] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" Namespace="calico-system" Pod="csi-node-driver-qqtll" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqtll-eth0" Oct 29 00:34:50.776824 systemd[1]: Started cri-containerd-81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8.scope - libcontainer container 81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8. 
Oct 29 00:34:50.797418 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:34:50.812488 containerd[1619]: time="2025-10-29T00:34:50.812434170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c97gh,Uid:c2fb43e4-9dd6-4f00-992a-4f7339772bdb,Namespace:calico-system,Attempt:0,} returns sandbox id \"0dccc72019ebbe5efedc510c50210110ede9c0e1e53bf0f75f507252c64fc17b\"" Oct 29 00:34:50.815935 containerd[1619]: time="2025-10-29T00:34:50.815876429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 00:34:50.832326 systemd-networkd[1518]: cali5850b800a94: Link UP Oct 29 00:34:50.833827 systemd-networkd[1518]: cali5850b800a94: Gained carrier Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.687 [INFO][4736] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.713 [INFO][4736] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--744kg-eth0 coredns-674b8bbfcf- kube-system 7410a0be-1e47-4bd5-ad08-82bed9e122fc 852 0 2025-10-29 00:34:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-744kg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5850b800a94 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" Namespace="kube-system" Pod="coredns-674b8bbfcf-744kg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--744kg-" Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.713 [INFO][4736] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" Namespace="kube-system" Pod="coredns-674b8bbfcf-744kg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--744kg-eth0" Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.764 [INFO][4775] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" HandleID="k8s-pod-network.d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" Workload="localhost-k8s-coredns--674b8bbfcf--744kg-eth0" Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.764 [INFO][4775] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" HandleID="k8s-pod-network.d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" Workload="localhost-k8s-coredns--674b8bbfcf--744kg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000387770), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-744kg", "timestamp":"2025-10-29 00:34:50.764240046 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.764 [INFO][4775] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.764 [INFO][4775] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.764 [INFO][4775] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.790 [INFO][4775] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" host="localhost" Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.796 [INFO][4775] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.800 [INFO][4775] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.802 [INFO][4775] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.804 [INFO][4775] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.804 [INFO][4775] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" host="localhost" Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.805 [INFO][4775] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.813 [INFO][4775] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" host="localhost" Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.822 [INFO][4775] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" host="localhost" Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.822 [INFO][4775] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" host="localhost" Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.822 [INFO][4775] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 00:34:50.865884 containerd[1619]: 2025-10-29 00:34:50.822 [INFO][4775] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" HandleID="k8s-pod-network.d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" Workload="localhost-k8s-coredns--674b8bbfcf--744kg-eth0" Oct 29 00:34:50.866500 containerd[1619]: 2025-10-29 00:34:50.829 [INFO][4736] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" Namespace="kube-system" Pod="coredns-674b8bbfcf-744kg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--744kg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--744kg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7410a0be-1e47-4bd5-ad08-82bed9e122fc", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 34, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-744kg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5850b800a94", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:34:50.866500 containerd[1619]: 2025-10-29 00:34:50.829 [INFO][4736] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" Namespace="kube-system" Pod="coredns-674b8bbfcf-744kg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--744kg-eth0" Oct 29 00:34:50.866500 containerd[1619]: 2025-10-29 00:34:50.829 [INFO][4736] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5850b800a94 ContainerID="d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" Namespace="kube-system" Pod="coredns-674b8bbfcf-744kg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--744kg-eth0" Oct 29 00:34:50.866500 containerd[1619]: 2025-10-29 00:34:50.836 [INFO][4736] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" Namespace="kube-system" Pod="coredns-674b8bbfcf-744kg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--744kg-eth0" Oct 29 00:34:50.866500 
containerd[1619]: 2025-10-29 00:34:50.839 [INFO][4736] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" Namespace="kube-system" Pod="coredns-674b8bbfcf-744kg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--744kg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--744kg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7410a0be-1e47-4bd5-ad08-82bed9e122fc", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 34, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b", Pod:"coredns-674b8bbfcf-744kg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5850b800a94", MAC:"56:64:32:f0:47:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:34:50.866500 containerd[1619]: 2025-10-29 00:34:50.854 [INFO][4736] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" Namespace="kube-system" Pod="coredns-674b8bbfcf-744kg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--744kg-eth0" Oct 29 00:34:50.872271 containerd[1619]: time="2025-10-29T00:34:50.856897554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99fd685fb-vhm8w,Uid:226993fc-2dd7-48d2-9d26-aaf9fe3f09e4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"81edae98a0a0d9e3af22ac3474050368e4bcdf275b685a12701298e98e0444b8\"" Oct 29 00:34:50.872470 containerd[1619]: time="2025-10-29T00:34:50.867944541Z" level=info msg="connecting to shim c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2" address="unix:///run/containerd/s/0c87b2cd53211841de6fc864e24ac7c36cf173c6489f2b644142a7914d8069e5" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:34:50.910908 systemd[1]: Started cri-containerd-c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2.scope - libcontainer container c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2. 
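The second ADD, for coredns-674b8bbfcf-744kg, claims 192.168.88.136 from the same /26 block, but its WorkloadEndpointSpec also carries named ports, which the struct dump prints in hexadecimal (Port:0x35, Port:0x23c1). A throwaway sketch, illustrative only, decodes them back to the expected CoreDNS ports:

```go
package main

import "fmt"

// Named ports from the dumped WorkloadEndpointSpec above; the struct dump
// renders the numeric fields in hexadecimal.
var ports = map[string]uint16{
	"dns":     0x35,   // UDP 53
	"dns-tcp": 0x35,   // TCP 53
	"metrics": 0x23c1, // TCP 9153, the CoreDNS Prometheus metrics port
}

func main() {
	for name, p := range ports {
		fmt.Printf("%-8s -> %d\n", name, p)
	}
}
```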
Oct 29 00:34:50.915559 containerd[1619]: time="2025-10-29T00:34:50.915452839Z" level=info msg="connecting to shim d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b" address="unix:///run/containerd/s/f54139bb2a1f729bf0cae813cf8edc550b5ced8e5ae80844796c064b2fcdf545" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:34:50.936373 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:34:50.943828 kubelet[2828]: E1029 00:34:50.943580 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:50.946312 kubelet[2828]: E1029 00:34:50.943992 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:50.952922 systemd[1]: Started cri-containerd-d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b.scope - libcontainer container d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b. Oct 29 00:34:50.971208 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:34:50.971344 containerd[1619]: time="2025-10-29T00:34:50.971245199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqtll,Uid:31820ff6-c2b5-4f1e-b097-0b66b5dd1baa,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8077a7f3bfcbfd4b9eb097baf2fc186cafe46586fa72cb432121ac9fbc289f2\"" Oct 29 00:34:51.014899 containerd[1619]: time="2025-10-29T00:34:51.014818653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-744kg,Uid:7410a0be-1e47-4bd5-ad08-82bed9e122fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b\"" Oct 29 00:34:51.015863 kubelet[2828]: E1029 00:34:51.015809 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:51.028918 systemd-networkd[1518]: caliae1d5863733: Gained IPv6LL Oct 29 00:34:51.045744 containerd[1619]: time="2025-10-29T00:34:51.045695835Z" level=info msg="CreateContainer within sandbox \"d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 00:34:51.065097 containerd[1619]: time="2025-10-29T00:34:51.065019590Z" level=info msg="Container 6043d1a7293665dacb8f749716037e656f671b8de8116c80449a73d113ff8033: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:34:51.081277 containerd[1619]: time="2025-10-29T00:34:51.081213352Z" level=info msg="CreateContainer within sandbox \"d0ab6b9e3189b84af58faa31b0472a7b3083989ff4f67db8c70b2a77dfb1b26b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6043d1a7293665dacb8f749716037e656f671b8de8116c80449a73d113ff8033\"" Oct 29 00:34:51.082021 containerd[1619]: time="2025-10-29T00:34:51.081961586Z" level=info msg="StartContainer for \"6043d1a7293665dacb8f749716037e656f671b8de8116c80449a73d113ff8033\"" Oct 29 00:34:51.082975 containerd[1619]: time="2025-10-29T00:34:51.082943238Z" level=info msg="connecting to shim 6043d1a7293665dacb8f749716037e656f671b8de8116c80449a73d113ff8033" address="unix:///run/containerd/s/f54139bb2a1f729bf0cae813cf8edc550b5ced8e5ae80844796c064b2fcdf545" protocol=ttrpc 
version=3 Oct 29 00:34:51.106796 systemd[1]: Started cri-containerd-6043d1a7293665dacb8f749716037e656f671b8de8116c80449a73d113ff8033.scope - libcontainer container 6043d1a7293665dacb8f749716037e656f671b8de8116c80449a73d113ff8033. Oct 29 00:34:51.157559 containerd[1619]: time="2025-10-29T00:34:51.157018818Z" level=info msg="StartContainer for \"6043d1a7293665dacb8f749716037e656f671b8de8116c80449a73d113ff8033\" returns successfully" Oct 29 00:34:51.183325 containerd[1619]: time="2025-10-29T00:34:51.183149033Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:34:51.185051 containerd[1619]: time="2025-10-29T00:34:51.184862618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 00:34:51.185051 containerd[1619]: time="2025-10-29T00:34:51.184952456Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 00:34:51.185995 kubelet[2828]: E1029 00:34:51.185817 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:34:51.185995 kubelet[2828]: E1029 00:34:51.185869 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:34:51.186420 containerd[1619]: time="2025-10-29T00:34:51.186382830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:34:51.187392 kubelet[2828]: E1029 00:34:51.187308 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nthc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-c97gh_calico-system(c2fb43e4-9dd6-4f00-992a-4f7339772bdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 00:34:51.188911 kubelet[2828]: E1029 00:34:51.188658 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c97gh" podUID="c2fb43e4-9dd6-4f00-992a-4f7339772bdb" Oct 29 00:34:51.542837 containerd[1619]: 
time="2025-10-29T00:34:51.541756790Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:34:51.552799 containerd[1619]: time="2025-10-29T00:34:51.552711794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:34:51.552906 containerd[1619]: time="2025-10-29T00:34:51.552764342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:34:51.553083 kubelet[2828]: E1029 00:34:51.553014 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:34:51.553083 kubelet[2828]: E1029 00:34:51.553079 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:34:51.554385 kubelet[2828]: E1029 00:34:51.553362 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxxpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-99fd685fb-vhm8w_calico-apiserver(226993fc-2dd7-48d2-9d26-aaf9fe3f09e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:34:51.554948 kubelet[2828]: E1029 00:34:51.554896 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99fd685fb-vhm8w" podUID="226993fc-2dd7-48d2-9d26-aaf9fe3f09e4" Oct 29 00:34:51.555174 containerd[1619]: time="2025-10-29T00:34:51.555052766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 00:34:51.668900 systemd-networkd[1518]: cali85888e9488a: Gained IPv6LL Oct 29 00:34:51.910781 containerd[1619]: time="2025-10-29T00:34:51.910591425Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:34:51.912282 containerd[1619]: time="2025-10-29T00:34:51.912220952Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 00:34:51.914339 containerd[1619]: time="2025-10-29T00:34:51.912341448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 00:34:51.914418 kubelet[2828]: E1029 00:34:51.912629 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:34:51.914418 kubelet[2828]: E1029 00:34:51.912720 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:34:51.914418 kubelet[2828]: E1029 00:34:51.912953 2828 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bz22c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qqtll_calico-system(31820ff6-c2b5-4f1e-b097-0b66b5dd1baa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 00:34:51.915583 containerd[1619]: time="2025-10-29T00:34:51.915552153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 00:34:51.953691 kubelet[2828]: E1029 00:34:51.953559 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:51.956625 kubelet[2828]: E1029 00:34:51.956499 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:51.958675 kubelet[2828]: E1029 00:34:51.958279 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99fd685fb-vhm8w" podUID="226993fc-2dd7-48d2-9d26-aaf9fe3f09e4" Oct 29 00:34:51.960024 kubelet[2828]: E1029 
00:34:51.959957 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c97gh" podUID="c2fb43e4-9dd6-4f00-992a-4f7339772bdb" Oct 29 00:34:51.988368 kubelet[2828]: I1029 00:34:51.988274 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-744kg" podStartSLOduration=43.988251891 podStartE2EDuration="43.988251891s" podCreationTimestamp="2025-10-29 00:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:34:51.971857572 +0000 UTC m=+48.610854167" watchObservedRunningTime="2025-10-29 00:34:51.988251891 +0000 UTC m=+48.627248486" Oct 29 00:34:52.052127 systemd-networkd[1518]: vxlan.calico: Link UP Oct 29 00:34:52.052141 systemd-networkd[1518]: vxlan.calico: Gained carrier Oct 29 00:34:52.233555 containerd[1619]: time="2025-10-29T00:34:52.233373134Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:34:52.234968 containerd[1619]: time="2025-10-29T00:34:52.234834526Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 00:34:52.234968 containerd[1619]: time="2025-10-29T00:34:52.234883728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 00:34:52.235199 kubelet[2828]: E1029 00:34:52.235119 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:34:52.235301 kubelet[2828]: E1029 00:34:52.235213 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:34:52.235398 kubelet[2828]: E1029 00:34:52.235351 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bz22c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qqtll_calico-system(31820ff6-c2b5-4f1e-b097-0b66b5dd1baa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 00:34:52.236909 kubelet[2828]: E1029 00:34:52.236803 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qqtll" podUID="31820ff6-c2b5-4f1e-b097-0b66b5dd1baa" Oct 29 00:34:52.244918 systemd-networkd[1518]: cali5850b800a94: Gained IPv6LL Oct 29 00:34:52.500815 systemd-networkd[1518]: cali5d6afe56ba9: Gained IPv6LL Oct 29 00:34:52.692818 systemd-networkd[1518]: cali1147cc75242: Gained IPv6LL Oct 29 00:34:52.959247 kubelet[2828]: E1029 00:34:52.959136 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:52.960308 kubelet[2828]: E1029 00:34:52.960263 2828 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qqtll" podUID="31820ff6-c2b5-4f1e-b097-0b66b5dd1baa" Oct 29 00:34:53.460916 systemd-networkd[1518]: vxlan.calico: Gained IPv6LL Oct 29 00:34:53.961271 kubelet[2828]: E1029 00:34:53.961205 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:34:55.358257 systemd[1]: Started sshd@11-10.0.0.10:22-10.0.0.1:49142.service - OpenSSH per-connection server daemon (10.0.0.1:49142). Oct 29 00:34:55.431277 sshd[5110]: Accepted publickey for core from 10.0.0.1 port 49142 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:34:55.433305 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:34:55.438622 systemd-logind[1587]: New session 12 of user core. Oct 29 00:34:55.447864 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 29 00:34:55.611816 sshd[5113]: Connection closed by 10.0.0.1 port 49142 Oct 29 00:34:55.612971 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Oct 29 00:34:55.619216 systemd[1]: sshd@11-10.0.0.10:22-10.0.0.1:49142.service: Deactivated successfully. Oct 29 00:34:55.621874 systemd[1]: session-12.scope: Deactivated successfully. Oct 29 00:34:55.623974 systemd-logind[1587]: Session 12 logged out. Waiting for processes to exit. Oct 29 00:34:55.625248 systemd-logind[1587]: Removed session 12. Oct 29 00:35:00.627713 systemd[1]: Started sshd@12-10.0.0.10:22-10.0.0.1:34944.service - OpenSSH per-connection server daemon (10.0.0.1:34944). Oct 29 00:35:00.700031 sshd[5135]: Accepted publickey for core from 10.0.0.1 port 34944 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:35:00.701891 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:35:00.707319 systemd-logind[1587]: New session 13 of user core. Oct 29 00:35:00.716830 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 29 00:35:00.835904 sshd[5138]: Connection closed by 10.0.0.1 port 34944 Oct 29 00:35:00.836362 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Oct 29 00:35:00.845411 systemd[1]: sshd@12-10.0.0.10:22-10.0.0.1:34944.service: Deactivated successfully. Oct 29 00:35:00.847386 systemd[1]: session-13.scope: Deactivated successfully. Oct 29 00:35:00.848371 systemd-logind[1587]: Session 13 logged out. Waiting for processes to exit. 
Oct 29 00:35:00.851845 systemd[1]: Started sshd@13-10.0.0.10:22-10.0.0.1:34954.service - OpenSSH per-connection server daemon (10.0.0.1:34954). Oct 29 00:35:00.852556 systemd-logind[1587]: Removed session 13. Oct 29 00:35:00.906276 sshd[5153]: Accepted publickey for core from 10.0.0.1 port 34954 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:35:00.907897 sshd-session[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:35:00.912616 systemd-logind[1587]: New session 14 of user core. Oct 29 00:35:00.926918 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 29 00:35:01.078558 sshd[5156]: Connection closed by 10.0.0.1 port 34954 Oct 29 00:35:01.080317 sshd-session[5153]: pam_unix(sshd:session): session closed for user core Oct 29 00:35:01.092004 systemd[1]: sshd@13-10.0.0.10:22-10.0.0.1:34954.service: Deactivated successfully. Oct 29 00:35:01.094630 systemd[1]: session-14.scope: Deactivated successfully. Oct 29 00:35:01.097699 systemd-logind[1587]: Session 14 logged out. Waiting for processes to exit. Oct 29 00:35:01.101046 systemd[1]: Started sshd@14-10.0.0.10:22-10.0.0.1:34962.service - OpenSSH per-connection server daemon (10.0.0.1:34962). Oct 29 00:35:01.104258 systemd-logind[1587]: Removed session 14. Oct 29 00:35:01.178009 sshd[5167]: Accepted publickey for core from 10.0.0.1 port 34962 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:35:01.180018 sshd-session[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:35:01.185790 systemd-logind[1587]: New session 15 of user core. Oct 29 00:35:01.200851 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 29 00:35:01.339868 sshd[5171]: Connection closed by 10.0.0.1 port 34962 Oct 29 00:35:01.340245 sshd-session[5167]: pam_unix(sshd:session): session closed for user core Oct 29 00:35:01.344003 systemd[1]: sshd@14-10.0.0.10:22-10.0.0.1:34962.service: Deactivated successfully. Oct 29 00:35:01.346471 systemd[1]: session-15.scope: Deactivated successfully. Oct 29 00:35:01.348951 systemd-logind[1587]: Session 15 logged out. Waiting for processes to exit. Oct 29 00:35:01.350082 systemd-logind[1587]: Removed session 15. 
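The sshd entries above record a series of short sessions (12 through 15): publickey accept, PAM session open, session scope start, connection close, scope deactivation. When auditing a capture like this one, a small parser is often handy; the sketch below is illustrative only, reads journal text from stdin, and pairs the accept and close lines by source address and port:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	accepted := regexp.MustCompile(`Accepted publickey for (\S+) from (\S+) port (\d+)`)
	closed := regexp.MustCompile(`Connection closed by (\S+) port (\d+)`)

	open := map[string]string{} // "addr:port" -> user
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		if m := accepted.FindStringSubmatch(line); m != nil {
			open[m[2]+":"+m[3]] = m[1]
		} else if m := closed.FindStringSubmatch(line); m != nil {
			key := m[1] + ":" + m[2]
			if user, ok := open[key]; ok {
				fmt.Printf("session for %s from %s completed\n", user, key)
				delete(open, key)
			}
		}
	}
	fmt.Printf("%d session(s) still open\n", len(open))
}
```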
Oct 29 00:35:01.633917 containerd[1619]: time="2025-10-29T00:35:01.633803735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 00:35:01.996019 containerd[1619]: time="2025-10-29T00:35:01.995841347Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:35:01.998551 containerd[1619]: time="2025-10-29T00:35:01.998487582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 00:35:01.998551 containerd[1619]: time="2025-10-29T00:35:01.998519393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 00:35:01.998787 kubelet[2828]: E1029 00:35:01.998741 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:35:01.999142 kubelet[2828]: E1029 00:35:01.998794 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:35:01.999142 kubelet[2828]: E1029 00:35:01.999025 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kh2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67654757f7-t64h5_calico-system(4c9c1840-5938-4d20-aaba-3f102838a251): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 00:35:01.999323 containerd[1619]: time="2025-10-29T00:35:01.999039813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:35:02.000876 kubelet[2828]: E1029 00:35:02.000812 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67654757f7-t64h5" podUID="4c9c1840-5938-4d20-aaba-3f102838a251" Oct 29 00:35:02.359259 containerd[1619]: time="2025-10-29T00:35:02.359194789Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:35:02.360649 containerd[1619]: time="2025-10-29T00:35:02.360596372Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:35:02.360743 containerd[1619]: time="2025-10-29T00:35:02.360668991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:35:02.360942 kubelet[2828]: E1029 00:35:02.360885 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:35:02.361009 kubelet[2828]: E1029 00:35:02.360953 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:35:02.361253 kubelet[2828]: E1029 00:35:02.361139 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6gz4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-99fd685fb-9ss4w_calico-apiserver(e6d89d0d-8eff-4088-87f1-00579f9e5f1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:35:02.362449 kubelet[2828]: E1029 00:35:02.362411 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99fd685fb-9ss4w" podUID="e6d89d0d-8eff-4088-87f1-00579f9e5f1f" Oct 29 00:35:02.631191 containerd[1619]: time="2025-10-29T00:35:02.631057520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 00:35:02.990712 containerd[1619]: time="2025-10-29T00:35:02.990518581Z" level=info msg="fetch 
failed after status: 404 Not Found" host=ghcr.io Oct 29 00:35:02.992165 containerd[1619]: time="2025-10-29T00:35:02.992097404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 00:35:02.992252 containerd[1619]: time="2025-10-29T00:35:02.992231491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 00:35:02.992481 kubelet[2828]: E1029 00:35:02.992422 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:35:02.992566 kubelet[2828]: E1029 00:35:02.992507 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:35:02.992868 kubelet[2828]: E1029 00:35:02.992789 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:207fdd95d4f244cd9bb253bbb4016093,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sz2l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589b55fc85-qrw8q_calico-system(4100b610-f524-41bd-8d21-e97b360c25bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 00:35:02.993069 containerd[1619]: time="2025-10-29T00:35:02.992887581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 00:35:03.303438 
containerd[1619]: time="2025-10-29T00:35:03.303073324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:35:03.305163 containerd[1619]: time="2025-10-29T00:35:03.305090065Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 00:35:03.305266 containerd[1619]: time="2025-10-29T00:35:03.305130121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 00:35:03.305484 kubelet[2828]: E1029 00:35:03.305425 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:35:03.305943 kubelet[2828]: E1029 00:35:03.305506 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:35:03.305943 kubelet[2828]: E1029 00:35:03.305850 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nthc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-c97gh_calico-system(c2fb43e4-9dd6-4f00-992a-4f7339772bdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 00:35:03.306170 containerd[1619]: time="2025-10-29T00:35:03.306138998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 00:35:03.307325 kubelet[2828]: E1029 00:35:03.307285 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c97gh" podUID="c2fb43e4-9dd6-4f00-992a-4f7339772bdb" Oct 29 00:35:03.617359 containerd[1619]: time="2025-10-29T00:35:03.617292780Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:35:03.618748 containerd[1619]: time="2025-10-29T00:35:03.618519194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 00:35:03.618748 containerd[1619]: time="2025-10-29T00:35:03.618613945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 00:35:03.618995 kubelet[2828]: E1029 00:35:03.618861 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:35:03.618995 kubelet[2828]: E1029 00:35:03.618946 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:35:03.620188 kubelet[2828]: 
E1029 00:35:03.619108 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sz2l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589b55fc85-qrw8q_calico-system(4100b610-f524-41bd-8d21-e97b360c25bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 00:35:03.620480 kubelet[2828]: E1029 00:35:03.620411 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589b55fc85-qrw8q" podUID="4100b610-f524-41bd-8d21-e97b360c25bf" Oct 29 00:35:04.630269 containerd[1619]: time="2025-10-29T00:35:04.630200969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 00:35:05.086756 containerd[1619]: time="2025-10-29T00:35:05.086692703Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:35:05.088180 containerd[1619]: time="2025-10-29T00:35:05.088138804Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 00:35:05.088299 containerd[1619]: time="2025-10-29T00:35:05.088190954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 00:35:05.088501 kubelet[2828]: E1029 00:35:05.088369 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:35:05.088501 kubelet[2828]: E1029 00:35:05.088437 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:35:05.088916 kubelet[2828]: E1029 00:35:05.088605 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bz22c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qqtll_calico-system(31820ff6-c2b5-4f1e-b097-0b66b5dd1baa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 00:35:05.091684 
containerd[1619]: time="2025-10-29T00:35:05.090829132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 00:35:05.493688 containerd[1619]: time="2025-10-29T00:35:05.493490945Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:35:05.495603 containerd[1619]: time="2025-10-29T00:35:05.495490218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 00:35:05.495746 containerd[1619]: time="2025-10-29T00:35:05.495541226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 00:35:05.496000 kubelet[2828]: E1029 00:35:05.495942 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:35:05.496065 kubelet[2828]: E1029 00:35:05.496013 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:35:05.496214 kubelet[2828]: E1029 00:35:05.496151 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bz22c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qqtll_calico-system(31820ff6-c2b5-4f1e-b097-0b66b5dd1baa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 00:35:05.497436 kubelet[2828]: E1029 00:35:05.497382 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qqtll" podUID="31820ff6-c2b5-4f1e-b097-0b66b5dd1baa" Oct 29 00:35:06.353478 systemd[1]: Started sshd@15-10.0.0.10:22-10.0.0.1:58518.service - OpenSSH per-connection server daemon (10.0.0.1:58518). Oct 29 00:35:06.415867 sshd[5194]: Accepted publickey for core from 10.0.0.1 port 58518 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:35:06.417908 sshd-session[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:35:06.423185 systemd-logind[1587]: New session 16 of user core. 
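Every pull failure above follows the same path: containerd's resolver gets a 404 from ghcr.io for the v3.30.4 tag, wraps it as a NotFound error, and the kubelet reports ErrImagePull for the container. The minimal Go sketch below reproduces one of those pulls directly against the node's containerd socket; it assumes the containerd 1.x Go client (github.com/containerd/containerd), the default socket path, and the CRI image namespace "k8s.io", none of which are stated in the log.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/errdefs"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the same containerd the kubelet talks to (path assumed).
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images normally live in the "k8s.io" namespace (assumed).
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // One of the references failing in the log above.
        ref := "ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
        if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
            if errdefs.IsNotFound(err) {
                // Matches the "failed to resolve reference ... not found" errors above.
                fmt.Printf("reference %s does not resolve upstream (404)\n", ref)
                return
            }
            log.Fatal(err)
        }
        fmt.Println("pulled", ref)
    }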
Oct 29 00:35:06.435823 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 29 00:35:06.552023 sshd[5197]: Connection closed by 10.0.0.1 port 58518 Oct 29 00:35:06.552377 sshd-session[5194]: pam_unix(sshd:session): session closed for user core Oct 29 00:35:06.556441 systemd[1]: sshd@15-10.0.0.10:22-10.0.0.1:58518.service: Deactivated successfully. Oct 29 00:35:06.559545 systemd[1]: session-16.scope: Deactivated successfully. Oct 29 00:35:06.561120 systemd-logind[1587]: Session 16 logged out. Waiting for processes to exit. Oct 29 00:35:06.563423 systemd-logind[1587]: Removed session 16. Oct 29 00:35:06.630405 containerd[1619]: time="2025-10-29T00:35:06.630242528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:35:07.002984 containerd[1619]: time="2025-10-29T00:35:07.002831330Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:35:07.004249 containerd[1619]: time="2025-10-29T00:35:07.004186524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:35:07.004305 containerd[1619]: time="2025-10-29T00:35:07.004271096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:35:07.004510 kubelet[2828]: E1029 00:35:07.004450 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:35:07.004510 kubelet[2828]: E1029 00:35:07.004509 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:35:07.005006 kubelet[2828]: E1029 00:35:07.004681 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxxpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-99fd685fb-vhm8w_calico-apiserver(226993fc-2dd7-48d2-9d26-aaf9fe3f09e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:35:07.005860 kubelet[2828]: E1029 00:35:07.005807 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99fd685fb-vhm8w" podUID="226993fc-2dd7-48d2-9d26-aaf9fe3f09e4" Oct 29 00:35:11.575276 systemd[1]: Started sshd@16-10.0.0.10:22-10.0.0.1:58528.service - OpenSSH per-connection server daemon (10.0.0.1:58528). Oct 29 00:35:11.631772 sshd[5217]: Accepted publickey for core from 10.0.0.1 port 58528 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:35:11.633668 sshd-session[5217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:35:11.639249 systemd-logind[1587]: New session 17 of user core. Oct 29 00:35:11.649906 systemd[1]: Started session-17.scope - Session 17 of User core. 
Oct 29 00:35:11.775879 sshd[5220]: Connection closed by 10.0.0.1 port 58528 Oct 29 00:35:11.776333 sshd-session[5217]: pam_unix(sshd:session): session closed for user core Oct 29 00:35:11.781327 systemd[1]: sshd@16-10.0.0.10:22-10.0.0.1:58528.service: Deactivated successfully. Oct 29 00:35:11.783432 systemd[1]: session-17.scope: Deactivated successfully. Oct 29 00:35:11.784403 systemd-logind[1587]: Session 17 logged out. Waiting for processes to exit. Oct 29 00:35:11.785716 systemd-logind[1587]: Removed session 17. Oct 29 00:35:14.631611 kubelet[2828]: E1029 00:35:14.631538 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c97gh" podUID="c2fb43e4-9dd6-4f00-992a-4f7339772bdb" Oct 29 00:35:16.632678 kubelet[2828]: E1029 00:35:16.631066 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99fd685fb-9ss4w" podUID="e6d89d0d-8eff-4088-87f1-00579f9e5f1f" Oct 29 00:35:16.635116 kubelet[2828]: E1029 00:35:16.635067 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67654757f7-t64h5" podUID="4c9c1840-5938-4d20-aaba-3f102838a251" Oct 29 00:35:16.685986 containerd[1619]: time="2025-10-29T00:35:16.685922465Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1a7ee94ba46a7f0e5e04c3e91fb566ff78e45022f6034d61fcc29cc6175d707\" id:\"6e97262412af3765fa112595537007c13e8b41aa7ae5196a4d640e90827b9853\" pid:5252 exited_at:{seconds:1761698116 nanos:685538012}" Oct 29 00:35:16.689043 kubelet[2828]: E1029 00:35:16.689008 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:35:16.789055 systemd[1]: Started sshd@17-10.0.0.10:22-10.0.0.1:59430.service - OpenSSH per-connection server daemon (10.0.0.1:59430). Oct 29 00:35:16.854894 sshd[5267]: Accepted publickey for core from 10.0.0.1 port 59430 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:35:16.856743 sshd-session[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:35:16.861244 systemd-logind[1587]: New session 18 of user core. 
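The "Nameserver limits exceeded" warning above comes from the kubelet seeing more than three nameserver entries in the resolver configuration it hands to pods; the classic resolver only honors the first three, which is why the applied line is trimmed to 1.1.1.1 1.0.0.1 8.8.8.8. A small Go check along those lines is sketched below; the /etc/resolv.conf path is an assumption, since the kubelet may be pointed at a different file via --resolv-conf.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the classic resolver limit the kubelet warning
    // refers to: only the first three entries are actually used.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf") // path assumed; see --resolv-conf
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }

        if len(servers) > maxNameservers {
            fmt.Printf("warning: %d nameservers configured, only the first %d apply: %v\n",
                len(servers), maxNameservers, servers[:maxNameservers])
            return
        }
        fmt.Println("nameserver count OK:", servers)
    }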
Oct 29 00:35:16.875833 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 29 00:35:17.013081 sshd[5271]: Connection closed by 10.0.0.1 port 59430 Oct 29 00:35:17.013435 sshd-session[5267]: pam_unix(sshd:session): session closed for user core Oct 29 00:35:17.019265 systemd[1]: sshd@17-10.0.0.10:22-10.0.0.1:59430.service: Deactivated successfully. Oct 29 00:35:17.021784 systemd[1]: session-18.scope: Deactivated successfully. Oct 29 00:35:17.022700 systemd-logind[1587]: Session 18 logged out. Waiting for processes to exit. Oct 29 00:35:17.024380 systemd-logind[1587]: Removed session 18. Oct 29 00:35:18.630974 kubelet[2828]: E1029 00:35:18.630858 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99fd685fb-vhm8w" podUID="226993fc-2dd7-48d2-9d26-aaf9fe3f09e4" Oct 29 00:35:18.631601 kubelet[2828]: E1029 00:35:18.631239 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589b55fc85-qrw8q" podUID="4100b610-f524-41bd-8d21-e97b360c25bf" Oct 29 00:35:19.632959 kubelet[2828]: E1029 00:35:19.632880 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qqtll" podUID="31820ff6-c2b5-4f1e-b097-0b66b5dd1baa" Oct 29 00:35:22.035466 systemd[1]: Started sshd@18-10.0.0.10:22-10.0.0.1:59446.service - OpenSSH per-connection server daemon (10.0.0.1:59446). 
Oct 29 00:35:22.093754 sshd[5285]: Accepted publickey for core from 10.0.0.1 port 59446 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:35:22.096158 sshd-session[5285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:35:22.101195 systemd-logind[1587]: New session 19 of user core. Oct 29 00:35:22.111772 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 29 00:35:22.268337 sshd[5288]: Connection closed by 10.0.0.1 port 59446 Oct 29 00:35:22.269208 sshd-session[5285]: pam_unix(sshd:session): session closed for user core Oct 29 00:35:22.276051 systemd-logind[1587]: Session 19 logged out. Waiting for processes to exit. Oct 29 00:35:22.276708 systemd[1]: sshd@18-10.0.0.10:22-10.0.0.1:59446.service: Deactivated successfully. Oct 29 00:35:22.279895 systemd[1]: session-19.scope: Deactivated successfully. Oct 29 00:35:22.284139 systemd-logind[1587]: Removed session 19. Oct 29 00:35:27.292932 systemd[1]: Started sshd@19-10.0.0.10:22-10.0.0.1:46274.service - OpenSSH per-connection server daemon (10.0.0.1:46274). Oct 29 00:35:27.367749 sshd[5301]: Accepted publickey for core from 10.0.0.1 port 46274 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:35:27.369506 sshd-session[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:35:27.374316 systemd-logind[1587]: New session 20 of user core. Oct 29 00:35:27.381848 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 29 00:35:27.526048 sshd[5304]: Connection closed by 10.0.0.1 port 46274 Oct 29 00:35:27.526456 sshd-session[5301]: pam_unix(sshd:session): session closed for user core Oct 29 00:35:27.540085 systemd[1]: sshd@19-10.0.0.10:22-10.0.0.1:46274.service: Deactivated successfully. Oct 29 00:35:27.542678 systemd[1]: session-20.scope: Deactivated successfully. Oct 29 00:35:27.543614 systemd-logind[1587]: Session 20 logged out. Waiting for processes to exit. Oct 29 00:35:27.548137 systemd[1]: Started sshd@20-10.0.0.10:22-10.0.0.1:46286.service - OpenSSH per-connection server daemon (10.0.0.1:46286). Oct 29 00:35:27.548961 systemd-logind[1587]: Removed session 20. Oct 29 00:35:27.600701 sshd[5319]: Accepted publickey for core from 10.0.0.1 port 46286 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:35:27.602182 sshd-session[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:35:27.607396 systemd-logind[1587]: New session 21 of user core. Oct 29 00:35:27.618926 systemd[1]: Started session-21.scope - Session 21 of User core. 
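Each failed pull above is a plain 404 from ghcr.io, which suggests the v3.30.4 tag simply is not published under ghcr.io/flatcar/calico/*. A hedged Go sketch for checking that independently of containerd follows; it assumes ghcr.io exposes the standard Docker Registry HTTP API v2 with an anonymous pull-token endpoint at https://ghcr.io/token, and the Accept header may need additional manifest media types for non-OCI images.

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "os"
    )

    // manifestExists asks the registry whether a tag resolves. The token
    // endpoint and Accept type below are assumptions about ghcr.io, not
    // something taken from this log.
    func manifestExists(repo, tag string) (bool, error) {
        // Step 1: anonymous bearer token scoped to pull.
        resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        var tok struct {
            Token string `json:"token"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
            return false, err
        }

        // Step 2: HEAD the manifest; a 404 here is what containerd reports above.
        req, err := http.NewRequest(http.MethodHead,
            fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
        if err != nil {
            return false, err
        }
        req.Header.Set("Authorization", "Bearer "+tok.Token)
        req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
        res, err := http.DefaultClient.Do(req)
        if err != nil {
            return false, err
        }
        defer res.Body.Close()
        return res.StatusCode == http.StatusOK, nil
    }

    func main() {
        ok, err := manifestExists("flatcar/calico/kube-controllers", "v3.30.4")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("tag exists:", ok)
    }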
Oct 29 00:35:27.631491 containerd[1619]: time="2025-10-29T00:35:27.631092210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:35:27.985864 containerd[1619]: time="2025-10-29T00:35:27.985783402Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:35:27.989404 containerd[1619]: time="2025-10-29T00:35:27.989346860Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:35:27.989621 containerd[1619]: time="2025-10-29T00:35:27.989445547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:35:27.989718 kubelet[2828]: E1029 00:35:27.989602 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:35:27.990325 kubelet[2828]: E1029 00:35:27.989729 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:35:27.990325 kubelet[2828]: E1029 00:35:27.990091 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6gz4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-99fd685fb-9ss4w_calico-apiserver(e6d89d0d-8eff-4088-87f1-00579f9e5f1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:35:27.990544 containerd[1619]: time="2025-10-29T00:35:27.990075163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 00:35:27.991682 kubelet[2828]: E1029 00:35:27.991610 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99fd685fb-9ss4w" podUID="e6d89d0d-8eff-4088-87f1-00579f9e5f1f" Oct 29 00:35:27.996436 sshd[5322]: Connection closed by 10.0.0.1 port 46286 Oct 29 00:35:27.997738 sshd-session[5319]: pam_unix(sshd:session): session closed for user core Oct 29 00:35:28.009175 systemd[1]: sshd@20-10.0.0.10:22-10.0.0.1:46286.service: Deactivated successfully. Oct 29 00:35:28.012005 systemd[1]: session-21.scope: Deactivated successfully. Oct 29 00:35:28.013627 systemd-logind[1587]: Session 21 logged out. Waiting for processes to exit. Oct 29 00:35:28.018131 systemd[1]: Started sshd@21-10.0.0.10:22-10.0.0.1:46290.service - OpenSSH per-connection server daemon (10.0.0.1:46290). Oct 29 00:35:28.018998 systemd-logind[1587]: Removed session 21. Oct 29 00:35:28.087016 sshd[5334]: Accepted publickey for core from 10.0.0.1 port 46290 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:35:28.089186 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:35:28.094950 systemd-logind[1587]: New session 22 of user core. Oct 29 00:35:28.104856 systemd[1]: Started session-22.scope - Session 22 of User core. 
Oct 29 00:35:28.341701 containerd[1619]: time="2025-10-29T00:35:28.341037627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:35:28.344727 containerd[1619]: time="2025-10-29T00:35:28.344676216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 00:35:28.344815 containerd[1619]: time="2025-10-29T00:35:28.344758773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 00:35:28.344956 kubelet[2828]: E1029 00:35:28.344910 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:35:28.345035 kubelet[2828]: E1029 00:35:28.344967 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:35:28.345172 kubelet[2828]: E1029 00:35:28.345112 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nthc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-c97gh_calico-system(c2fb43e4-9dd6-4f00-992a-4f7339772bdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 00:35:28.346679 kubelet[2828]: E1029 00:35:28.346606 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c97gh" podUID="c2fb43e4-9dd6-4f00-992a-4f7339772bdb" Oct 29 00:35:28.765600 sshd[5337]: Connection closed by 10.0.0.1 port 46290 Oct 29 00:35:28.768741 sshd-session[5334]: pam_unix(sshd:session): session closed for user core Oct 29 00:35:28.780983 systemd[1]: sshd@21-10.0.0.10:22-10.0.0.1:46290.service: Deactivated successfully. Oct 29 00:35:28.784542 systemd[1]: session-22.scope: Deactivated successfully. Oct 29 00:35:28.787827 systemd-logind[1587]: Session 22 logged out. Waiting for processes to exit. Oct 29 00:35:28.790301 systemd-logind[1587]: Removed session 22. Oct 29 00:35:28.792960 systemd[1]: Started sshd@22-10.0.0.10:22-10.0.0.1:46292.service - OpenSSH per-connection server daemon (10.0.0.1:46292). Oct 29 00:35:28.856203 sshd[5374]: Accepted publickey for core from 10.0.0.1 port 46292 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:35:28.857861 sshd-session[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:35:28.863236 systemd-logind[1587]: New session 23 of user core. Oct 29 00:35:28.875830 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 29 00:35:29.104975 sshd[5379]: Connection closed by 10.0.0.1 port 46292 Oct 29 00:35:29.106859 sshd-session[5374]: pam_unix(sshd:session): session closed for user core Oct 29 00:35:29.121427 systemd[1]: sshd@22-10.0.0.10:22-10.0.0.1:46292.service: Deactivated successfully. Oct 29 00:35:29.124050 systemd[1]: session-23.scope: Deactivated successfully. Oct 29 00:35:29.125523 systemd-logind[1587]: Session 23 logged out. Waiting for processes to exit. Oct 29 00:35:29.129503 systemd[1]: Started sshd@23-10.0.0.10:22-10.0.0.1:46298.service - OpenSSH per-connection server daemon (10.0.0.1:46298). Oct 29 00:35:29.130227 systemd-logind[1587]: Removed session 23. 
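The goldmane and apiserver pulls first fail around 00:35:02-03 and are retried around 00:35:27-28 before dropping back into ImagePullBackOff: the kubelet spaces image-pull retries with a capped exponential backoff, commonly cited as starting near 10s, doubling per failure, and capping at 5m. Those exact values are not in this log and are treated as assumptions in the minimal schedule sketch below.

    package main

    import (
        "fmt"
        "time"
    )

    // backoffSchedule returns a capped doubling backoff like the one the
    // kubelet applies between image pull attempts. The initial and maximum
    // delays are commonly cited defaults, not values taken from this log.
    func backoffSchedule(initial, maxDelay time.Duration, attempts int) []time.Duration {
        out := make([]time.Duration, 0, attempts)
        d := initial
        for i := 0; i < attempts; i++ {
            out = append(out, d)
            d *= 2
            if d > maxDelay {
                d = maxDelay
            }
        }
        return out
    }

    func main() {
        for i, d := range backoffSchedule(10*time.Second, 5*time.Minute, 8) {
            fmt.Printf("attempt %d: wait %s before retrying the pull\n", i+1, d)
        }
    }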
Oct 29 00:35:29.190746 sshd[5391]: Accepted publickey for core from 10.0.0.1 port 46298 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:35:29.192943 sshd-session[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:35:29.198311 systemd-logind[1587]: New session 24 of user core. Oct 29 00:35:29.215970 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 29 00:35:29.341118 sshd[5394]: Connection closed by 10.0.0.1 port 46298 Oct 29 00:35:29.341460 sshd-session[5391]: pam_unix(sshd:session): session closed for user core Oct 29 00:35:29.346802 systemd[1]: sshd@23-10.0.0.10:22-10.0.0.1:46298.service: Deactivated successfully. Oct 29 00:35:29.349438 systemd[1]: session-24.scope: Deactivated successfully. Oct 29 00:35:29.350426 systemd-logind[1587]: Session 24 logged out. Waiting for processes to exit. Oct 29 00:35:29.352416 systemd-logind[1587]: Removed session 24. Oct 29 00:35:29.629816 containerd[1619]: time="2025-10-29T00:35:29.629771460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 00:35:30.021591 containerd[1619]: time="2025-10-29T00:35:30.021418661Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:35:30.031807 containerd[1619]: time="2025-10-29T00:35:30.031715701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 00:35:30.032099 containerd[1619]: time="2025-10-29T00:35:30.031754896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 00:35:30.032301 kubelet[2828]: E1029 00:35:30.032250 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:35:30.032835 kubelet[2828]: E1029 00:35:30.032791 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:35:30.033041 kubelet[2828]: E1029 00:35:30.032982 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kh2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67654757f7-t64h5_calico-system(4c9c1840-5938-4d20-aaba-3f102838a251): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 00:35:30.034442 kubelet[2828]: E1029 00:35:30.034383 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67654757f7-t64h5" podUID="4c9c1840-5938-4d20-aaba-3f102838a251" Oct 29 00:35:30.629226 kubelet[2828]: E1029 00:35:30.629175 2828 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:35:31.630817 containerd[1619]: time="2025-10-29T00:35:31.630721245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 00:35:32.110268 containerd[1619]: time="2025-10-29T00:35:32.110182749Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:35:32.111500 containerd[1619]: time="2025-10-29T00:35:32.111459190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 00:35:32.111617 containerd[1619]: time="2025-10-29T00:35:32.111553058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 00:35:32.111821 kubelet[2828]: E1029 00:35:32.111765 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:35:32.112253 kubelet[2828]: E1029 00:35:32.111844 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:35:32.112253 kubelet[2828]: E1029 00:35:32.112053 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bz22c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qqtll_calico-system(31820ff6-c2b5-4f1e-b097-0b66b5dd1baa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 00:35:32.114223 containerd[1619]: time="2025-10-29T00:35:32.114175442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 00:35:32.442289 containerd[1619]: time="2025-10-29T00:35:32.442109271Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:35:32.444255 containerd[1619]: time="2025-10-29T00:35:32.444210466Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 00:35:32.444328 containerd[1619]: time="2025-10-29T00:35:32.444268457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 00:35:32.444542 kubelet[2828]: E1029 00:35:32.444487 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:35:32.444627 kubelet[2828]: E1029 00:35:32.444556 2828 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:35:32.444843 kubelet[2828]: E1029 00:35:32.444779 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bz22c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qqtll_calico-system(31820ff6-c2b5-4f1e-b097-0b66b5dd1baa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 00:35:32.446230 kubelet[2828]: E1029 00:35:32.446146 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-qqtll" podUID="31820ff6-c2b5-4f1e-b097-0b66b5dd1baa" Oct 29 00:35:33.630811 containerd[1619]: time="2025-10-29T00:35:33.630620724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:35:33.985735 containerd[1619]: time="2025-10-29T00:35:33.985552186Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:35:33.990560 containerd[1619]: time="2025-10-29T00:35:33.990508083Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:35:33.990668 containerd[1619]: time="2025-10-29T00:35:33.990591051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:35:33.990822 kubelet[2828]: E1029 00:35:33.990772 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:35:33.991237 kubelet[2828]: E1029 00:35:33.990834 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:35:33.991237 kubelet[2828]: E1029 00:35:33.991090 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxxpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-99fd685fb-vhm8w_calico-apiserver(226993fc-2dd7-48d2-9d26-aaf9fe3f09e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:35:33.992004 containerd[1619]: time="2025-10-29T00:35:33.991957502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 00:35:33.992422 kubelet[2828]: E1029 00:35:33.992372 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99fd685fb-vhm8w" podUID="226993fc-2dd7-48d2-9d26-aaf9fe3f09e4" Oct 29 00:35:34.351505 containerd[1619]: time="2025-10-29T00:35:34.351430747Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:35:34.353040 containerd[1619]: time="2025-10-29T00:35:34.353002747Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 00:35:34.353173 containerd[1619]: time="2025-10-29T00:35:34.353104891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 00:35:34.353318 kubelet[2828]: E1029 00:35:34.353262 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:35:34.353374 kubelet[2828]: E1029 00:35:34.353321 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:35:34.353496 kubelet[2828]: E1029 00:35:34.353451 2828 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:207fdd95d4f244cd9bb253bbb4016093,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sz2l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-589b55fc85-qrw8q_calico-system(4100b610-f524-41bd-8d21-e97b360c25bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 00:35:34.355430 containerd[1619]: time="2025-10-29T00:35:34.355405933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 00:35:34.362929 systemd[1]: Started sshd@24-10.0.0.10:22-10.0.0.1:46306.service - OpenSSH per-connection server daemon (10.0.0.1:46306). Oct 29 00:35:34.425733 sshd[5416]: Accepted publickey for core from 10.0.0.1 port 46306 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:35:34.427810 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:35:34.433235 systemd-logind[1587]: New session 25 of user core. Oct 29 00:35:34.440791 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 29 00:35:34.562090 sshd[5419]: Connection closed by 10.0.0.1 port 46306 Oct 29 00:35:34.562425 sshd-session[5416]: pam_unix(sshd:session): session closed for user core Oct 29 00:35:34.568897 systemd[1]: sshd@24-10.0.0.10:22-10.0.0.1:46306.service: Deactivated successfully. Oct 29 00:35:34.571078 systemd[1]: session-25.scope: Deactivated successfully. Oct 29 00:35:34.571978 systemd-logind[1587]: Session 25 logged out. Waiting for processes to exit. Oct 29 00:35:34.573345 systemd-logind[1587]: Removed session 25. 
Oct 29 00:35:34.700187 containerd[1619]: time="2025-10-29T00:35:34.700028550Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:35:34.701272 containerd[1619]: time="2025-10-29T00:35:34.701231571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 00:35:34.701338 containerd[1619]: time="2025-10-29T00:35:34.701314909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 00:35:34.701505 kubelet[2828]: E1029 00:35:34.701460 2828 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:35:34.701567 kubelet[2828]: E1029 00:35:34.701517 2828 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:35:34.701764 kubelet[2828]: E1029 00:35:34.701720 2828 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sz2l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPol
icy:nil,} start failed in pod whisker-589b55fc85-qrw8q_calico-system(4100b610-f524-41bd-8d21-e97b360c25bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 00:35:34.702989 kubelet[2828]: E1029 00:35:34.702924 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-589b55fc85-qrw8q" podUID="4100b610-f524-41bd-8d21-e97b360c25bf" Oct 29 00:35:37.632682 kubelet[2828]: E1029 00:35:37.632615 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:35:38.629177 kubelet[2828]: E1029 00:35:38.629099 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:35:39.574912 systemd[1]: Started sshd@25-10.0.0.10:22-10.0.0.1:40756.service - OpenSSH per-connection server daemon (10.0.0.1:40756). Oct 29 00:35:39.642518 sshd[5436]: Accepted publickey for core from 10.0.0.1 port 40756 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:35:39.644459 sshd-session[5436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:35:39.649930 systemd-logind[1587]: New session 26 of user core. Oct 29 00:35:39.662839 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 29 00:35:39.787035 sshd[5439]: Connection closed by 10.0.0.1 port 40756 Oct 29 00:35:39.787346 sshd-session[5436]: pam_unix(sshd:session): session closed for user core Oct 29 00:35:39.793838 systemd[1]: sshd@25-10.0.0.10:22-10.0.0.1:40756.service: Deactivated successfully. Oct 29 00:35:39.796129 systemd[1]: session-26.scope: Deactivated successfully. Oct 29 00:35:39.797086 systemd-logind[1587]: Session 26 logged out. Waiting for processes to exit. Oct 29 00:35:39.798630 systemd-logind[1587]: Removed session 26. 
Oct 29 00:35:41.631985 kubelet[2828]: E1029 00:35:41.631924 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c97gh" podUID="c2fb43e4-9dd6-4f00-992a-4f7339772bdb" Oct 29 00:35:42.630094 kubelet[2828]: E1029 00:35:42.630016 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99fd685fb-9ss4w" podUID="e6d89d0d-8eff-4088-87f1-00579f9e5f1f" Oct 29 00:35:43.633980 kubelet[2828]: E1029 00:35:43.633810 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qqtll" podUID="31820ff6-c2b5-4f1e-b097-0b66b5dd1baa" Oct 29 00:35:44.630554 kubelet[2828]: E1029 00:35:44.630490 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67654757f7-t64h5" podUID="4c9c1840-5938-4d20-aaba-3f102838a251" Oct 29 00:35:44.801540 systemd[1]: Started sshd@26-10.0.0.10:22-10.0.0.1:40758.service - OpenSSH per-connection server daemon (10.0.0.1:40758). Oct 29 00:35:44.881307 sshd[5452]: Accepted publickey for core from 10.0.0.1 port 40758 ssh2: RSA SHA256:NOSddcycRuuQ0Zp9cdpGYZy5vFByHCSYLp01T7glzwM Oct 29 00:35:44.881894 sshd-session[5452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:35:44.886895 systemd-logind[1587]: New session 27 of user core. 
Oct 29 00:35:44.891782 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 29 00:35:45.021419 sshd[5455]: Connection closed by 10.0.0.1 port 40758 Oct 29 00:35:45.021984 sshd-session[5452]: pam_unix(sshd:session): session closed for user core Oct 29 00:35:45.027083 systemd[1]: sshd@26-10.0.0.10:22-10.0.0.1:40758.service: Deactivated successfully. Oct 29 00:35:45.029582 systemd[1]: session-27.scope: Deactivated successfully. Oct 29 00:35:45.030914 systemd-logind[1587]: Session 27 logged out. Waiting for processes to exit. Oct 29 00:35:45.032736 systemd-logind[1587]: Removed session 27.