Nov 1 10:07:37.223220 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Sat Nov 1 08:12:41 -00 2025
Nov 1 10:07:37.223244 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=91cbcb3658f876d239d31cc29b206c4e950f20e536a8e14bd58a23c6f0ecf128
Nov 1 10:07:37.223253 kernel: BIOS-provided physical RAM map:
Nov 1 10:07:37.223264 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 1 10:07:37.223270 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 1 10:07:37.223277 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 1 10:07:37.223286 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 1 10:07:37.223293 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 1 10:07:37.223303 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 1 10:07:37.223310 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 1 10:07:37.223317 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Nov 1 10:07:37.223326 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 1 10:07:37.223333 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 1 10:07:37.223341 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 1 10:07:37.223349 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 1 10:07:37.223357 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 1 10:07:37.223370 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 1 10:07:37.223377 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 1 10:07:37.223385 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 1 10:07:37.223392 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 1 10:07:37.223399 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 1 10:07:37.223407 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 1 10:07:37.223414 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 1 10:07:37.223422 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 10:07:37.223429 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 1 10:07:37.223436 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 1 10:07:37.223446 kernel: NX (Execute Disable) protection: active
Nov 1 10:07:37.223453 kernel: APIC: Static calls initialized
Nov 1 10:07:37.223461 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Nov 1 10:07:37.223468 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Nov 1 10:07:37.223476 kernel: extended physical RAM map:
Nov 1 10:07:37.223483 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 1 10:07:37.223491 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 1 10:07:37.223498 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 1 10:07:37.223506 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 1 10:07:37.223513 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 1 10:07:37.223521 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 1 10:07:37.223530 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 1 10:07:37.223537 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Nov 1 10:07:37.223545 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Nov 1 10:07:37.223556 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Nov 1 10:07:37.223565 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Nov 1 10:07:37.223573 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Nov 1 10:07:37.223581 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 1 10:07:37.223589 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 1 10:07:37.223597 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 1 10:07:37.223604 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 1 10:07:37.223612 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 1 10:07:37.223620 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 1 10:07:37.223628 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 1 10:07:37.223637 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 1 10:07:37.223645 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 1 10:07:37.223653 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 1 10:07:37.223661 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 1 10:07:37.223669 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 1 10:07:37.223676 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 10:07:37.223684 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 1 10:07:37.223692 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 1 10:07:37.223702 kernel: efi: EFI v2.7 by EDK II
Nov 1 10:07:37.223710 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Nov 1 10:07:37.223718 kernel: random: crng init done
Nov 1 10:07:37.223730 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Nov 1 10:07:37.223739 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Nov 1 10:07:37.223756 kernel: secureboot: Secure boot disabled
Nov 1 10:07:37.223764 kernel: SMBIOS 2.8 present.
Nov 1 10:07:37.223772 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Nov 1 10:07:37.223780 kernel: DMI: Memory slots populated: 1/1
Nov 1 10:07:37.223787 kernel: Hypervisor detected: KVM
Nov 1 10:07:37.223795 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 1 10:07:37.223803 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 10:07:37.223811 kernel: kvm-clock: using sched offset of 4733034373 cycles
Nov 1 10:07:37.223819 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 10:07:37.223830 kernel: tsc: Detected 2794.748 MHz processor
Nov 1 10:07:37.223839 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 10:07:37.223847 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 10:07:37.223855 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 1 10:07:37.223864 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 1 10:07:37.223872 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 10:07:37.223880 kernel: Using GB pages for direct mapping
Nov 1 10:07:37.223890 kernel: ACPI: Early table checksum verification disabled
Nov 1 10:07:37.223899 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 1 10:07:37.223907 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 1 10:07:37.223915 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:07:37.223924 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:07:37.223932 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 1 10:07:37.223940 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:07:37.223950 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:07:37.223973 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:07:37.223981 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:07:37.223989 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 1 10:07:37.223997 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 1 10:07:37.224006 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Nov 1 10:07:37.224014 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 1 10:07:37.224024 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 1 10:07:37.224032 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 1 10:07:37.224040 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 1 10:07:37.224048 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 1 10:07:37.224056 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 1 10:07:37.224064 kernel: No NUMA configuration found
Nov 1 10:07:37.224072 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Nov 1 10:07:37.224080 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Nov 1 10:07:37.224090 kernel: Zone ranges:
Nov 1 10:07:37.224099 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 10:07:37.224106 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Nov 1 10:07:37.224114 kernel: Normal empty
Nov 1 10:07:37.224122 kernel: Device empty
Nov 1 10:07:37.224130 kernel: Movable zone start for each node
Nov 1 10:07:37.224138 kernel: Early memory node ranges
Nov 1 10:07:37.224146 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 1 10:07:37.224159 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 1 10:07:37.224167 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 1 10:07:37.224175 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Nov 1 10:07:37.224183 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Nov 1 10:07:37.224191 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Nov 1 10:07:37.224200 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Nov 1 10:07:37.224208 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Nov 1 10:07:37.224221 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Nov 1 10:07:37.224229 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 10:07:37.224244 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 1 10:07:37.224255 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 1 10:07:37.224263 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 10:07:37.224272 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Nov 1 10:07:37.224280 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Nov 1 10:07:37.224289 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 1 10:07:37.224297 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Nov 1 10:07:37.224306 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Nov 1 10:07:37.224317 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 10:07:37.224325 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 10:07:37.224334 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 10:07:37.224342 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 10:07:37.224353 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 10:07:37.224361 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 10:07:37.224370 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 10:07:37.224378 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 10:07:37.224386 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 10:07:37.224395 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 10:07:37.224403 kernel: TSC deadline timer available
Nov 1 10:07:37.224413 kernel: CPU topo: Max. logical packages: 1
Nov 1 10:07:37.224422 kernel: CPU topo: Max. logical dies: 1
Nov 1 10:07:37.224430 kernel: CPU topo: Max. dies per package: 1
Nov 1 10:07:37.224438 kernel: CPU topo: Max. threads per core: 1
Nov 1 10:07:37.224447 kernel: CPU topo: Num. cores per package: 4
Nov 1 10:07:37.224455 kernel: CPU topo: Num. threads per package: 4
Nov 1 10:07:37.224463 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 1 10:07:37.224473 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 10:07:37.224482 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 1 10:07:37.224490 kernel: kvm-guest: setup PV sched yield
Nov 1 10:07:37.224498 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Nov 1 10:07:37.224507 kernel: Booting paravirtualized kernel on KVM
Nov 1 10:07:37.224515 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 10:07:37.224524 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 1 10:07:37.224535 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 1 10:07:37.224543 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 1 10:07:37.224551 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 1 10:07:37.224560 kernel: kvm-guest: PV spinlocks enabled
Nov 1 10:07:37.224568 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 10:07:37.224580 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=91cbcb3658f876d239d31cc29b206c4e950f20e536a8e14bd58a23c6f0ecf128
Nov 1 10:07:37.224589 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 10:07:37.224600 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 10:07:37.224608 kernel: Fallback order for Node 0: 0
Nov 1 10:07:37.224616 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Nov 1 10:07:37.224625 kernel: Policy zone: DMA32
Nov 1 10:07:37.224633 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 10:07:37.224642 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 1 10:07:37.224653 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 1 10:07:37.224665 kernel: ftrace: allocated 157 pages with 5 groups
Nov 1 10:07:37.224675 kernel: Dynamic Preempt: voluntary
Nov 1 10:07:37.224683 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 10:07:37.224692 kernel: rcu: RCU event tracing is enabled.
Nov 1 10:07:37.224701 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 1 10:07:37.224709 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 10:07:37.224718 kernel: Rude variant of Tasks RCU enabled.
Nov 1 10:07:37.224729 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 10:07:37.224737 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 10:07:37.224753 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 1 10:07:37.224764 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 10:07:37.224772 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 10:07:37.224781 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 10:07:37.224790 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 1 10:07:37.224801 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 10:07:37.224809 kernel: Console: colour dummy device 80x25
Nov 1 10:07:37.224817 kernel: printk: legacy console [ttyS0] enabled
Nov 1 10:07:37.224826 kernel: ACPI: Core revision 20240827
Nov 1 10:07:37.224834 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 10:07:37.224843 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 10:07:37.224851 kernel: x2apic enabled
Nov 1 10:07:37.224859 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 10:07:37.224870 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 1 10:07:37.224879 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 1 10:07:37.224887 kernel: kvm-guest: setup PV IPIs
Nov 1 10:07:37.224895 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 10:07:37.224904 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 1 10:07:37.224913 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 1 10:07:37.224921 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 10:07:37.224932 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 1 10:07:37.224940 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 1 10:07:37.224949 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 10:07:37.224970 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 10:07:37.224979 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 10:07:37.224987 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 1 10:07:37.224996 kernel: active return thunk: retbleed_return_thunk
Nov 1 10:07:37.225007 kernel: RETBleed: Mitigation: untrained return thunk
Nov 1 10:07:37.225018 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 10:07:37.225026 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 10:07:37.225035 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 1 10:07:37.225044 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 1 10:07:37.225052 kernel: active return thunk: srso_return_thunk
Nov 1 10:07:37.225063 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 1 10:07:37.225071 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 10:07:37.225080 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 10:07:37.225088 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 10:07:37.225096 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 10:07:37.225105 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 1 10:07:37.225114 kernel: Freeing SMP alternatives memory: 32K
Nov 1 10:07:37.225124 kernel: pid_max: default: 32768 minimum: 301
Nov 1 10:07:37.225133 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 1 10:07:37.225141 kernel: landlock: Up and running.
Nov 1 10:07:37.225149 kernel: SELinux: Initializing.
Nov 1 10:07:37.225158 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 10:07:37.225166 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 10:07:37.225175 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 1 10:07:37.225185 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 1 10:07:37.225194 kernel: ... version: 0
Nov 1 10:07:37.225202 kernel: ... bit width: 48
Nov 1 10:07:37.225210 kernel: ... generic registers: 6
Nov 1 10:07:37.225218 kernel: ... value mask: 0000ffffffffffff
Nov 1 10:07:37.225227 kernel: ... max period: 00007fffffffffff
Nov 1 10:07:37.225235 kernel: ... fixed-purpose events: 0
Nov 1 10:07:37.225244 kernel: ... event mask: 000000000000003f
Nov 1 10:07:37.225254 kernel: signal: max sigframe size: 1776
Nov 1 10:07:37.225263 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 10:07:37.225271 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 10:07:37.225282 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 1 10:07:37.225290 kernel: smp: Bringing up secondary CPUs ...
Nov 1 10:07:37.225299 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 10:07:37.225307 kernel: .... node #0, CPUs: #1 #2 #3
Nov 1 10:07:37.225317 kernel: smp: Brought up 1 node, 4 CPUs
Nov 1 10:07:37.225326 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 1 10:07:37.225335 kernel: Memory: 2441096K/2565800K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15356K init, 2688K bss, 118764K reserved, 0K cma-reserved)
Nov 1 10:07:37.225343 kernel: devtmpfs: initialized
Nov 1 10:07:37.225351 kernel: x86/mm: Memory block size: 128MB
Nov 1 10:07:37.225360 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 1 10:07:37.225368 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 1 10:07:37.225379 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Nov 1 10:07:37.225388 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 1 10:07:37.225396 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Nov 1 10:07:37.225404 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 1 10:07:37.225413 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 10:07:37.225421 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 1 10:07:37.225430 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 10:07:37.225440 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 10:07:37.225449 kernel: audit: initializing netlink subsys (disabled)
Nov 1 10:07:37.225457 kernel: audit: type=2000 audit(1761991654.485:1): state=initialized audit_enabled=0 res=1
Nov 1 10:07:37.225465 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 10:07:37.225474 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 10:07:37.225482 kernel: cpuidle: using governor menu
Nov 1 10:07:37.225490 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 10:07:37.225501 kernel: dca service started, version 1.12.1
Nov 1 10:07:37.225510 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Nov 1 10:07:37.225518 kernel: PCI: Using configuration type 1 for base access
Nov 1 10:07:37.225526 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 10:07:37.225535 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 10:07:37.225543 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 10:07:37.225552 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 10:07:37.225563 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 10:07:37.225571 kernel: ACPI: Added _OSI(Module Device)
Nov 1 10:07:37.225579 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 10:07:37.225587 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 10:07:37.225596 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 10:07:37.225604 kernel: ACPI: Interpreter enabled
Nov 1 10:07:37.225612 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 1 10:07:37.225623 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 10:07:37.225631 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 10:07:37.225640 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 10:07:37.225648 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 10:07:37.225657 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 10:07:37.225915 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 10:07:37.226118 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 1 10:07:37.226301 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 1 10:07:37.226313 kernel: PCI host bridge to bus 0000:00
Nov 1 10:07:37.226502 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 10:07:37.226663 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 10:07:37.226840 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 10:07:37.227024 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Nov 1 10:07:37.227186 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Nov 1 10:07:37.227344 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Nov 1 10:07:37.227503 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 10:07:37.227703 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 1 10:07:37.227912 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 1 10:07:37.228159 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Nov 1 10:07:37.228364 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Nov 1 10:07:37.228547 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Nov 1 10:07:37.228731 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 10:07:37.228943 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 1 10:07:37.229175 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Nov 1 10:07:37.229356 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Nov 1 10:07:37.229541 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Nov 1 10:07:37.229731 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 1 10:07:37.229925 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Nov 1 10:07:37.230610 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Nov 1 10:07:37.230811 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Nov 1 10:07:37.231013 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 1 10:07:37.231192 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Nov 1 10:07:37.231366 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Nov 1 10:07:37.231540 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Nov 1 10:07:37.231714 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Nov 1 10:07:37.231913 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 1 10:07:37.232123 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 10:07:37.232324 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 1 10:07:37.232502 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Nov 1 10:07:37.232683 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Nov 1 10:07:37.232887 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 1 10:07:37.233086 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Nov 1 10:07:37.233099 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 10:07:37.233108 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 10:07:37.233117 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 10:07:37.233125 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 10:07:37.233138 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 10:07:37.233146 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 10:07:37.233155 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 10:07:37.233163 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 10:07:37.233172 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 10:07:37.233180 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 10:07:37.233189 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 10:07:37.233197 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 10:07:37.233207 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 10:07:37.233216 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 10:07:37.233224 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 10:07:37.233233 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 10:07:37.233241 kernel: iommu: Default domain type: Translated
Nov 1 10:07:37.233250 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 10:07:37.233258 kernel: efivars: Registered efivars operations
Nov 1 10:07:37.233270 kernel: PCI: Using ACPI for IRQ routing
Nov 1 10:07:37.233278 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 10:07:37.233287 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 1 10:07:37.233295 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Nov 1 10:07:37.233303 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Nov 1 10:07:37.233312 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Nov 1 10:07:37.233320 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Nov 1 10:07:37.233331 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Nov 1 10:07:37.233339 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Nov 1 10:07:37.233348 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Nov 1 10:07:37.233524 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 10:07:37.233698 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 10:07:37.233885 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 10:07:37.233900 kernel: vgaarb: loaded
Nov 1 10:07:37.233909 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 10:07:37.233918 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 10:07:37.233927 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 10:07:37.233935 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 10:07:37.233944 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 10:07:37.233973 kernel: pnp: PnP ACPI init
Nov 1 10:07:37.234184 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Nov 1 10:07:37.234200 kernel: pnp: PnP ACPI: found 6 devices
Nov 1 10:07:37.234209 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 10:07:37.234218 kernel: NET: Registered PF_INET protocol family
Nov 1 10:07:37.234227 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 10:07:37.234236 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 1 10:07:37.234246 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 10:07:37.234257 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 10:07:37.234266 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 1 10:07:37.234275 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 1 10:07:37.234284 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 10:07:37.234293 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 10:07:37.234302 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 10:07:37.234311 kernel: NET: Registered PF_XDP protocol family
Nov 1 10:07:37.234489 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Nov 1 10:07:37.234666 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Nov 1 10:07:37.234858 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 10:07:37.235047 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 10:07:37.235221 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 10:07:37.235385 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Nov 1 10:07:37.235561 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Nov 1 10:07:37.235727 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Nov 1 10:07:37.235739 kernel: PCI: CLS 0 bytes, default 64 Nov 1 10:07:37.235758 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Nov 1 10:07:37.235773 kernel: Initialise system trusted keyrings Nov 1 10:07:37.235782 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 1 10:07:37.235791 kernel: Key type asymmetric registered Nov 1 10:07:37.235800 kernel: Asymmetric key parser 'x509' registered Nov 1 10:07:37.235808 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 1 10:07:37.235817 kernel: io scheduler mq-deadline registered Nov 1 10:07:37.235829 kernel: io scheduler kyber registered Nov 1 10:07:37.235841 kernel: io scheduler bfq registered Nov 1 10:07:37.235850 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 10:07:37.235862 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 1 10:07:37.235872 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 1 10:07:37.235883 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 1 10:07:37.235892 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 10:07:37.235901 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 10:07:37.235910 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 1 10:07:37.235921 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 1 10:07:37.235930 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 1 10:07:37.235939 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 1 10:07:37.236148 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 1 10:07:37.236323 kernel: rtc_cmos 00:04: registered as rtc0 Nov 1 10:07:37.236492 kernel: rtc_cmos 00:04: setting system clock to 2025-11-01T10:07:35 UTC 
(1761991655) Nov 1 10:07:37.236673 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Nov 1 10:07:37.236686 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 1 10:07:37.236695 kernel: efifb: probing for efifb Nov 1 10:07:37.236704 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Nov 1 10:07:37.236713 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Nov 1 10:07:37.236721 kernel: efifb: scrolling: redraw Nov 1 10:07:37.236730 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 1 10:07:37.236742 kernel: Console: switching to colour frame buffer device 160x50 Nov 1 10:07:37.236761 kernel: fb0: EFI VGA frame buffer device Nov 1 10:07:37.236769 kernel: pstore: Using crash dump compression: deflate Nov 1 10:07:37.236779 kernel: pstore: Registered efi_pstore as persistent store backend Nov 1 10:07:37.236790 kernel: NET: Registered PF_INET6 protocol family Nov 1 10:07:37.236799 kernel: Segment Routing with IPv6 Nov 1 10:07:37.236808 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 10:07:37.236819 kernel: NET: Registered PF_PACKET protocol family Nov 1 10:07:37.236828 kernel: Key type dns_resolver registered Nov 1 10:07:37.236838 kernel: IPI shorthand broadcast: enabled Nov 1 10:07:37.236850 kernel: sched_clock: Marking stable (2323003295, 285536686)->(2667554997, -59015016) Nov 1 10:07:37.236859 kernel: registered taskstats version 1 Nov 1 10:07:37.236871 kernel: Loading compiled-in X.509 certificates Nov 1 10:07:37.236882 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: d8ad6d63e9d0f6e32055e659cacaf9092255a45e' Nov 1 10:07:37.236893 kernel: Demotion targets for Node 0: null Nov 1 10:07:37.236902 kernel: Key type .fscrypt registered Nov 1 10:07:37.236910 kernel: Key type fscrypt-provisioning registered Nov 1 10:07:37.236919 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 1 10:07:37.236928 kernel: ima: Allocated hash algorithm: sha1 Nov 1 10:07:37.236937 kernel: ima: No architecture policies found Nov 1 10:07:37.236946 kernel: clk: Disabling unused clocks Nov 1 10:07:37.236970 kernel: Freeing unused kernel image (initmem) memory: 15356K Nov 1 10:07:37.236982 kernel: Write protecting the kernel read-only data: 45056k Nov 1 10:07:37.236990 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Nov 1 10:07:37.236999 kernel: Run /init as init process Nov 1 10:07:37.237008 kernel: with arguments: Nov 1 10:07:37.237017 kernel: /init Nov 1 10:07:37.237026 kernel: with environment: Nov 1 10:07:37.237034 kernel: HOME=/ Nov 1 10:07:37.237045 kernel: TERM=linux Nov 1 10:07:37.237054 kernel: SCSI subsystem initialized Nov 1 10:07:37.237063 kernel: libata version 3.00 loaded. Nov 1 10:07:37.237248 kernel: ahci 0000:00:1f.2: version 3.0 Nov 1 10:07:37.237261 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 1 10:07:37.237454 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 1 10:07:37.237636 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 1 10:07:37.237844 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 1 10:07:37.238084 kernel: scsi host0: ahci Nov 1 10:07:37.238283 kernel: scsi host1: ahci Nov 1 10:07:37.238473 kernel: scsi host2: ahci Nov 1 10:07:37.238668 kernel: scsi host3: ahci Nov 1 10:07:37.238882 kernel: scsi host4: ahci Nov 1 10:07:37.239092 kernel: scsi host5: ahci Nov 1 10:07:37.239107 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Nov 1 10:07:37.239116 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Nov 1 10:07:37.239125 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Nov 1 10:07:37.239134 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Nov 1 10:07:37.239147 kernel: ata5: SATA max UDMA/133 abar 
m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Nov 1 10:07:37.239156 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Nov 1 10:07:37.239165 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 10:07:37.239174 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 10:07:37.239183 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 1 10:07:37.239192 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 1 10:07:37.239200 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 10:07:37.239211 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 1 10:07:37.239220 kernel: ata3.00: LPM support broken, forcing max_power Nov 1 10:07:37.239229 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 1 10:07:37.239238 kernel: ata3.00: applying bridge limits Nov 1 10:07:37.239246 kernel: ata3.00: LPM support broken, forcing max_power Nov 1 10:07:37.239255 kernel: ata3.00: configured for UDMA/100 Nov 1 10:07:37.239461 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 1 10:07:37.239654 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 1 10:07:37.239905 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 1 10:07:37.239919 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 10:07:37.239928 kernel: GPT:16515071 != 27000831 Nov 1 10:07:37.239937 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 10:07:37.239945 kernel: GPT:16515071 != 27000831 Nov 1 10:07:37.239977 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 1 10:07:37.239986 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 10:07:37.240183 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 1 10:07:37.240196 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 1 10:07:37.240391 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 1 10:07:37.240403 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 10:07:37.240412 kernel: device-mapper: uevent: version 1.0.3 Nov 1 10:07:37.240425 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 1 10:07:37.240434 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 1 10:07:37.240443 kernel: raid6: avx2x4 gen() 29078 MB/s Nov 1 10:07:37.240452 kernel: raid6: avx2x2 gen() 29468 MB/s Nov 1 10:07:37.240461 kernel: raid6: avx2x1 gen() 25541 MB/s Nov 1 10:07:37.240469 kernel: raid6: using algorithm avx2x2 gen() 29468 MB/s Nov 1 10:07:37.240479 kernel: raid6: .... 
xor() 19249 MB/s, rmw enabled Nov 1 10:07:37.240490 kernel: raid6: using avx2x2 recovery algorithm Nov 1 10:07:37.240498 kernel: xor: automatically using best checksumming function avx Nov 1 10:07:37.240508 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 10:07:37.240517 kernel: BTRFS: device fsid 8763e8a0-bf7f-4ffe-acc8-da149b03dd0b devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (181) Nov 1 10:07:37.240526 kernel: BTRFS info (device dm-0): first mount of filesystem 8763e8a0-bf7f-4ffe-acc8-da149b03dd0b Nov 1 10:07:37.240535 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 10:07:37.240544 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 10:07:37.240555 kernel: BTRFS info (device dm-0): enabling free space tree Nov 1 10:07:37.240564 kernel: loop: module loaded Nov 1 10:07:37.240573 kernel: loop0: detected capacity change from 0 to 100136 Nov 1 10:07:37.240582 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 10:07:37.240592 systemd[1]: Successfully made /usr/ read-only. Nov 1 10:07:37.240603 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 1 10:07:37.240616 systemd[1]: Detected virtualization kvm. Nov 1 10:07:37.240625 systemd[1]: Detected architecture x86-64. Nov 1 10:07:37.240634 systemd[1]: Running in initrd. Nov 1 10:07:37.240643 systemd[1]: No hostname configured, using default hostname. Nov 1 10:07:37.240653 systemd[1]: Hostname set to . Nov 1 10:07:37.240662 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 1 10:07:37.240675 systemd[1]: Queued start job for default target initrd.target. 
Nov 1 10:07:37.240686 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 1 10:07:37.240697 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 10:07:37.240707 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 10:07:37.240717 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 10:07:37.240727 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 10:07:37.240740 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 10:07:37.240758 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 10:07:37.240768 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 10:07:37.240778 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 10:07:37.240787 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 1 10:07:37.240797 systemd[1]: Reached target paths.target - Path Units. Nov 1 10:07:37.240806 systemd[1]: Reached target slices.target - Slice Units. Nov 1 10:07:37.240818 systemd[1]: Reached target swap.target - Swaps. Nov 1 10:07:37.240827 systemd[1]: Reached target timers.target - Timer Units. Nov 1 10:07:37.240837 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 10:07:37.240846 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 10:07:37.240855 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 10:07:37.240864 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 1 10:07:37.240876 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 1 10:07:37.240886 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 10:07:37.240895 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 10:07:37.240904 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 10:07:37.240913 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 10:07:37.240923 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 1 10:07:37.240932 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 10:07:37.240944 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 10:07:37.240971 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 1 10:07:37.240980 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 10:07:37.240990 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 10:07:37.241002 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 10:07:37.241011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 10:07:37.241023 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 10:07:37.241033 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 10:07:37.241042 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 10:07:37.241091 systemd-journald[317]: Collecting audit messages is disabled. Nov 1 10:07:37.241116 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 10:07:37.241126 systemd-journald[317]: Journal started Nov 1 10:07:37.241147 systemd-journald[317]: Runtime Journal (/run/log/journal/0e4b1e3611384470b3868616b4055954) is 6M, max 48.1M, 42M free. 
Nov 1 10:07:37.243979 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 10:07:37.245107 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 10:07:37.252987 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 10:07:37.253101 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 10:07:37.255677 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 10:07:37.278710 systemd-modules-load[320]: Inserted module 'br_netfilter' Nov 1 10:07:37.279518 kernel: Bridge firewalling registered Nov 1 10:07:37.282611 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 10:07:37.285148 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 10:07:37.295447 systemd-tmpfiles[331]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 1 10:07:37.300406 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 10:07:37.304289 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 10:07:37.308189 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 10:07:37.312134 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 10:07:37.315176 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 10:07:37.320366 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 10:07:37.344444 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 10:07:37.347088 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 1 10:07:37.375445 dracut-cmdline[363]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=91cbcb3658f876d239d31cc29b206c4e950f20e536a8e14bd58a23c6f0ecf128 Nov 1 10:07:37.378218 systemd-resolved[351]: Positive Trust Anchors: Nov 1 10:07:37.378231 systemd-resolved[351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 10:07:37.378235 systemd-resolved[351]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 1 10:07:37.378265 systemd-resolved[351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 10:07:37.400158 systemd-resolved[351]: Defaulting to hostname 'linux'. Nov 1 10:07:37.401583 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 10:07:37.403606 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 10:07:37.487994 kernel: Loading iSCSI transport class v2.0-870. 
Nov 1 10:07:37.503003 kernel: iscsi: registered transport (tcp) Nov 1 10:07:37.527607 kernel: iscsi: registered transport (qla4xxx) Nov 1 10:07:37.527694 kernel: QLogic iSCSI HBA Driver Nov 1 10:07:37.557016 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 10:07:37.585793 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 10:07:37.587497 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 1 10:07:37.650349 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 10:07:37.653732 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 10:07:37.655135 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 10:07:37.701225 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 10:07:37.703989 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 10:07:37.735162 systemd-udevd[597]: Using default interface naming scheme 'v257'. Nov 1 10:07:37.749338 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 10:07:37.755091 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 10:07:37.785013 dracut-pre-trigger[653]: rd.md=0: removing MD RAID activation Nov 1 10:07:37.806812 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 10:07:37.810720 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 10:07:37.828123 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 10:07:37.830230 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 1 10:07:37.877418 systemd-networkd[724]: lo: Link UP Nov 1 10:07:37.877427 systemd-networkd[724]: lo: Gained carrier Nov 1 10:07:37.878100 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 10:07:37.878751 systemd[1]: Reached target network.target - Network. Nov 1 10:07:37.917744 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 10:07:37.924695 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 1 10:07:37.966999 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 1 10:07:37.989787 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 1 10:07:38.021986 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 10:07:38.033486 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 10:07:38.046989 kernel: AES CTR mode by8 optimization enabled Nov 1 10:07:38.047018 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 1 10:07:38.047354 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 1 10:07:38.055120 systemd-networkd[724]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 1 10:07:38.055125 systemd-networkd[724]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 10:07:38.056866 systemd-networkd[724]: eth0: Link UP Nov 1 10:07:38.057493 systemd-networkd[724]: eth0: Gained carrier Nov 1 10:07:38.057502 systemd-networkd[724]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 1 10:07:38.061240 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 10:07:38.064011 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 1 10:07:38.064284 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 10:07:38.064935 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 10:07:38.073007 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 10:07:38.091040 systemd-networkd[724]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 10:07:38.095160 disk-uuid[812]: Primary Header is updated. Nov 1 10:07:38.095160 disk-uuid[812]: Secondary Entries is updated. Nov 1 10:07:38.095160 disk-uuid[812]: Secondary Header is updated. Nov 1 10:07:38.114186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 10:07:38.184023 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 10:07:38.188585 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 10:07:38.192484 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 10:07:38.196144 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 10:07:38.200805 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 10:07:38.235803 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 10:07:39.141880 disk-uuid[825]: Warning: The kernel is still using the old partition table. Nov 1 10:07:39.141880 disk-uuid[825]: The new table will be used at the next reboot or after you Nov 1 10:07:39.141880 disk-uuid[825]: run partprobe(8) or kpartx(8) Nov 1 10:07:39.141880 disk-uuid[825]: The operation has completed successfully. Nov 1 10:07:39.152225 systemd-networkd[724]: eth0: Gained IPv6LL Nov 1 10:07:39.162489 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 10:07:39.162665 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 10:07:39.168105 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Nov 1 10:07:39.201990 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (863) Nov 1 10:07:39.205103 kernel: BTRFS info (device vda6): first mount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517 Nov 1 10:07:39.205121 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 10:07:39.208906 kernel: BTRFS info (device vda6): turning on async discard Nov 1 10:07:39.208934 kernel: BTRFS info (device vda6): enabling free space tree Nov 1 10:07:39.216978 kernel: BTRFS info (device vda6): last unmount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517 Nov 1 10:07:39.218062 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 10:07:39.220939 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 10:07:39.507782 ignition[882]: Ignition 2.22.0 Nov 1 10:07:39.507800 ignition[882]: Stage: fetch-offline Nov 1 10:07:39.507873 ignition[882]: no configs at "/usr/lib/ignition/base.d" Nov 1 10:07:39.507887 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 10:07:39.508055 ignition[882]: parsed url from cmdline: "" Nov 1 10:07:39.508060 ignition[882]: no config URL provided Nov 1 10:07:39.508068 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 10:07:39.508080 ignition[882]: no config at "/usr/lib/ignition/user.ign" Nov 1 10:07:39.508141 ignition[882]: op(1): [started] loading QEMU firmware config module Nov 1 10:07:39.508147 ignition[882]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 1 10:07:39.519276 ignition[882]: op(1): [finished] loading QEMU firmware config module Nov 1 10:07:39.609110 ignition[882]: parsing config with SHA512: 880c5f881bff159006fb3903a7104a731cd21cde4994b54212d57ed6a5e87441da9c9658aa7c8a3ab56bc8be82ddf1677111087c1bc37c904958fbf891980663 Nov 1 10:07:39.617248 unknown[882]: fetched base config from "system" Nov 1 10:07:39.617752 ignition[882]: fetch-offline: fetch-offline passed Nov 1 
10:07:39.617270 unknown[882]: fetched user config from "qemu" Nov 1 10:07:39.617843 ignition[882]: Ignition finished successfully Nov 1 10:07:39.621426 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 10:07:39.624296 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 10:07:39.625792 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 1 10:07:39.694869 ignition[893]: Ignition 2.22.0 Nov 1 10:07:39.694884 ignition[893]: Stage: kargs Nov 1 10:07:39.695051 ignition[893]: no configs at "/usr/lib/ignition/base.d" Nov 1 10:07:39.695061 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 10:07:39.695993 ignition[893]: kargs: kargs passed Nov 1 10:07:39.701542 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 10:07:39.696045 ignition[893]: Ignition finished successfully Nov 1 10:07:39.704575 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 10:07:39.809843 ignition[901]: Ignition 2.22.0 Nov 1 10:07:39.809856 ignition[901]: Stage: disks Nov 1 10:07:39.810035 ignition[901]: no configs at "/usr/lib/ignition/base.d" Nov 1 10:07:39.810046 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 10:07:39.813912 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 1 10:07:39.810732 ignition[901]: disks: disks passed Nov 1 10:07:39.817656 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 10:07:39.810784 ignition[901]: Ignition finished successfully Nov 1 10:07:39.820826 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 10:07:39.822832 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 10:07:39.825446 systemd[1]: Reached target sysinit.target - System Initialization. 
Nov 1 10:07:39.826387 systemd[1]: Reached target basic.target - Basic System. Nov 1 10:07:39.828129 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 1 10:07:39.869188 systemd-fsck[912]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 1 10:07:39.877315 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 10:07:39.882299 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 1 10:07:40.097001 kernel: EXT4-fs (vda9): mounted filesystem 9a0b584a-8c68-48a6-a0f9-92613ad0f15d r/w with ordered data mode. Quota mode: none. Nov 1 10:07:40.097842 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 10:07:40.099144 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 10:07:40.102788 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 10:07:40.107073 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 1 10:07:40.107983 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 1 10:07:40.108024 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 10:07:40.108048 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 10:07:40.129817 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Nov 1 10:07:40.137060 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (920) Nov 1 10:07:40.137207 kernel: BTRFS info (device vda6): first mount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517 Nov 1 10:07:40.137224 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 10:07:40.137236 kernel: BTRFS info (device vda6): turning on async discard Nov 1 10:07:40.137248 kernel: BTRFS info (device vda6): enabling free space tree Nov 1 10:07:40.135717 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 1 10:07:40.139993 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 10:07:40.193724 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 10:07:40.198605 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory Nov 1 10:07:40.202881 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 10:07:40.209209 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 10:07:40.307590 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 10:07:40.310949 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 1 10:07:40.313542 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 10:07:40.330749 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 1 10:07:40.333388 kernel: BTRFS info (device vda6): last unmount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517 Nov 1 10:07:40.352095 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 1 10:07:40.383355 ignition[1033]: INFO : Ignition 2.22.0 Nov 1 10:07:40.383355 ignition[1033]: INFO : Stage: mount Nov 1 10:07:40.386255 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 10:07:40.386255 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 10:07:40.390364 ignition[1033]: INFO : mount: mount passed Nov 1 10:07:40.391673 ignition[1033]: INFO : Ignition finished successfully Nov 1 10:07:40.395740 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 10:07:40.397377 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 10:07:41.100042 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 10:07:41.132166 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1046) Nov 1 10:07:41.132232 kernel: BTRFS info (device vda6): first mount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517 Nov 1 10:07:41.132245 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 10:07:41.137335 kernel: BTRFS info (device vda6): turning on async discard Nov 1 10:07:41.137381 kernel: BTRFS info (device vda6): enabling free space tree Nov 1 10:07:41.139168 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 1 10:07:41.173766 ignition[1063]: INFO : Ignition 2.22.0
Nov 1 10:07:41.173766 ignition[1063]: INFO : Stage: files
Nov 1 10:07:41.176488 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 10:07:41.176488 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 10:07:41.176488 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 10:07:41.181970 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 10:07:41.181970 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 10:07:41.188952 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 10:07:41.191334 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 10:07:41.193578 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 10:07:41.192045 unknown[1063]: wrote ssh authorized keys file for user: core
Nov 1 10:07:41.197472 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 10:07:41.197472 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 1 10:07:41.237354 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 10:07:41.298973 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 10:07:41.302041 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 10:07:41.304929 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 10:07:41.304929 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 10:07:41.304929 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 10:07:41.304929 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 10:07:41.304929 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 10:07:41.304929 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 10:07:41.304929 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 10:07:41.324300 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 10:07:41.324300 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 10:07:41.324300 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 10:07:41.324300 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 10:07:41.324300 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 10:07:41.324300 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 1 10:07:41.781981 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 1 10:07:42.267168 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 10:07:42.267168 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 1 10:07:42.273664 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 10:07:42.273664 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 10:07:42.273664 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 1 10:07:42.273664 ignition[1063]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 1 10:07:42.273664 ignition[1063]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 1 10:07:42.273664 ignition[1063]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 1 10:07:42.273664 ignition[1063]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 1 10:07:42.273664 ignition[1063]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 1 10:07:42.303714 ignition[1063]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 1 10:07:42.308750 ignition[1063]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 1 10:07:42.311457 ignition[1063]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 1 10:07:42.311457 ignition[1063]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 10:07:42.315893 ignition[1063]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 10:07:42.315893 ignition[1063]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 10:07:42.315893 ignition[1063]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 10:07:42.315893 ignition[1063]: INFO : files: files passed
Nov 1 10:07:42.315893 ignition[1063]: INFO : Ignition finished successfully
Nov 1 10:07:42.326513 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 1 10:07:42.330000 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 1 10:07:42.331465 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 1 10:07:42.357633 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 10:07:42.357782 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 1 10:07:42.365669 initrd-setup-root-after-ignition[1094]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 1 10:07:42.370569 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 10:07:42.373109 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 10:07:42.376732 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 10:07:42.379523 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 10:07:42.384209 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 1 10:07:42.388599 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 1 10:07:42.439449 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 10:07:42.439583 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 1 10:07:42.440908 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 1 10:07:42.441264 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 1 10:07:42.448794 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 1 10:07:42.451048 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 1 10:07:42.492039 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 10:07:42.494440 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 1 10:07:42.518260 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 10:07:42.518504 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 1 10:07:42.519322 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 10:07:42.524560 systemd[1]: Stopped target timers.target - Timer Units.
Nov 1 10:07:42.525349 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 10:07:42.525464 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 10:07:42.533102 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 1 10:07:42.533948 systemd[1]: Stopped target basic.target - Basic System.
Nov 1 10:07:42.538403 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 1 10:07:42.538923 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 10:07:42.544491 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 1 10:07:42.547689 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 1 10:07:42.551074 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 1 10:07:42.554409 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 10:07:42.554938 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 1 10:07:42.561399 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 1 10:07:42.564456 systemd[1]: Stopped target swap.target - Swaps.
Nov 1 10:07:42.567421 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 10:07:42.567557 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 10:07:42.572011 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 1 10:07:42.572890 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 10:07:42.577547 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 1 10:07:42.577827 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 10:07:42.580849 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 10:07:42.580979 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 1 10:07:42.587235 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 10:07:42.587359 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 10:07:42.588437 systemd[1]: Stopped target paths.target - Path Units.
Nov 1 10:07:42.592509 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 10:07:42.598317 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 10:07:42.598948 systemd[1]: Stopped target slices.target - Slice Units.
Nov 1 10:07:42.605164 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 1 10:07:42.606039 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 10:07:42.606133 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 10:07:42.610846 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 10:07:42.610930 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 10:07:42.613826 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 10:07:42.613969 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 10:07:42.614692 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 10:07:42.614802 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 1 10:07:42.616370 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 1 10:07:42.626283 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 1 10:07:42.628951 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 1 10:07:42.629105 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 10:07:42.629904 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 10:07:42.630027 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 10:07:42.630440 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 10:07:42.630538 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 10:07:42.645239 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 10:07:42.645386 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 1 10:07:42.670381 ignition[1120]: INFO : Ignition 2.22.0
Nov 1 10:07:42.670381 ignition[1120]: INFO : Stage: umount
Nov 1 10:07:42.672848 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 10:07:42.672848 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 10:07:42.672848 ignition[1120]: INFO : umount: umount passed
Nov 1 10:07:42.672848 ignition[1120]: INFO : Ignition finished successfully
Nov 1 10:07:42.676032 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 10:07:42.676697 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 10:07:42.676831 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 1 10:07:42.679566 systemd[1]: Stopped target network.target - Network.
Nov 1 10:07:42.682354 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 10:07:42.682412 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 1 10:07:42.683392 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 10:07:42.683451 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 1 10:07:42.683908 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 10:07:42.683971 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 1 10:07:42.688776 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 1 10:07:42.688828 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 1 10:07:42.691789 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 1 10:07:42.694641 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 1 10:07:42.709974 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 10:07:42.710157 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 1 10:07:42.719640 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 10:07:42.719796 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 1 10:07:42.726080 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 1 10:07:42.726863 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 10:07:42.726935 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 10:07:42.733856 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 1 10:07:42.737097 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 10:07:42.737168 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 10:07:42.739342 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 10:07:42.739395 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 1 10:07:42.741404 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 10:07:42.741480 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 1 10:07:42.744696 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 10:07:42.749141 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 10:07:42.756121 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 1 10:07:42.757258 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 10:07:42.757370 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 1 10:07:42.780000 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 10:07:42.780199 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 10:07:42.781207 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 10:07:42.781256 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 1 10:07:42.785829 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 10:07:42.785870 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 10:07:42.788940 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 10:07:42.789071 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 10:07:42.794626 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 10:07:42.794678 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 1 10:07:42.798233 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 10:07:42.798287 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 10:07:42.807414 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 1 10:07:42.807811 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 1 10:07:42.807864 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 10:07:42.812837 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 10:07:42.812890 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 10:07:42.813651 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 1 10:07:42.813701 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 10:07:42.820482 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 10:07:42.820536 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 10:07:42.823679 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 10:07:42.823733 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 10:07:42.828265 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 10:07:42.828394 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 1 10:07:42.831149 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 10:07:42.831255 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 1 10:07:42.835465 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 1 10:07:42.838805 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 1 10:07:42.866459 systemd[1]: Switching root.
Nov 1 10:07:42.917214 systemd-journald[317]: Journal stopped
Nov 1 10:07:44.351122 systemd-journald[317]: Received SIGTERM from PID 1 (systemd).
Nov 1 10:07:44.351315 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 10:07:44.351331 kernel: SELinux: policy capability open_perms=1
Nov 1 10:07:44.351343 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 10:07:44.351355 kernel: SELinux: policy capability always_check_network=0
Nov 1 10:07:44.351375 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 10:07:44.351392 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 10:07:44.351404 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 10:07:44.351416 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 10:07:44.351435 kernel: SELinux: policy capability userspace_initial_context=0
Nov 1 10:07:44.351448 kernel: audit: type=1403 audit(1761991663.390:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 10:07:44.351468 systemd[1]: Successfully loaded SELinux policy in 67.008ms.
Nov 1 10:07:44.351497 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.744ms.
Nov 1 10:07:44.351511 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 1 10:07:44.351525 systemd[1]: Detected virtualization kvm.
Nov 1 10:07:44.351538 systemd[1]: Detected architecture x86-64.
Nov 1 10:07:44.351561 systemd[1]: Detected first boot.
Nov 1 10:07:44.351575 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 1 10:07:44.351588 zram_generator::config[1165]: No configuration found.
Nov 1 10:07:44.351611 kernel: Guest personality initialized and is inactive
Nov 1 10:07:44.351623 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 1 10:07:44.351635 kernel: Initialized host personality
Nov 1 10:07:44.351650 kernel: NET: Registered PF_VSOCK protocol family
Nov 1 10:07:44.351663 systemd[1]: Populated /etc with preset unit settings.
Nov 1 10:07:44.351676 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 1 10:07:44.351689 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 1 10:07:44.351710 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 1 10:07:44.351723 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 1 10:07:44.351737 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 1 10:07:44.351750 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 1 10:07:44.351762 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 1 10:07:44.351779 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 1 10:07:44.351792 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 1 10:07:44.351814 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 1 10:07:44.351827 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 1 10:07:44.351841 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 10:07:44.351854 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 10:07:44.351866 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 1 10:07:44.351883 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 1 10:07:44.351896 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 1 10:07:44.351918 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 10:07:44.351938 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 1 10:07:44.351952 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 10:07:44.352014 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 10:07:44.352027 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 1 10:07:44.352040 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 1 10:07:44.352063 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 1 10:07:44.352076 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 1 10:07:44.352091 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 10:07:44.352106 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 10:07:44.352120 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 10:07:44.352132 systemd[1]: Reached target swap.target - Swaps.
Nov 1 10:07:44.352145 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 1 10:07:44.352167 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 1 10:07:44.352180 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 1 10:07:44.352193 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 10:07:44.352210 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 10:07:44.352223 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 10:07:44.352236 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 1 10:07:44.352249 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 1 10:07:44.352263 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 1 10:07:44.352286 systemd[1]: Mounting media.mount - External Media Directory...
Nov 1 10:07:44.352299 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:07:44.352315 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 1 10:07:44.352328 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 1 10:07:44.352343 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 1 10:07:44.352356 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 10:07:44.352377 systemd[1]: Reached target machines.target - Containers.
Nov 1 10:07:44.352390 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 1 10:07:44.352403 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 10:07:44.352415 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 10:07:44.352431 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 1 10:07:44.352445 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 10:07:44.352458 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 10:07:44.352479 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 10:07:44.352492 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 1 10:07:44.352505 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 10:07:44.352518 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 10:07:44.352531 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 1 10:07:44.352543 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 1 10:07:44.352565 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 1 10:07:44.352587 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 1 10:07:44.352604 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 1 10:07:44.352617 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 10:07:44.352633 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 10:07:44.352645 kernel: ACPI: bus type drm_connector registered
Nov 1 10:07:44.352663 kernel: fuse: init (API version 7.41)
Nov 1 10:07:44.352675 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 1 10:07:44.352699 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 1 10:07:44.352712 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 1 10:07:44.352725 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 10:07:44.352745 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:07:44.352765 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 1 10:07:44.352798 systemd-journald[1250]: Collecting audit messages is disabled.
Nov 1 10:07:44.352825 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 1 10:07:44.352838 systemd-journald[1250]: Journal started
Nov 1 10:07:44.352868 systemd-journald[1250]: Runtime Journal (/run/log/journal/0e4b1e3611384470b3868616b4055954) is 6M, max 48.1M, 42M free.
Nov 1 10:07:44.352907 systemd[1]: Mounted media.mount - External Media Directory.
Nov 1 10:07:44.028326 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 10:07:44.051057 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 1 10:07:44.051610 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 1 10:07:44.358207 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 10:07:44.360485 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 1 10:07:44.362369 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 1 10:07:44.364294 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 1 10:07:44.366236 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 1 10:07:44.368480 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 10:07:44.370805 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 10:07:44.371074 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 1 10:07:44.373326 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 10:07:44.373569 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 10:07:44.375724 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 10:07:44.375954 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 10:07:44.378026 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 10:07:44.378253 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 10:07:44.380532 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 10:07:44.380768 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 1 10:07:44.382829 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 10:07:44.383184 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 10:07:44.385323 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 10:07:44.387656 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 10:07:44.390976 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 1 10:07:44.393489 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 1 10:07:44.411899 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 1 10:07:44.414182 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 1 10:07:44.417528 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 1 10:07:44.420408 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Nov 1 10:07:44.422312 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 10:07:44.422343 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 10:07:44.424946 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 1 10:07:44.427485 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 10:07:44.436111 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 1 10:07:44.439269 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 1 10:07:44.441382 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 10:07:44.444192 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 1 10:07:44.447107 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 10:07:44.448648 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 10:07:44.449588 systemd-journald[1250]: Time spent on flushing to /var/log/journal/0e4b1e3611384470b3868616b4055954 is 19.949ms for 1049 entries.
Nov 1 10:07:44.449588 systemd-journald[1250]: System Journal (/var/log/journal/0e4b1e3611384470b3868616b4055954) is 8M, max 163.5M, 155.5M free.
Nov 1 10:07:44.490300 systemd-journald[1250]: Received client request to flush runtime journal.
Nov 1 10:07:44.490348 kernel: loop1: detected capacity change from 0 to 219144
Nov 1 10:07:44.455091 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 1 10:07:44.458199 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 10:07:44.462426 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 10:07:44.464682 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 1 10:07:44.466728 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 1 10:07:44.469230 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 1 10:07:44.476794 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 1 10:07:44.481162 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 1 10:07:44.496436 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 10:07:44.499135 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Nov 1 10:07:44.499148 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Nov 1 10:07:44.499856 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 1 10:07:44.504028 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 10:07:44.508921 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 1 10:07:44.526169 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 1 10:07:44.528588 kernel: loop2: detected capacity change from 0 to 119080
Nov 1 10:07:44.552164 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 1 10:07:44.556528 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 10:07:44.561006 kernel: loop3: detected capacity change from 0 to 111544
Nov 1 10:07:44.561090 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 10:07:44.576721 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 1 10:07:44.586191 systemd-tmpfiles[1306]: ACLs are not supported, ignoring.
Nov 1 10:07:44.586214 systemd-tmpfiles[1306]: ACLs are not supported, ignoring.
Nov 1 10:07:44.592711 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 10:07:44.598983 kernel: loop4: detected capacity change from 0 to 219144
Nov 1 10:07:44.606994 kernel: loop5: detected capacity change from 0 to 119080
Nov 1 10:07:44.616982 kernel: loop6: detected capacity change from 0 to 111544
Nov 1 10:07:44.624012 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 1 10:07:44.631171 (sd-merge)[1311]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Nov 1 10:07:44.635689 (sd-merge)[1311]: Merged extensions into '/usr'.
Nov 1 10:07:44.641325 systemd[1]: Reload requested from client PID 1284 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 1 10:07:44.641343 systemd[1]: Reloading...
Nov 1 10:07:44.714987 zram_generator::config[1351]: No configuration found.
Nov 1 10:07:44.718545 systemd-resolved[1305]: Positive Trust Anchors:
Nov 1 10:07:44.718567 systemd-resolved[1305]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 10:07:44.718572 systemd-resolved[1305]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 1 10:07:44.718605 systemd-resolved[1305]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 10:07:44.723037 systemd-resolved[1305]: Defaulting to hostname 'linux'.
Nov 1 10:07:44.910795 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 1 10:07:44.911481 systemd[1]: Reloading finished in 269 ms.
Nov 1 10:07:44.951330 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 10:07:44.953645 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 1 10:07:44.958490 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 10:07:44.971318 systemd[1]: Starting ensure-sysext.service...
Nov 1 10:07:44.973740 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 10:07:44.986163 systemd[1]: Reload requested from client PID 1381 ('systemctl') (unit ensure-sysext.service)...
Nov 1 10:07:44.986183 systemd[1]: Reloading...
Nov 1 10:07:44.994115 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 1 10:07:44.994157 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 1 10:07:44.994478 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 1 10:07:44.994775 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 1 10:07:44.995733 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 1 10:07:44.996178 systemd-tmpfiles[1382]: ACLs are not supported, ignoring.
Nov 1 10:07:44.996256 systemd-tmpfiles[1382]: ACLs are not supported, ignoring.
Nov 1 10:07:45.002947 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 10:07:45.003067 systemd-tmpfiles[1382]: Skipping /boot
Nov 1 10:07:45.016021 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 10:07:45.016035 systemd-tmpfiles[1382]: Skipping /boot
Nov 1 10:07:45.054071 zram_generator::config[1415]: No configuration found.
Nov 1 10:07:45.231345 systemd[1]: Reloading finished in 244 ms.
Nov 1 10:07:45.253860 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 1 10:07:45.273469 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 10:07:45.285391 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 1 10:07:45.288273 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 1 10:07:45.291340 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 1 10:07:45.297408 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 1 10:07:45.301363 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 10:07:45.305556 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 1 10:07:45.311324 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:07:45.311499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 10:07:45.313172 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 10:07:45.316604 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 10:07:45.319577 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 10:07:45.321418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 10:07:45.321538 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 1 10:07:45.321649 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:07:45.328493 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:07:45.329219 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 10:07:45.329394 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 10:07:45.329484 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 1 10:07:45.329578 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:07:45.338592 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 1 10:07:45.341446 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:07:45.342062 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 10:07:45.344644 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 10:07:45.346353 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 10:07:45.346463 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 1 10:07:45.346595 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 10:07:45.355592 systemd[1]: Finished ensure-sysext.service.
Nov 1 10:07:45.357785 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 10:07:45.358139 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 10:07:45.368649 systemd-udevd[1455]: Using default interface naming scheme 'v257'.
Nov 1 10:07:45.372225 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 1 10:07:45.375927 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 1 10:07:45.378879 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 10:07:45.380985 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 10:07:45.384495 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 10:07:45.384929 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 10:07:45.387994 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 10:07:45.388270 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 10:07:45.393728 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 10:07:45.393796 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 10:07:45.405303 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 1 10:07:45.406256 augenrules[1490]: No rules
Nov 1 10:07:45.407808 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 1 10:07:45.408122 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 1 10:07:45.412701 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 10:07:45.417511 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 10:07:45.425264 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 10:07:45.483185 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 1 10:07:45.485723 systemd[1]: Reached target time-set.target - System Time Set.
Nov 1 10:07:45.518614 systemd-networkd[1502]: lo: Link UP
Nov 1 10:07:45.518635 systemd-networkd[1502]: lo: Gained carrier
Nov 1 10:07:45.520326 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 10:07:45.522206 systemd[1]: Reached target network.target - Network.
Nov 1 10:07:45.526323 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 1 10:07:45.531100 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 1 10:07:45.552613 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 1 10:07:45.619585 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 1 10:07:45.623270 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 1 10:07:45.635105 kernel: mousedev: PS/2 mouse device common for all mice
Nov 1 10:07:45.639979 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 1 10:07:45.647369 systemd-networkd[1502]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 1 10:07:45.647382 systemd-networkd[1502]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 10:07:45.648019 systemd-networkd[1502]: eth0: Link UP
Nov 1 10:07:45.648218 systemd-networkd[1502]: eth0: Gained carrier
Nov 1 10:07:45.648232 systemd-networkd[1502]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 1 10:07:45.653000 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 1 10:07:45.660733 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 1 10:07:45.661131 systemd-networkd[1502]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 1 10:07:45.663088 kernel: ACPI: button: Power Button [PWRF]
Nov 1 10:07:45.661983 systemd-timesyncd[1474]: Network configuration changed, trying to establish connection.
Nov 1 10:07:46.762955 systemd-timesyncd[1474]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 1 10:07:46.763056 systemd-timesyncd[1474]: Initial clock synchronization to Sat 2025-11-01 10:07:46.762833 UTC.
Nov 1 10:07:46.763439 systemd-resolved[1305]: Clock change detected. Flushing caches.
Nov 1 10:07:46.776970 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Nov 1 10:07:46.777361 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 1 10:07:46.780057 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 1 10:07:46.955084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 10:07:47.016232 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 10:07:47.016569 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 10:07:47.025976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 10:07:47.079741 kernel: kvm_amd: TSC scaling supported
Nov 1 10:07:47.079830 kernel: kvm_amd: Nested Virtualization enabled
Nov 1 10:07:47.079845 kernel: kvm_amd: Nested Paging enabled
Nov 1 10:07:47.079858 kernel: kvm_amd: LBR virtualization supported
Nov 1 10:07:47.082967 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 1 10:07:47.082997 kernel: kvm_amd: Virtual GIF supported
Nov 1 10:07:47.113734 ldconfig[1453]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 1 10:07:47.117732 kernel: EDAC MC: Ver: 3.0.0
Nov 1 10:07:47.121340 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 1 10:07:47.124557 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 1 10:07:47.142349 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 10:07:47.153521 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 1 10:07:47.155617 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 10:07:47.157421 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 1 10:07:47.159411 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 1 10:07:47.161389 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 1 10:07:47.163417 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 1 10:07:47.165489 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 1 10:07:47.167577 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 1 10:07:47.169586 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 1 10:07:47.169628 systemd[1]: Reached target paths.target - Path Units.
Nov 1 10:07:47.171086 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 10:07:47.173674 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 1 10:07:47.177358 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 1 10:07:47.181307 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 1 10:07:47.183481 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 1 10:07:47.185515 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 1 10:07:47.191368 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 1 10:07:47.193380 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 1 10:07:47.195875 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 1 10:07:47.198299 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 10:07:47.199836 systemd[1]: Reached target basic.target - Basic System.
Nov 1 10:07:47.201376 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 1 10:07:47.201409 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 1 10:07:47.202496 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 1 10:07:47.205274 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 1 10:07:47.207822 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 1 10:07:47.210742 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 1 10:07:47.213404 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 1 10:07:47.215058 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 1 10:07:47.221883 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 1 10:07:47.225835 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 1 10:07:47.227474 jq[1571]: false
Nov 1 10:07:47.228738 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 1 10:07:47.232859 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 1 10:07:47.236905 extend-filesystems[1572]: Found /dev/vda6
Nov 1 10:07:47.236665 oslogin_cache_refresh[1573]: Refreshing passwd entry cache
Nov 1 10:07:47.238685 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Refreshing passwd entry cache
Nov 1 10:07:47.239192 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 1 10:07:47.241244 extend-filesystems[1572]: Found /dev/vda9
Nov 1 10:07:47.244396 extend-filesystems[1572]: Checking size of /dev/vda9
Nov 1 10:07:47.249107 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Failure getting users, quitting
Nov 1 10:07:47.249107 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 1 10:07:47.249107 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Refreshing group entry cache
Nov 1 10:07:47.248147 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 1 10:07:47.246878 oslogin_cache_refresh[1573]: Failure getting users, quitting
Nov 1 10:07:47.246903 oslogin_cache_refresh[1573]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 1 10:07:47.249887 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 1 10:07:47.246963 oslogin_cache_refresh[1573]: Refreshing group entry cache
Nov 1 10:07:47.250511 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 1 10:07:47.251920 systemd[1]: Starting update-engine.service - Update Engine...
Nov 1 10:07:47.254525 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Failure getting groups, quitting
Nov 1 10:07:47.254525 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 1 10:07:47.253572 oslogin_cache_refresh[1573]: Failure getting groups, quitting
Nov 1 10:07:47.253584 oslogin_cache_refresh[1573]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 1 10:07:47.254877 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 1 10:07:47.259476 extend-filesystems[1572]: Resized partition /dev/vda9
Nov 1 10:07:47.333733 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Nov 1 10:07:47.265912 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 1 10:07:47.333959 extend-filesystems[1597]: resize2fs 1.47.3 (8-Jul-2025)
Nov 1 10:07:47.338047 update_engine[1590]: I20251101 10:07:47.300916 1590 main.cc:92] Flatcar Update Engine starting
Nov 1 10:07:47.268402 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 10:07:47.268660 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 1 10:07:47.269042 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 1 10:07:47.269301 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 1 10:07:47.338793 tar[1600]: linux-amd64/LICENSE
Nov 1 10:07:47.338793 tar[1600]: linux-amd64/helm
Nov 1 10:07:47.272103 systemd[1]: motdgen.service: Deactivated successfully.
Nov 1 10:07:47.272355 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 1 10:07:47.278238 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 1 10:07:47.278505 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 1 10:07:47.361583 dbus-daemon[1569]: [system] SELinux support is enabled
Nov 1 10:07:47.361834 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 1 10:07:47.367719 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Nov 1 10:07:47.370416 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 10:07:47.370452 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 1 10:07:47.372628 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 10:07:47.393506 update_engine[1590]: I20251101 10:07:47.373965 1590 update_check_scheduler.cc:74] Next update check in 2m21s
Nov 1 10:07:47.372645 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 1 10:07:47.381637 systemd[1]: Started update-engine.service - Update Engine.
Nov 1 10:07:47.388844 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 1 10:07:47.416312 extend-filesystems[1597]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 1 10:07:47.416312 extend-filesystems[1597]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 1 10:07:47.416312 extend-filesystems[1597]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Nov 1 10:07:47.408552 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 1 10:07:47.423079 extend-filesystems[1572]: Resized filesystem in /dev/vda9
Nov 1 10:07:47.424646 jq[1592]: true
Nov 1 10:07:47.429972 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 1 10:07:47.574802 jq[1623]: true
Nov 1 10:07:47.592657 locksmithd[1618]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 1 10:07:47.598361 systemd-logind[1585]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 1 10:07:47.598645 systemd-logind[1585]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 1 10:07:47.599796 systemd-logind[1585]: New seat seat0.
Nov 1 10:07:47.602164 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 1 10:07:47.641980 bash[1650]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 10:07:47.644309 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 1 10:07:47.652151 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 1 10:07:47.739722 containerd[1601]: time="2025-11-01T10:07:47Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 1 10:07:47.740712 containerd[1601]: time="2025-11-01T10:07:47.740640205Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4
Nov 1 10:07:47.742432 sshd_keygen[1599]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 1 10:07:47.758724 containerd[1601]: time="2025-11-01T10:07:47.757173180Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.523µs"
Nov 1 10:07:47.758724 containerd[1601]: time="2025-11-01T10:07:47.757221611Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 1 10:07:47.758724 containerd[1601]: time="2025-11-01T10:07:47.757276414Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 1 10:07:47.758724 containerd[1601]: time="2025-11-01T10:07:47.757295860Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 1 10:07:47.758724 containerd[1601]: time="2025-11-01T10:07:47.757482871Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 1 10:07:47.758724 containerd[1601]: time="2025-11-01T10:07:47.757500745Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 1 10:07:47.758724 containerd[1601]: time="2025-11-01T10:07:47.757567981Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 1 10:07:47.758724 containerd[1601]: time="2025-11-01T10:07:47.757579041Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 1 10:07:47.758724 containerd[1601]: time="2025-11-01T10:07:47.757860369Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 1 10:07:47.758724 containerd[1601]: time="2025-11-01T10:07:47.757875958Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 1 10:07:47.758724 containerd[1601]: time="2025-11-01T10:07:47.757886608Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 1 10:07:47.758724 containerd[1601]: time="2025-11-01T10:07:47.757894202Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 1 10:07:47.759132 containerd[1601]: time="2025-11-01T10:07:47.758055595Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 1 10:07:47.759132 containerd[1601]: time="2025-11-01T10:07:47.758069391Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 1 10:07:47.759132 containerd[1601]: time="2025-11-01T10:07:47.758169529Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 1 10:07:47.759132 containerd[1601]: time="2025-11-01T10:07:47.758408808Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 1 10:07:47.759132 containerd[1601]: time="2025-11-01T10:07:47.758438133Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 1 10:07:47.759132 containerd[1601]: time="2025-11-01T10:07:47.758448071Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 1 10:07:47.759132 containerd[1601]: time="2025-11-01T10:07:47.758492464Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 1 10:07:47.759132 containerd[1601]: time="2025-11-01T10:07:47.758823455Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 1 10:07:47.759132 containerd[1601]: time="2025-11-01T10:07:47.758915337Z" level=info msg="metadata content store policy set" policy=shared
Nov 1 10:07:47.767402 containerd[1601]: time="2025-11-01T10:07:47.767348974Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 1 10:07:47.767476 containerd[1601]: time="2025-11-01T10:07:47.767433933Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 1 10:07:47.767589 containerd[1601]: time="2025-11-01T10:07:47.767544300Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 1 10:07:47.767589 containerd[1601]: time="2025-11-01T10:07:47.767559078Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 1 10:07:47.767589 containerd[1601]: time="2025-11-01T10:07:47.767573905Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 1 10:07:47.767589 containerd[1601]: time="2025-11-01T10:07:47.767586659Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 1 10:07:47.767709 containerd[1601]: time="2025-11-01T10:07:47.767598782Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 1 10:07:47.767709 containerd[1601]: time="2025-11-01T10:07:47.767609121Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 1 10:07:47.767709 containerd[1601]: time="2025-11-01T10:07:47.767627165Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 1 10:07:47.767709 containerd[1601]: time="2025-11-01T10:07:47.767644147Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 1 10:07:47.767709 containerd[1601]: time="2025-11-01T10:07:47.767656350Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 1 10:07:47.767709 containerd[1601]: time="2025-11-01T10:07:47.767667952Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 1 10:07:47.767709 containerd[1601]: time="2025-11-01T10:07:47.767678431Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 1 10:07:47.767709 containerd[1601]: time="2025-11-01T10:07:47.767710572Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 1 10:07:47.767919 containerd[1601]: time="2025-11-01T10:07:47.767854742Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 1 10:07:47.767919 containerd[1601]: time="2025-11-01T10:07:47.767890449Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 1 10:07:47.767919 containerd[1601]: time="2025-11-01T10:07:47.767909314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 1 10:07:47.768009 containerd[1601]:
time="2025-11-01T10:07:47.767930063Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 1 10:07:47.768009 containerd[1601]: time="2025-11-01T10:07:47.767941725Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 1 10:07:47.768009 containerd[1601]: time="2025-11-01T10:07:47.767953888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 1 10:07:47.768009 containerd[1601]: time="2025-11-01T10:07:47.767966191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 1 10:07:47.768009 containerd[1601]: time="2025-11-01T10:07:47.767980217Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 1 10:07:47.768009 containerd[1601]: time="2025-11-01T10:07:47.767990968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 1 10:07:47.768009 containerd[1601]: time="2025-11-01T10:07:47.768000776Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 1 10:07:47.768009 containerd[1601]: time="2025-11-01T10:07:47.768011085Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 1 10:07:47.768242 containerd[1601]: time="2025-11-01T10:07:47.768034509Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 1 10:07:47.768242 containerd[1601]: time="2025-11-01T10:07:47.768088571Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 1 10:07:47.768242 containerd[1601]: time="2025-11-01T10:07:47.768102316Z" level=info msg="Start snapshots syncer" Nov 1 10:07:47.768242 containerd[1601]: time="2025-11-01T10:07:47.768138865Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime 
type=io.containerd.cri.v1 Nov 1 10:07:47.768502 containerd[1601]: time="2025-11-01T10:07:47.768461690Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 1 10:07:47.768649 containerd[1601]: time="2025-11-01T10:07:47.768528075Z" level=info msg="loading plugin" 
id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 1 10:07:47.768649 containerd[1601]: time="2025-11-01T10:07:47.768603917Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 1 10:07:47.770726 containerd[1601]: time="2025-11-01T10:07:47.768732107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 1 10:07:47.770726 containerd[1601]: time="2025-11-01T10:07:47.768756854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 1 10:07:47.770726 containerd[1601]: time="2025-11-01T10:07:47.768767824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 1 10:07:47.770726 containerd[1601]: time="2025-11-01T10:07:47.768778114Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 1 10:07:47.770726 containerd[1601]: time="2025-11-01T10:07:47.768790337Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 1 10:07:47.770726 containerd[1601]: time="2025-11-01T10:07:47.768814472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 1 10:07:47.770726 containerd[1601]: time="2025-11-01T10:07:47.768826274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 1 10:07:47.770726 containerd[1601]: time="2025-11-01T10:07:47.768837184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 1 10:07:47.770726 containerd[1601]: time="2025-11-01T10:07:47.768846843Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 1 10:07:47.770726 containerd[1601]: time="2025-11-01T10:07:47.768887409Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 1 10:07:47.770726 containerd[1601]: time="2025-11-01T10:07:47.768900283Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 1 10:07:47.770726 containerd[1601]: time="2025-11-01T10:07:47.768909129Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 1 10:07:47.770726 containerd[1601]: time="2025-11-01T10:07:47.768919078Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 1 10:07:47.770726 containerd[1601]: time="2025-11-01T10:07:47.768927554Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 1 10:07:47.771115 containerd[1601]: time="2025-11-01T10:07:47.768937663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 1 10:07:47.771115 containerd[1601]: time="2025-11-01T10:07:47.768948042Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 1 10:07:47.771115 containerd[1601]: time="2025-11-01T10:07:47.768969563Z" level=info msg="runtime interface created" Nov 1 10:07:47.771115 containerd[1601]: time="2025-11-01T10:07:47.768975714Z" level=info msg="created NRI interface" Nov 1 10:07:47.771115 containerd[1601]: time="2025-11-01T10:07:47.769036879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 1 10:07:47.771115 containerd[1601]: time="2025-11-01T10:07:47.769050555Z" level=info msg="Connect containerd service" Nov 1 10:07:47.771115 containerd[1601]: time="2025-11-01T10:07:47.769069811Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 10:07:47.771115 containerd[1601]: 
time="2025-11-01T10:07:47.770104661Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 10:07:47.784602 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 10:07:47.788819 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 10:07:47.812988 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 10:07:47.813297 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 10:07:47.864083 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 10:07:47.888839 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 1 10:07:47.893485 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 10:07:47.896939 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 10:07:47.898861 systemd[1]: Reached target getty.target - Login Prompts. 
Nov 1 10:07:47.939231 tar[1600]: linux-amd64/README.md Nov 1 10:07:47.952759 containerd[1601]: time="2025-11-01T10:07:47.951466520Z" level=info msg="Start subscribing containerd event" Nov 1 10:07:47.952759 containerd[1601]: time="2025-11-01T10:07:47.951565826Z" level=info msg="Start recovering state" Nov 1 10:07:47.952759 containerd[1601]: time="2025-11-01T10:07:47.951720446Z" level=info msg="Start event monitor" Nov 1 10:07:47.952759 containerd[1601]: time="2025-11-01T10:07:47.951743750Z" level=info msg="Start cni network conf syncer for default" Nov 1 10:07:47.952759 containerd[1601]: time="2025-11-01T10:07:47.951752135Z" level=info msg="Start streaming server" Nov 1 10:07:47.952759 containerd[1601]: time="2025-11-01T10:07:47.951773445Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 1 10:07:47.952759 containerd[1601]: time="2025-11-01T10:07:47.951785658Z" level=info msg="runtime interface starting up..." Nov 1 10:07:47.952759 containerd[1601]: time="2025-11-01T10:07:47.951795477Z" level=info msg="starting plugins..." Nov 1 10:07:47.952759 containerd[1601]: time="2025-11-01T10:07:47.951818259Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 1 10:07:47.952759 containerd[1601]: time="2025-11-01T10:07:47.952042560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 10:07:47.952759 containerd[1601]: time="2025-11-01T10:07:47.952113303Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 10:07:47.952407 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 10:07:47.953173 containerd[1601]: time="2025-11-01T10:07:47.953152602Z" level=info msg="containerd successfully booted in 0.214081s" Nov 1 10:07:48.054365 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 1 10:07:48.763952 systemd-networkd[1502]: eth0: Gained IPv6LL Nov 1 10:07:48.767377 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 10:07:48.770734 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 10:07:48.774316 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 1 10:07:48.777580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:07:48.790936 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 10:07:48.814085 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 1 10:07:48.814382 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 1 10:07:48.816850 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 10:07:48.818803 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 10:07:49.564184 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 1 10:07:49.567287 systemd[1]: Started sshd@0-10.0.0.91:22-10.0.0.1:47848.service - OpenSSH per-connection server daemon (10.0.0.1:47848). Nov 1 10:07:49.644763 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 47848 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:07:49.646807 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:07:49.666575 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 10:07:49.669595 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 10:07:49.672462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 1 10:07:49.681103 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 10:07:49.682772 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 10:07:49.685949 systemd-logind[1585]: New session 1 of user core. Nov 1 10:07:49.696133 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 10:07:49.701544 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 10:07:49.720101 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 10:07:49.722970 systemd-logind[1585]: New session c1 of user core. Nov 1 10:07:49.863870 systemd[1719]: Queued start job for default target default.target. Nov 1 10:07:49.879013 systemd[1719]: Created slice app.slice - User Application Slice. Nov 1 10:07:49.879040 systemd[1719]: Reached target paths.target - Paths. Nov 1 10:07:49.879084 systemd[1719]: Reached target timers.target - Timers. Nov 1 10:07:49.880674 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 10:07:49.893195 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 10:07:49.893368 systemd[1719]: Reached target sockets.target - Sockets. Nov 1 10:07:49.893431 systemd[1719]: Reached target basic.target - Basic System. Nov 1 10:07:49.893498 systemd[1719]: Reached target default.target - Main User Target. Nov 1 10:07:49.893546 systemd[1719]: Startup finished in 162ms. Nov 1 10:07:49.893785 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 10:07:49.897275 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 10:07:49.899239 systemd[1]: Startup finished in 3.515s (kernel) + 6.502s (initrd) + 5.474s (userspace) = 15.492s. Nov 1 10:07:49.919079 systemd[1]: Started sshd@1-10.0.0.91:22-10.0.0.1:47854.service - OpenSSH per-connection server daemon (10.0.0.1:47854). 
Nov 1 10:07:49.969541 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 47854 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:07:49.971011 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:07:49.975996 systemd-logind[1585]: New session 2 of user core. Nov 1 10:07:49.989803 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 1 10:07:50.003283 sshd[1739]: Connection closed by 10.0.0.1 port 47854 Nov 1 10:07:50.003547 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Nov 1 10:07:50.015383 systemd[1]: sshd@1-10.0.0.91:22-10.0.0.1:47854.service: Deactivated successfully. Nov 1 10:07:50.017299 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 10:07:50.018071 systemd-logind[1585]: Session 2 logged out. Waiting for processes to exit. Nov 1 10:07:50.020941 systemd[1]: Started sshd@2-10.0.0.91:22-10.0.0.1:53456.service - OpenSSH per-connection server daemon (10.0.0.1:53456). Nov 1 10:07:50.023788 systemd-logind[1585]: Removed session 2. Nov 1 10:07:50.054808 kubelet[1716]: E1101 10:07:50.054759 1716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 10:07:50.058641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 10:07:50.058893 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 10:07:50.059300 systemd[1]: kubelet.service: Consumed 1.087s CPU time, 257.7M memory peak. 
Nov 1 10:07:50.070774 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 53456 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:07:50.072064 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:07:50.076177 systemd-logind[1585]: New session 3 of user core. Nov 1 10:07:50.087830 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 10:07:50.095770 sshd[1750]: Connection closed by 10.0.0.1 port 53456 Nov 1 10:07:50.096041 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Nov 1 10:07:50.108963 systemd[1]: sshd@2-10.0.0.91:22-10.0.0.1:53456.service: Deactivated successfully. Nov 1 10:07:50.110576 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 10:07:50.111313 systemd-logind[1585]: Session 3 logged out. Waiting for processes to exit. Nov 1 10:07:50.113835 systemd[1]: Started sshd@3-10.0.0.91:22-10.0.0.1:53460.service - OpenSSH per-connection server daemon (10.0.0.1:53460). Nov 1 10:07:50.114370 systemd-logind[1585]: Removed session 3. Nov 1 10:07:50.166213 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 53460 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:07:50.167437 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:07:50.171479 systemd-logind[1585]: New session 4 of user core. Nov 1 10:07:50.180827 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 10:07:50.192994 sshd[1759]: Connection closed by 10.0.0.1 port 53460 Nov 1 10:07:50.193304 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Nov 1 10:07:50.204108 systemd[1]: sshd@3-10.0.0.91:22-10.0.0.1:53460.service: Deactivated successfully. Nov 1 10:07:50.205957 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 10:07:50.206668 systemd-logind[1585]: Session 4 logged out. Waiting for processes to exit. 
Nov 1 10:07:50.209577 systemd[1]: Started sshd@4-10.0.0.91:22-10.0.0.1:53474.service - OpenSSH per-connection server daemon (10.0.0.1:53474). Nov 1 10:07:50.210291 systemd-logind[1585]: Removed session 4. Nov 1 10:07:50.259805 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 53474 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:07:50.261260 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:07:50.265672 systemd-logind[1585]: New session 5 of user core. Nov 1 10:07:50.275867 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 1 10:07:50.297069 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 10:07:50.297382 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 10:07:50.317216 sudo[1773]: pam_unix(sudo:session): session closed for user root Nov 1 10:07:50.319037 sshd[1772]: Connection closed by 10.0.0.1 port 53474 Nov 1 10:07:50.319455 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Nov 1 10:07:50.333236 systemd[1]: sshd@4-10.0.0.91:22-10.0.0.1:53474.service: Deactivated successfully. Nov 1 10:07:50.334908 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 10:07:50.335634 systemd-logind[1585]: Session 5 logged out. Waiting for processes to exit. Nov 1 10:07:50.338298 systemd[1]: Started sshd@5-10.0.0.91:22-10.0.0.1:53484.service - OpenSSH per-connection server daemon (10.0.0.1:53484). Nov 1 10:07:50.339065 systemd-logind[1585]: Removed session 5. Nov 1 10:07:50.387763 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 53484 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:07:50.389204 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:07:50.393553 systemd-logind[1585]: New session 6 of user core. 
Nov 1 10:07:50.406832 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 1 10:07:50.424001 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 10:07:50.424451 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 10:07:50.431609 sudo[1784]: pam_unix(sudo:session): session closed for user root Nov 1 10:07:50.441039 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 1 10:07:50.441362 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 10:07:50.452921 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 1 10:07:50.512922 augenrules[1806]: No rules Nov 1 10:07:50.515032 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 10:07:50.515432 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 1 10:07:50.516834 sudo[1783]: pam_unix(sudo:session): session closed for user root Nov 1 10:07:50.518628 sshd[1782]: Connection closed by 10.0.0.1 port 53484 Nov 1 10:07:50.518990 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Nov 1 10:07:50.528058 systemd[1]: sshd@5-10.0.0.91:22-10.0.0.1:53484.service: Deactivated successfully. Nov 1 10:07:50.530166 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 10:07:50.530928 systemd-logind[1585]: Session 6 logged out. Waiting for processes to exit. Nov 1 10:07:50.533625 systemd[1]: Started sshd@6-10.0.0.91:22-10.0.0.1:53498.service - OpenSSH per-connection server daemon (10.0.0.1:53498). Nov 1 10:07:50.534262 systemd-logind[1585]: Removed session 6. 
Nov 1 10:07:50.587617 sshd[1815]: Accepted publickey for core from 10.0.0.1 port 53498 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:07:50.588896 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:07:50.592864 systemd-logind[1585]: New session 7 of user core. Nov 1 10:07:50.602810 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 10:07:50.615864 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 10:07:50.616182 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 10:07:50.971827 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 10:07:50.995012 (dockerd)[1841]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 10:07:51.277127 dockerd[1841]: time="2025-11-01T10:07:51.276968603Z" level=info msg="Starting up" Nov 1 10:07:51.277829 dockerd[1841]: time="2025-11-01T10:07:51.277807086Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 1 10:07:51.291481 dockerd[1841]: time="2025-11-01T10:07:51.291438209Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 1 10:07:51.821461 dockerd[1841]: time="2025-11-01T10:07:51.821412169Z" level=info msg="Loading containers: start." Nov 1 10:07:51.831732 kernel: Initializing XFRM netlink socket Nov 1 10:07:52.094347 systemd-networkd[1502]: docker0: Link UP Nov 1 10:07:52.099799 dockerd[1841]: time="2025-11-01T10:07:52.099740454Z" level=info msg="Loading containers: done." Nov 1 10:07:52.113170 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3353916976-merged.mount: Deactivated successfully. 
Nov 1 10:07:52.114718 dockerd[1841]: time="2025-11-01T10:07:52.114646648Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 10:07:52.114844 dockerd[1841]: time="2025-11-01T10:07:52.114765662Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 1 10:07:52.114887 dockerd[1841]: time="2025-11-01T10:07:52.114864597Z" level=info msg="Initializing buildkit" Nov 1 10:07:52.143485 dockerd[1841]: time="2025-11-01T10:07:52.143432402Z" level=info msg="Completed buildkit initialization" Nov 1 10:07:52.149794 dockerd[1841]: time="2025-11-01T10:07:52.149755019Z" level=info msg="Daemon has completed initialization" Nov 1 10:07:52.149926 dockerd[1841]: time="2025-11-01T10:07:52.149808179Z" level=info msg="API listen on /run/docker.sock" Nov 1 10:07:52.149992 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 10:07:52.683162 containerd[1601]: time="2025-11-01T10:07:52.683108858Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 1 10:07:53.354976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2723586000.mount: Deactivated successfully. 
Nov 1 10:07:54.062584 containerd[1601]: time="2025-11-01T10:07:54.062511350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:54.063319 containerd[1601]: time="2025-11-01T10:07:54.063279431Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=25393225" Nov 1 10:07:54.064432 containerd[1601]: time="2025-11-01T10:07:54.064393991Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:54.066794 containerd[1601]: time="2025-11-01T10:07:54.066760298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:54.067877 containerd[1601]: time="2025-11-01T10:07:54.067816239Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.384645635s" Nov 1 10:07:54.067930 containerd[1601]: time="2025-11-01T10:07:54.067888594Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 1 10:07:54.068546 containerd[1601]: time="2025-11-01T10:07:54.068466628Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 1 10:07:55.130786 containerd[1601]: time="2025-11-01T10:07:55.130717618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:55.131554 containerd[1601]: time="2025-11-01T10:07:55.131484596Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21151604" Nov 1 10:07:55.132788 containerd[1601]: time="2025-11-01T10:07:55.132744159Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:55.135191 containerd[1601]: time="2025-11-01T10:07:55.135155831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:55.136071 containerd[1601]: time="2025-11-01T10:07:55.135998171Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.067458465s" Nov 1 10:07:55.136071 containerd[1601]: time="2025-11-01T10:07:55.136067341Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 1 10:07:55.136786 containerd[1601]: time="2025-11-01T10:07:55.136735043Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 1 10:07:56.042205 containerd[1601]: time="2025-11-01T10:07:56.042137015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:56.044116 containerd[1601]: time="2025-11-01T10:07:56.044075220Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=0" Nov 1 10:07:56.046316 containerd[1601]: time="2025-11-01T10:07:56.046253625Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:56.048766 containerd[1601]: time="2025-11-01T10:07:56.048721513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:56.049599 containerd[1601]: time="2025-11-01T10:07:56.049559074Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 912.787091ms" Nov 1 10:07:56.049599 containerd[1601]: time="2025-11-01T10:07:56.049596033Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 1 10:07:56.050365 containerd[1601]: time="2025-11-01T10:07:56.050327725Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 1 10:07:57.365065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4218172874.mount: Deactivated successfully. 
Nov 1 10:07:57.566277 containerd[1601]: time="2025-11-01T10:07:57.566202462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:57.567110 containerd[1601]: time="2025-11-01T10:07:57.567076881Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=0" Nov 1 10:07:57.568074 containerd[1601]: time="2025-11-01T10:07:57.568043534Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:57.570080 containerd[1601]: time="2025-11-01T10:07:57.570035300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:57.570510 containerd[1601]: time="2025-11-01T10:07:57.570462881Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.520100531s" Nov 1 10:07:57.570537 containerd[1601]: time="2025-11-01T10:07:57.570510150Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 1 10:07:57.571233 containerd[1601]: time="2025-11-01T10:07:57.571206867Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 1 10:07:58.199138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2012341264.mount: Deactivated successfully. 
Nov 1 10:07:58.925382 containerd[1601]: time="2025-11-01T10:07:58.925310815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:58.926024 containerd[1601]: time="2025-11-01T10:07:58.925971845Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21568837" Nov 1 10:07:58.927205 containerd[1601]: time="2025-11-01T10:07:58.927154723Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:58.929705 containerd[1601]: time="2025-11-01T10:07:58.929657486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:58.930543 containerd[1601]: time="2025-11-01T10:07:58.930485820Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.359217358s" Nov 1 10:07:58.930543 containerd[1601]: time="2025-11-01T10:07:58.930536926Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 1 10:07:58.931158 containerd[1601]: time="2025-11-01T10:07:58.931134226Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 1 10:07:59.409137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1831768199.mount: Deactivated successfully. 
Nov 1 10:07:59.413753 containerd[1601]: time="2025-11-01T10:07:59.413684380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:59.414490 containerd[1601]: time="2025-11-01T10:07:59.414455235Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Nov 1 10:07:59.415563 containerd[1601]: time="2025-11-01T10:07:59.415520623Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:59.417508 containerd[1601]: time="2025-11-01T10:07:59.417459650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:07:59.418155 containerd[1601]: time="2025-11-01T10:07:59.418086325Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 486.922183ms" Nov 1 10:07:59.418155 containerd[1601]: time="2025-11-01T10:07:59.418144624Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 1 10:07:59.418784 containerd[1601]: time="2025-11-01T10:07:59.418747304Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 1 10:08:00.309229 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 10:08:00.310944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 1 10:08:00.915331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:08:00.920091 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 10:08:00.978479 kubelet[2237]: E1101 10:08:00.978392 2237 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 10:08:00.984711 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 10:08:00.984943 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 10:08:00.985381 systemd[1]: kubelet.service: Consumed 267ms CPU time, 110.2M memory peak. Nov 1 10:08:02.424069 containerd[1601]: time="2025-11-01T10:08:02.423970190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:02.425029 containerd[1601]: time="2025-11-01T10:08:02.424971358Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=61186606" Nov 1 10:08:02.426419 containerd[1601]: time="2025-11-01T10:08:02.426369800Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:02.429135 containerd[1601]: time="2025-11-01T10:08:02.429098828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:02.430238 containerd[1601]: time="2025-11-01T10:08:02.430186107Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with 
image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.011409708s" Nov 1 10:08:02.430238 containerd[1601]: time="2025-11-01T10:08:02.430233296Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 1 10:08:06.340269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:08:06.340445 systemd[1]: kubelet.service: Consumed 267ms CPU time, 110.2M memory peak. Nov 1 10:08:06.342926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:08:06.367682 systemd[1]: Reload requested from client PID 2280 ('systemctl') (unit session-7.scope)... Nov 1 10:08:06.367719 systemd[1]: Reloading... Nov 1 10:08:06.443727 zram_generator::config[2323]: No configuration found. Nov 1 10:08:06.697321 systemd[1]: Reloading finished in 329 ms. Nov 1 10:08:06.771372 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 10:08:06.771485 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 10:08:06.771822 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:08:06.771888 systemd[1]: kubelet.service: Consumed 171ms CPU time, 98.2M memory peak. Nov 1 10:08:06.773499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:08:06.987729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:08:07.008004 (kubelet)[2372]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 10:08:07.053950 kubelet[2372]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Nov 1 10:08:07.053950 kubelet[2372]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 10:08:07.054220 kubelet[2372]: I1101 10:08:07.053979 2372 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 10:08:08.217618 kubelet[2372]: I1101 10:08:08.217556 2372 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 10:08:08.217618 kubelet[2372]: I1101 10:08:08.217585 2372 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 10:08:08.220211 kubelet[2372]: I1101 10:08:08.220186 2372 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 10:08:08.220211 kubelet[2372]: I1101 10:08:08.220201 2372 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 10:08:08.220412 kubelet[2372]: I1101 10:08:08.220391 2372 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 10:08:08.225940 kubelet[2372]: E1101 10:08:08.225891 2372 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 10:08:08.228001 kubelet[2372]: I1101 10:08:08.227962 2372 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 10:08:08.232682 kubelet[2372]: I1101 10:08:08.232647 2372 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 1 10:08:08.237995 kubelet[2372]: I1101 10:08:08.237952 2372 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 10:08:08.238314 kubelet[2372]: I1101 10:08:08.238277 2372 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 10:08:08.238496 kubelet[2372]: I1101 10:08:08.238308 2372 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 10:08:08.238496 kubelet[2372]: I1101 10:08:08.238495 2372 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 10:08:08.238736 
kubelet[2372]: I1101 10:08:08.238503 2372 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 10:08:08.238736 kubelet[2372]: I1101 10:08:08.238664 2372 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 10:08:08.241813 kubelet[2372]: I1101 10:08:08.241783 2372 state_mem.go:36] "Initialized new in-memory state store" Nov 1 10:08:08.242035 kubelet[2372]: I1101 10:08:08.242003 2372 kubelet.go:475] "Attempting to sync node with API server" Nov 1 10:08:08.242035 kubelet[2372]: I1101 10:08:08.242022 2372 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 10:08:08.242090 kubelet[2372]: I1101 10:08:08.242048 2372 kubelet.go:387] "Adding apiserver pod source" Nov 1 10:08:08.242090 kubelet[2372]: I1101 10:08:08.242079 2372 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 10:08:08.242606 kubelet[2372]: E1101 10:08:08.242574 2372 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 10:08:08.242885 kubelet[2372]: E1101 10:08:08.242855 2372 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 10:08:08.244771 kubelet[2372]: I1101 10:08:08.244747 2372 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 1 10:08:08.245236 kubelet[2372]: I1101 10:08:08.245201 2372 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in 
static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 10:08:08.245236 kubelet[2372]: I1101 10:08:08.245232 2372 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 10:08:08.245303 kubelet[2372]: W1101 10:08:08.245287 2372 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 10:08:08.249032 kubelet[2372]: I1101 10:08:08.248996 2372 server.go:1262] "Started kubelet" Nov 1 10:08:08.249161 kubelet[2372]: I1101 10:08:08.249052 2372 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 10:08:08.253362 kubelet[2372]: I1101 10:08:08.253334 2372 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 10:08:08.253584 kubelet[2372]: I1101 10:08:08.253564 2372 server.go:310] "Adding debug handlers to kubelet server" Nov 1 10:08:08.254619 kubelet[2372]: I1101 10:08:08.254584 2372 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 10:08:08.255013 kubelet[2372]: I1101 10:08:08.254988 2372 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 10:08:08.255146 kubelet[2372]: E1101 10:08:08.255121 2372 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:08:08.256573 kubelet[2372]: I1101 10:08:08.256546 2372 factory.go:223] Registration of the systemd container factory successfully Nov 1 10:08:08.256743 kubelet[2372]: I1101 10:08:08.256718 2372 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 10:08:08.257017 kubelet[2372]: E1101 10:08:08.256987 2372 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="200ms" Nov 1 10:08:08.257891 kubelet[2372]: I1101 10:08:08.257838 2372 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 10:08:08.260101 kubelet[2372]: I1101 10:08:08.260071 2372 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 10:08:08.260334 kubelet[2372]: I1101 10:08:08.257935 2372 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 10:08:08.260334 kubelet[2372]: E1101 10:08:08.258618 2372 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 10:08:08.260334 kubelet[2372]: E1101 10:08:08.258870 2372 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 10:08:08.260424 kubelet[2372]: I1101 10:08:08.259057 2372 factory.go:223] Registration of the containerd container factory successfully Nov 1 10:08:08.260424 kubelet[2372]: E1101 10:08:08.259328 2372 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.91:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.91:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873da1ae3454ceb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 10:08:08.248970475 +0000 UTC m=+1.236780636,LastTimestamp:2025-11-01 10:08:08.248970475 +0000 UTC m=+1.236780636,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 10:08:08.260424 kubelet[2372]: I1101 10:08:08.258054 2372 reconciler.go:29] "Reconciler: start to sync state" Nov 1 10:08:08.260424 kubelet[2372]: I1101 10:08:08.260395 2372 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 10:08:08.274004 kubelet[2372]: I1101 10:08:08.273977 2372 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 10:08:08.274004 kubelet[2372]: I1101 10:08:08.273996 2372 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 10:08:08.274087 kubelet[2372]: I1101 10:08:08.274013 2372 state_mem.go:36] "Initialized new in-memory state store" Nov 1 10:08:08.276869 kubelet[2372]: I1101 10:08:08.276849 2372 policy_none.go:49] "None policy: Start" Nov 1 10:08:08.276869 kubelet[2372]: I1101 10:08:08.276870 2372 memory_manager.go:187] "Starting memorymanager" 
policy="None" Nov 1 10:08:08.276945 kubelet[2372]: I1101 10:08:08.276881 2372 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 10:08:08.278129 kubelet[2372]: I1101 10:08:08.278106 2372 policy_none.go:47] "Start" Nov 1 10:08:08.282714 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 10:08:08.285544 kubelet[2372]: I1101 10:08:08.285502 2372 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 10:08:08.286902 kubelet[2372]: I1101 10:08:08.286879 2372 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 1 10:08:08.286902 kubelet[2372]: I1101 10:08:08.286906 2372 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 10:08:08.287006 kubelet[2372]: I1101 10:08:08.286926 2372 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 10:08:08.287006 kubelet[2372]: E1101 10:08:08.286965 2372 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 10:08:08.287452 kubelet[2372]: E1101 10:08:08.287382 2372 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 10:08:08.293460 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 1 10:08:08.296518 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 1 10:08:08.317502 kubelet[2372]: E1101 10:08:08.317460 2372 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 10:08:08.317724 kubelet[2372]: I1101 10:08:08.317665 2372 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 10:08:08.317724 kubelet[2372]: I1101 10:08:08.317680 2372 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 10:08:08.318072 kubelet[2372]: I1101 10:08:08.318049 2372 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 10:08:08.319956 kubelet[2372]: E1101 10:08:08.319919 2372 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 10:08:08.319956 kubelet[2372]: E1101 10:08:08.319963 2372 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 10:08:08.398135 systemd[1]: Created slice kubepods-burstable-pod650248b73c97657587dccb7ec98e5de4.slice - libcontainer container kubepods-burstable-pod650248b73c97657587dccb7ec98e5de4.slice. 
Nov 1 10:08:08.419185 kubelet[2372]: E1101 10:08:08.419130 2372 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:08:08.419840 kubelet[2372]: I1101 10:08:08.419808 2372 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 10:08:08.420270 kubelet[2372]: E1101 10:08:08.420224 2372 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Nov 1 10:08:08.423157 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Nov 1 10:08:08.424904 kubelet[2372]: E1101 10:08:08.424877 2372 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:08:08.426619 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. 
Nov 1 10:08:08.428224 kubelet[2372]: E1101 10:08:08.428192 2372 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:08:08.457647 kubelet[2372]: E1101 10:08:08.457591 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="400ms" Nov 1 10:08:08.461986 kubelet[2372]: I1101 10:08:08.461944 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/650248b73c97657587dccb7ec98e5de4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"650248b73c97657587dccb7ec98e5de4\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:08:08.461986 kubelet[2372]: I1101 10:08:08.461970 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/650248b73c97657587dccb7ec98e5de4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"650248b73c97657587dccb7ec98e5de4\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:08:08.461986 kubelet[2372]: I1101 10:08:08.461986 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:08:08.462110 kubelet[2372]: I1101 10:08:08.462001 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 1 10:08:08.462110 kubelet[2372]: I1101 10:08:08.462066 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 1 10:08:08.462110 kubelet[2372]: I1101 10:08:08.462105 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 1 10:08:08.462180 kubelet[2372]: I1101 10:08:08.462123 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost"
Nov 1 10:08:08.462180 kubelet[2372]: I1101 10:08:08.462160 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/650248b73c97657587dccb7ec98e5de4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"650248b73c97657587dccb7ec98e5de4\") " pod="kube-system/kube-apiserver-localhost"
Nov 1 10:08:08.462180 kubelet[2372]: I1101 10:08:08.462173 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 1 10:08:08.621899 kubelet[2372]: I1101 10:08:08.621812 2372 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 1 10:08:08.622323 kubelet[2372]: E1101 10:08:08.622279 2372 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost"
Nov 1 10:08:08.722897 kubelet[2372]: E1101 10:08:08.722851 2372 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:08:08.723540 containerd[1601]: time="2025-11-01T10:08:08.723459922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:650248b73c97657587dccb7ec98e5de4,Namespace:kube-system,Attempt:0,}"
Nov 1 10:08:08.727840 kubelet[2372]: E1101 10:08:08.727809 2372 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:08:08.728206 containerd[1601]: time="2025-11-01T10:08:08.728169334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}"
Nov 1 10:08:08.731706 kubelet[2372]: E1101 10:08:08.731660 2372 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:08:08.732179 containerd[1601]: time="2025-11-01T10:08:08.732072283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}"
Nov 1 10:08:08.858974 kubelet[2372]: E1101 10:08:08.858915 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="800ms"
Nov 1 10:08:09.024382 kubelet[2372]: I1101 10:08:09.024229 2372 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 1 10:08:09.024562 kubelet[2372]: E1101 10:08:09.024532 2372 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost"
Nov 1 10:08:09.322169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1509176048.mount: Deactivated successfully.
Nov 1 10:08:09.329499 containerd[1601]: time="2025-11-01T10:08:09.329447836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 1 10:08:09.331282 containerd[1601]: time="2025-11-01T10:08:09.331226631Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Nov 1 10:08:09.335249 containerd[1601]: time="2025-11-01T10:08:09.335186658Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 1 10:08:09.336156 containerd[1601]: time="2025-11-01T10:08:09.336124687Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 1 10:08:09.338432 containerd[1601]: time="2025-11-01T10:08:09.338355250Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Nov 1 10:08:09.339553 containerd[1601]: time="2025-11-01T10:08:09.339512741Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 1 10:08:09.340801 containerd[1601]: time="2025-11-01T10:08:09.340741966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 1 10:08:09.341566 containerd[1601]: time="2025-11-01T10:08:09.341506780Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 613.94151ms"
Nov 1 10:08:09.343386 containerd[1601]: time="2025-11-01T10:08:09.341950653Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Nov 1 10:08:09.344237 containerd[1601]: time="2025-11-01T10:08:09.344191645Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 612.073316ms"
Nov 1 10:08:09.347002 containerd[1601]: time="2025-11-01T10:08:09.346972240Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 611.630516ms"
Nov 1 10:08:09.373702 containerd[1601]: time="2025-11-01T10:08:09.373621908Z" level=info msg="connecting to shim 49c7873c693a496b0ddb0cf2e95f4b9f28adfbcebf3c61d3e525a389e2c873f5" address="unix:///run/containerd/s/8fdf810546949e924bf0c6aa005d93f8cd3be7161226820d7ad6b5a0afa63eab" namespace=k8s.io protocol=ttrpc version=3
Nov 1 10:08:09.379583 containerd[1601]: time="2025-11-01T10:08:09.378237393Z" level=info msg="connecting to shim c316f0baba2f2d5e4b96c93af660939d2937e7afa656662d6ebeb82d948f2fb8" address="unix:///run/containerd/s/64cfd4bf325892ea0659f13ceb2b9738beb5f8fa0299425f97b84b4be94c60b3" namespace=k8s.io protocol=ttrpc version=3
Nov 1 10:08:09.389391 containerd[1601]: time="2025-11-01T10:08:09.389332521Z" level=info msg="connecting to shim 094474016b9de83d54da5a93d0aa3ae817b7e137ce229a43e3ada5115401a6f6" address="unix:///run/containerd/s/ac007021488efb87f318edd2be37e41e3c4ebe8c0b921fcaaebd00d94ab1918c" namespace=k8s.io protocol=ttrpc version=3
Nov 1 10:08:09.406872 systemd[1]: Started cri-containerd-49c7873c693a496b0ddb0cf2e95f4b9f28adfbcebf3c61d3e525a389e2c873f5.scope - libcontainer container 49c7873c693a496b0ddb0cf2e95f4b9f28adfbcebf3c61d3e525a389e2c873f5.
Nov 1 10:08:09.410893 systemd[1]: Started cri-containerd-c316f0baba2f2d5e4b96c93af660939d2937e7afa656662d6ebeb82d948f2fb8.scope - libcontainer container c316f0baba2f2d5e4b96c93af660939d2937e7afa656662d6ebeb82d948f2fb8.
Nov 1 10:08:09.415145 systemd[1]: Started cri-containerd-094474016b9de83d54da5a93d0aa3ae817b7e137ce229a43e3ada5115401a6f6.scope - libcontainer container 094474016b9de83d54da5a93d0aa3ae817b7e137ce229a43e3ada5115401a6f6.
Nov 1 10:08:09.466301 containerd[1601]: time="2025-11-01T10:08:09.466246181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:650248b73c97657587dccb7ec98e5de4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c316f0baba2f2d5e4b96c93af660939d2937e7afa656662d6ebeb82d948f2fb8\""
Nov 1 10:08:09.469133 kubelet[2372]: E1101 10:08:09.469090 2372 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:08:09.474801 containerd[1601]: time="2025-11-01T10:08:09.474745270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"49c7873c693a496b0ddb0cf2e95f4b9f28adfbcebf3c61d3e525a389e2c873f5\""
Nov 1 10:08:09.476959 kubelet[2372]: E1101 10:08:09.476820 2372 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:08:09.477752 containerd[1601]: time="2025-11-01T10:08:09.477716452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"094474016b9de83d54da5a93d0aa3ae817b7e137ce229a43e3ada5115401a6f6\""
Nov 1 10:08:09.478460 containerd[1601]: time="2025-11-01T10:08:09.478437013Z" level=info msg="CreateContainer within sandbox \"c316f0baba2f2d5e4b96c93af660939d2937e7afa656662d6ebeb82d948f2fb8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 1 10:08:09.478511 kubelet[2372]: E1101 10:08:09.478468 2372 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:08:09.480891 containerd[1601]: time="2025-11-01T10:08:09.480836694Z" level=info msg="CreateContainer within sandbox \"49c7873c693a496b0ddb0cf2e95f4b9f28adfbcebf3c61d3e525a389e2c873f5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 1 10:08:09.489735 containerd[1601]: time="2025-11-01T10:08:09.489654821Z" level=info msg="Container 908affcf9adf491bb47049d023a04564c3636efac411dbf135e71aa31544e17b: CDI devices from CRI Config.CDIDevices: []"
Nov 1 10:08:09.490087 containerd[1601]: time="2025-11-01T10:08:09.490049050Z" level=info msg="CreateContainer within sandbox \"094474016b9de83d54da5a93d0aa3ae817b7e137ce229a43e3ada5115401a6f6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 1 10:08:09.500145 containerd[1601]: time="2025-11-01T10:08:09.500085422Z" level=info msg="CreateContainer within sandbox \"c316f0baba2f2d5e4b96c93af660939d2937e7afa656662d6ebeb82d948f2fb8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"908affcf9adf491bb47049d023a04564c3636efac411dbf135e71aa31544e17b\""
Nov 1 10:08:09.501340 containerd[1601]: time="2025-11-01T10:08:09.501287266Z" level=info msg="StartContainer for \"908affcf9adf491bb47049d023a04564c3636efac411dbf135e71aa31544e17b\""
Nov 1 10:08:09.502563 containerd[1601]: time="2025-11-01T10:08:09.502528133Z" level=info msg="connecting to shim 908affcf9adf491bb47049d023a04564c3636efac411dbf135e71aa31544e17b" address="unix:///run/containerd/s/64cfd4bf325892ea0659f13ceb2b9738beb5f8fa0299425f97b84b4be94c60b3" protocol=ttrpc version=3
Nov 1 10:08:09.504615 kubelet[2372]: E1101 10:08:09.504568 2372 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 1 10:08:09.506777 containerd[1601]: time="2025-11-01T10:08:09.506683716Z" level=info msg="Container 9a1b2f26acfa130c979e67d88ad7db6b9a76d1bb0b07ddc1b622e1bd43695f73: CDI devices from CRI Config.CDIDevices: []"
Nov 1 10:08:09.515287 containerd[1601]: time="2025-11-01T10:08:09.515255381Z" level=info msg="CreateContainer within sandbox \"49c7873c693a496b0ddb0cf2e95f4b9f28adfbcebf3c61d3e525a389e2c873f5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9a1b2f26acfa130c979e67d88ad7db6b9a76d1bb0b07ddc1b622e1bd43695f73\""
Nov 1 10:08:09.515683 containerd[1601]: time="2025-11-01T10:08:09.515635203Z" level=info msg="Container 02995b6c8d566c9cc4a07545ccb034e6f2f0126b6788c7967f649cd6ba17f270: CDI devices from CRI Config.CDIDevices: []"
Nov 1 10:08:09.515905 containerd[1601]: time="2025-11-01T10:08:09.515877788Z" level=info msg="StartContainer for \"9a1b2f26acfa130c979e67d88ad7db6b9a76d1bb0b07ddc1b622e1bd43695f73\""
Nov 1 10:08:09.517038 containerd[1601]: time="2025-11-01T10:08:09.516995975Z" level=info msg="connecting to shim 9a1b2f26acfa130c979e67d88ad7db6b9a76d1bb0b07ddc1b622e1bd43695f73" address="unix:///run/containerd/s/8fdf810546949e924bf0c6aa005d93f8cd3be7161226820d7ad6b5a0afa63eab" protocol=ttrpc version=3
Nov 1 10:08:09.523859 containerd[1601]: time="2025-11-01T10:08:09.523834059Z" level=info msg="CreateContainer within sandbox \"094474016b9de83d54da5a93d0aa3ae817b7e137ce229a43e3ada5115401a6f6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"02995b6c8d566c9cc4a07545ccb034e6f2f0126b6788c7967f649cd6ba17f270\""
Nov 1 10:08:09.523911 systemd[1]: Started cri-containerd-908affcf9adf491bb47049d023a04564c3636efac411dbf135e71aa31544e17b.scope - libcontainer container 908affcf9adf491bb47049d023a04564c3636efac411dbf135e71aa31544e17b.
Nov 1 10:08:09.524294 containerd[1601]: time="2025-11-01T10:08:09.524276389Z" level=info msg="StartContainer for \"02995b6c8d566c9cc4a07545ccb034e6f2f0126b6788c7967f649cd6ba17f270\""
Nov 1 10:08:09.525624 containerd[1601]: time="2025-11-01T10:08:09.525584482Z" level=info msg="connecting to shim 02995b6c8d566c9cc4a07545ccb034e6f2f0126b6788c7967f649cd6ba17f270" address="unix:///run/containerd/s/ac007021488efb87f318edd2be37e41e3c4ebe8c0b921fcaaebd00d94ab1918c" protocol=ttrpc version=3
Nov 1 10:08:09.555973 systemd[1]: Started cri-containerd-9a1b2f26acfa130c979e67d88ad7db6b9a76d1bb0b07ddc1b622e1bd43695f73.scope - libcontainer container 9a1b2f26acfa130c979e67d88ad7db6b9a76d1bb0b07ddc1b622e1bd43695f73.
Nov 1 10:08:09.561102 systemd[1]: Started cri-containerd-02995b6c8d566c9cc4a07545ccb034e6f2f0126b6788c7967f649cd6ba17f270.scope - libcontainer container 02995b6c8d566c9cc4a07545ccb034e6f2f0126b6788c7967f649cd6ba17f270.
Nov 1 10:08:09.566568 kubelet[2372]: E1101 10:08:09.566530 2372 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 1 10:08:09.589901 containerd[1601]: time="2025-11-01T10:08:09.588947630Z" level=info msg="StartContainer for \"908affcf9adf491bb47049d023a04564c3636efac411dbf135e71aa31544e17b\" returns successfully"
Nov 1 10:08:09.617392 kubelet[2372]: E1101 10:08:09.617362 2372 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 1 10:08:09.628152 containerd[1601]: time="2025-11-01T10:08:09.628121856Z" level=info msg="StartContainer for \"02995b6c8d566c9cc4a07545ccb034e6f2f0126b6788c7967f649cd6ba17f270\" returns successfully"
Nov 1 10:08:09.647270 containerd[1601]: time="2025-11-01T10:08:09.647184135Z" level=info msg="StartContainer for \"9a1b2f26acfa130c979e67d88ad7db6b9a76d1bb0b07ddc1b622e1bd43695f73\" returns successfully"
Nov 1 10:08:09.826154 kubelet[2372]: I1101 10:08:09.826102 2372 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 1 10:08:10.296292 kubelet[2372]: E1101 10:08:10.296254 2372 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 1 10:08:10.296483 kubelet[2372]: E1101 10:08:10.296376 2372 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:08:10.296878 kubelet[2372]: E1101 10:08:10.296850 2372 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 1 10:08:10.296956 kubelet[2372]: E1101 10:08:10.296936 2372 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:08:10.299448 kubelet[2372]: E1101 10:08:10.299427 2372 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 1 10:08:10.299533 kubelet[2372]: E1101 10:08:10.299511 2372 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:08:10.755417 kubelet[2372]: E1101 10:08:10.755272 2372 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Nov 1 10:08:10.842085 kubelet[2372]: I1101 10:08:10.842041 2372 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 1 10:08:10.842085 kubelet[2372]: E1101 10:08:10.842076 2372 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Nov 1 10:08:10.850871 kubelet[2372]: E1101 10:08:10.850815 2372 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 1 10:08:10.951526 kubelet[2372]: E1101 10:08:10.951462 2372 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 1 10:08:11.052306 kubelet[2372]: E1101 10:08:11.052141 2372 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 1 10:08:11.153112 kubelet[2372]: E1101 10:08:11.153042 2372 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 1 10:08:11.244594 kubelet[2372]: I1101 10:08:11.244517 2372 apiserver.go:52] "Watching apiserver"
Nov 1 10:08:11.255564 kubelet[2372]: I1101 10:08:11.255519 2372 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 1 10:08:11.259822 kubelet[2372]: E1101 10:08:11.259785 2372 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Nov 1 10:08:11.259822 kubelet[2372]: I1101 10:08:11.259812 2372 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 1 10:08:11.260498 kubelet[2372]: I1101 10:08:11.260463 2372 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 1 10:08:11.261066 kubelet[2372]: E1101 10:08:11.261041 2372 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Nov 1 10:08:11.261066 kubelet[2372]: I1101 10:08:11.261063 2372 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 1 10:08:11.262064 kubelet[2372]: E1101 10:08:11.262037 2372 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Nov 1 10:08:11.300969 kubelet[2372]: I1101 10:08:11.300939 2372 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 1 10:08:11.301129 kubelet[2372]: I1101 10:08:11.301096 2372 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 1 10:08:11.303047 kubelet[2372]: E1101 10:08:11.302936 2372 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Nov 1 10:08:11.303047 kubelet[2372]: E1101 10:08:11.302949 2372 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Nov 1 10:08:11.303148 kubelet[2372]: E1101 10:08:11.303102 2372 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:08:11.303148 kubelet[2372]: E1101 10:08:11.303102 2372 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:08:12.934208 systemd[1]: Reload requested from client PID 2658 ('systemctl') (unit session-7.scope)...
Nov 1 10:08:12.934232 systemd[1]: Reloading...
Nov 1 10:08:13.014731 zram_generator::config[2711]: No configuration found.
Nov 1 10:08:13.231403 systemd[1]: Reloading finished in 296 ms.
Nov 1 10:08:13.259400 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 10:08:13.281463 systemd[1]: kubelet.service: Deactivated successfully.
Nov 1 10:08:13.281800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 10:08:13.281857 systemd[1]: kubelet.service: Consumed 1.690s CPU time, 127.3M memory peak.
Nov 1 10:08:13.284531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 10:08:13.505500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 10:08:13.518021 (kubelet)[2747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 1 10:08:13.556111 kubelet[2747]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 1 10:08:13.556111 kubelet[2747]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 10:08:13.556582 kubelet[2747]: I1101 10:08:13.556174 2747 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 1 10:08:13.563981 kubelet[2747]: I1101 10:08:13.563928 2747 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Nov 1 10:08:13.563981 kubelet[2747]: I1101 10:08:13.563949 2747 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 1 10:08:13.563981 kubelet[2747]: I1101 10:08:13.563975 2747 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Nov 1 10:08:13.563981 kubelet[2747]: I1101 10:08:13.563986 2747 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 1 10:08:13.564233 kubelet[2747]: I1101 10:08:13.564198 2747 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 1 10:08:13.565378 kubelet[2747]: I1101 10:08:13.565321 2747 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 1 10:08:13.567914 kubelet[2747]: I1101 10:08:13.567854 2747 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 1 10:08:13.571539 kubelet[2747]: I1101 10:08:13.571514 2747 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 1 10:08:13.579535 kubelet[2747]: I1101 10:08:13.579493 2747 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Nov 1 10:08:13.579828 kubelet[2747]: I1101 10:08:13.579791 2747 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 1 10:08:13.579987 kubelet[2747]: I1101 10:08:13.579819 2747 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 1 10:08:13.580070 kubelet[2747]: I1101 10:08:13.579993 2747 topology_manager.go:138] "Creating topology manager with none policy"
Nov 1 10:08:13.580070 kubelet[2747]: I1101 10:08:13.580001 2747 container_manager_linux.go:306] "Creating device plugin manager"
Nov 1 10:08:13.580070 kubelet[2747]: I1101 10:08:13.580023 2747 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Nov 1 10:08:13.580718 kubelet[2747]: I1101 10:08:13.580684 2747 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 10:08:13.580931 kubelet[2747]: I1101 10:08:13.580916 2747 kubelet.go:475] "Attempting to sync node with API server"
Nov 1 10:08:13.580955 kubelet[2747]: I1101 10:08:13.580936 2747 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 1 10:08:13.580955 kubelet[2747]: I1101 10:08:13.580955 2747 kubelet.go:387] "Adding apiserver pod source"
Nov 1 10:08:13.581003 kubelet[2747]: I1101 10:08:13.580978 2747 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 1 10:08:13.582515 kubelet[2747]: I1101 10:08:13.582482 2747 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1"
Nov 1 10:08:13.583165 kubelet[2747]: I1101 10:08:13.583141 2747 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 1 10:08:13.583215 kubelet[2747]: I1101 10:08:13.583179 2747 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Nov 1 10:08:13.587395 kubelet[2747]: I1101 10:08:13.587368 2747 server.go:1262] "Started kubelet"
Nov 1 10:08:13.588011 kubelet[2747]: I1101 10:08:13.587971 2747 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 1 10:08:13.588063 kubelet[2747]: I1101 10:08:13.588021 2747 server_v1.go:49] "podresources" method="list" useActivePods=true
Nov 1 10:08:13.588416 kubelet[2747]: I1101 10:08:13.588383 2747 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 1 10:08:13.588521 kubelet[2747]: I1101 10:08:13.588482 2747 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 1 10:08:13.588555 kubelet[2747]: I1101 10:08:13.588526 2747 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 1 10:08:13.592322 kubelet[2747]: I1101 10:08:13.592293 2747 server.go:310] "Adding debug handlers to kubelet server"
Nov 1 10:08:13.593493 kubelet[2747]: I1101 10:08:13.593453 2747 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 1 10:08:13.596368 kubelet[2747]: I1101 10:08:13.595778 2747 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Nov 1 10:08:13.596368 kubelet[2747]: I1101 10:08:13.595859 2747 volume_manager.go:313] "Starting Kubelet Volume Manager"
Nov 1 10:08:13.596962 kubelet[2747]: I1101 10:08:13.596931 2747 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 1 10:08:13.597185 kubelet[2747]: I1101 10:08:13.597161 2747 reconciler.go:29] "Reconciler: start to sync state"
Nov 1 10:08:13.598754 kubelet[2747]: I1101 10:08:13.598677 2747 factory.go:223] Registration of the systemd container factory successfully
Nov 1 10:08:13.598859 kubelet[2747]: I1101 10:08:13.598830 2747 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 1 10:08:13.601762 kubelet[2747]: E1101 10:08:13.600419 2747 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 1 10:08:13.601762 kubelet[2747]: I1101 10:08:13.600949 2747 factory.go:223] Registration of the containerd container factory successfully
Nov 1 10:08:13.612922 kubelet[2747]: I1101 10:08:13.612880 2747 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Nov 1 10:08:13.612922 kubelet[2747]: I1101 10:08:13.612905 2747 status_manager.go:244] "Starting to sync pod status with apiserver"
Nov 1 10:08:13.612922 kubelet[2747]: I1101 10:08:13.612924 2747 kubelet.go:2427] "Starting kubelet main sync loop"
Nov 1 10:08:13.613101 kubelet[2747]: E1101 10:08:13.612968 2747 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 1 10:08:13.635515 kubelet[2747]: I1101 10:08:13.635462 2747 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 1 10:08:13.635515 kubelet[2747]: I1101 10:08:13.635494 2747 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 1 10:08:13.635515 kubelet[2747]: I1101 10:08:13.635514 2747 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 10:08:13.635754 kubelet[2747]: I1101 10:08:13.635642 2747 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 1 10:08:13.635754 kubelet[2747]: I1101 10:08:13.635654 2747 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 1 10:08:13.635754 kubelet[2747]: I1101 10:08:13.635672 2747 policy_none.go:49] "None policy: Start"
Nov 1 10:08:13.635754 kubelet[2747]: I1101 10:08:13.635682 2747 memory_manager.go:187] "Starting memorymanager" policy="None"
Nov 1 10:08:13.635754 kubelet[2747]: I1101 10:08:13.635724 2747 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Nov 1 10:08:13.635876 kubelet[2747]: I1101 10:08:13.635825 2747 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Nov 1 10:08:13.635876 kubelet[2747]: I1101 10:08:13.635837 2747 policy_none.go:47] "Start"
Nov 1 10:08:13.640747 kubelet[2747]: E1101 10:08:13.640722 2747 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 1 10:08:13.641039 kubelet[2747]: I1101 10:08:13.641016 2747 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 1 10:08:13.641175 kubelet[2747]: I1101 10:08:13.641031 2747 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 1 10:08:13.641328 kubelet[2747]: I1101 10:08:13.641279 2747 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 1 10:08:13.641913 kubelet[2747]: E1101 10:08:13.641887 2747 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 1 10:08:13.714674 kubelet[2747]: I1101 10:08:13.714567 2747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 1 10:08:13.715095 kubelet[2747]: I1101 10:08:13.715076 2747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 1 10:08:13.715552 kubelet[2747]: I1101 10:08:13.715525 2747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 1 10:08:13.747996 kubelet[2747]: I1101 10:08:13.747962 2747 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 1 10:08:13.753204 kubelet[2747]: I1101 10:08:13.753182 2747 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 1 10:08:13.753274 kubelet[2747]: I1101 10:08:13.753246 2747 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 1 10:08:13.798524 kubelet[2747]: I1101 10:08:13.798393 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 1 10:08:13.798524 kubelet[2747]: I1101 10:08:13.798435 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 1 10:08:13.798524 kubelet[2747]: I1101 10:08:13.798456 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost"
Nov 1 10:08:13.798524 kubelet[2747]: I1101 10:08:13.798492 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/650248b73c97657587dccb7ec98e5de4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"650248b73c97657587dccb7ec98e5de4\") " pod="kube-system/kube-apiserver-localhost"
Nov 1 10:08:13.798524 kubelet[2747]: I1101 10:08:13.798513 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 1 10:08:13.798772 kubelet[2747]: I1101 10:08:13.798544 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 1 10:08:13.798772 kubelet[2747]: I1101 10:08:13.798574 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/650248b73c97657587dccb7ec98e5de4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"650248b73c97657587dccb7ec98e5de4\") " pod="kube-system/kube-apiserver-localhost"
Nov 1 10:08:13.798772 kubelet[2747]: I1101 10:08:13.798596 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/650248b73c97657587dccb7ec98e5de4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"650248b73c97657587dccb7ec98e5de4\") " pod="kube-system/kube-apiserver-localhost"
Nov 1 10:08:13.798772 kubelet[2747]: I1101 10:08:13.798620 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 1 10:08:14.022217 kubelet[2747]: E1101 10:08:14.021912 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:08:14.022217 kubelet[2747]: E1101 10:08:14.022127 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:08:14.024496 kubelet[2747]: E1101 10:08:14.024446 2747 dns.go:154]
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:14.582201 kubelet[2747]: I1101 10:08:14.582165 2747 apiserver.go:52] "Watching apiserver" Nov 1 10:08:14.597391 kubelet[2747]: I1101 10:08:14.597356 2747 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 10:08:14.624212 kubelet[2747]: E1101 10:08:14.624173 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:14.626656 kubelet[2747]: I1101 10:08:14.624773 2747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 10:08:14.626656 kubelet[2747]: I1101 10:08:14.624936 2747 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 10:08:14.629706 kubelet[2747]: E1101 10:08:14.629584 2747 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 10:08:14.630023 kubelet[2747]: E1101 10:08:14.629849 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:14.630779 kubelet[2747]: E1101 10:08:14.630331 2747 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 1 10:08:14.630779 kubelet[2747]: E1101 10:08:14.630426 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:14.647643 kubelet[2747]: I1101 10:08:14.647566 2747 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.647543333 podStartE2EDuration="1.647543333s" podCreationTimestamp="2025-11-01 10:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:08:14.641592333 +0000 UTC m=+1.119826686" watchObservedRunningTime="2025-11-01 10:08:14.647543333 +0000 UTC m=+1.125777686" Nov 1 10:08:14.880745 kubelet[2747]: I1101 10:08:14.879827 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.879808273 podStartE2EDuration="1.879808273s" podCreationTimestamp="2025-11-01 10:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:08:14.647717209 +0000 UTC m=+1.125951562" watchObservedRunningTime="2025-11-01 10:08:14.879808273 +0000 UTC m=+1.358042626" Nov 1 10:08:14.921866 kubelet[2747]: I1101 10:08:14.921803 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9217817080000001 podStartE2EDuration="1.921781708s" podCreationTimestamp="2025-11-01 10:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:08:14.880596371 +0000 UTC m=+1.358830714" watchObservedRunningTime="2025-11-01 10:08:14.921781708 +0000 UTC m=+1.400016061" Nov 1 10:08:15.626155 kubelet[2747]: E1101 10:08:15.626106 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:15.626754 kubelet[2747]: E1101 10:08:15.626426 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:15.626929 kubelet[2747]: E1101 10:08:15.626902 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:16.981313 kubelet[2747]: E1101 10:08:16.981243 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:18.982545 kubelet[2747]: I1101 10:08:18.982481 2747 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 10:08:18.983465 containerd[1601]: time="2025-11-01T10:08:18.983223120Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 10:08:18.983913 kubelet[2747]: I1101 10:08:18.983466 2747 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 10:08:19.100633 kubelet[2747]: E1101 10:08:19.100585 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:19.631788 kubelet[2747]: E1101 10:08:19.631743 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:19.956985 systemd[1]: Created slice kubepods-besteffort-pod62013f6d_6b2b_48c8_8504_9dc7550a5746.slice - libcontainer container kubepods-besteffort-pod62013f6d_6b2b_48c8_8504_9dc7550a5746.slice. 
Nov 1 10:08:20.033270 kubelet[2747]: I1101 10:08:20.033200 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/62013f6d-6b2b-48c8-8504-9dc7550a5746-kube-proxy\") pod \"kube-proxy-85689\" (UID: \"62013f6d-6b2b-48c8-8504-9dc7550a5746\") " pod="kube-system/kube-proxy-85689" Nov 1 10:08:20.033818 kubelet[2747]: I1101 10:08:20.033312 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62013f6d-6b2b-48c8-8504-9dc7550a5746-xtables-lock\") pod \"kube-proxy-85689\" (UID: \"62013f6d-6b2b-48c8-8504-9dc7550a5746\") " pod="kube-system/kube-proxy-85689" Nov 1 10:08:20.033818 kubelet[2747]: I1101 10:08:20.033338 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r9l6\" (UniqueName: \"kubernetes.io/projected/62013f6d-6b2b-48c8-8504-9dc7550a5746-kube-api-access-4r9l6\") pod \"kube-proxy-85689\" (UID: \"62013f6d-6b2b-48c8-8504-9dc7550a5746\") " pod="kube-system/kube-proxy-85689" Nov 1 10:08:20.033818 kubelet[2747]: I1101 10:08:20.033398 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62013f6d-6b2b-48c8-8504-9dc7550a5746-lib-modules\") pod \"kube-proxy-85689\" (UID: \"62013f6d-6b2b-48c8-8504-9dc7550a5746\") " pod="kube-system/kube-proxy-85689" Nov 1 10:08:20.213984 systemd[1]: Created slice kubepods-besteffort-podfd5ff9f9_3321_43bc_a44c_68d212e6f57e.slice - libcontainer container kubepods-besteffort-podfd5ff9f9_3321_43bc_a44c_68d212e6f57e.slice. 
Nov 1 10:08:20.234174 kubelet[2747]: I1101 10:08:20.234074 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fd5ff9f9-3321-43bc-a44c-68d212e6f57e-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-m5b5n\" (UID: \"fd5ff9f9-3321-43bc-a44c-68d212e6f57e\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-m5b5n" Nov 1 10:08:20.234174 kubelet[2747]: I1101 10:08:20.234175 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m95pt\" (UniqueName: \"kubernetes.io/projected/fd5ff9f9-3321-43bc-a44c-68d212e6f57e-kube-api-access-m95pt\") pod \"tigera-operator-65cdcdfd6d-m5b5n\" (UID: \"fd5ff9f9-3321-43bc-a44c-68d212e6f57e\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-m5b5n" Nov 1 10:08:20.279078 kubelet[2747]: E1101 10:08:20.279015 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:20.279900 containerd[1601]: time="2025-11-01T10:08:20.279847686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-85689,Uid:62013f6d-6b2b-48c8-8504-9dc7550a5746,Namespace:kube-system,Attempt:0,}" Nov 1 10:08:20.303896 containerd[1601]: time="2025-11-01T10:08:20.303827828Z" level=info msg="connecting to shim 285e5bf16f7ae439ea7b0f0291602360f46ead441fae1747f70a4e14270cd443" address="unix:///run/containerd/s/d8a6789b9c9aa23ee76219726a4b6b190e0f8122641520e9ed586f1ea9f190f4" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:08:20.348897 systemd[1]: Started cri-containerd-285e5bf16f7ae439ea7b0f0291602360f46ead441fae1747f70a4e14270cd443.scope - libcontainer container 285e5bf16f7ae439ea7b0f0291602360f46ead441fae1747f70a4e14270cd443. 
Nov 1 10:08:20.379684 containerd[1601]: time="2025-11-01T10:08:20.379632704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-85689,Uid:62013f6d-6b2b-48c8-8504-9dc7550a5746,Namespace:kube-system,Attempt:0,} returns sandbox id \"285e5bf16f7ae439ea7b0f0291602360f46ead441fae1747f70a4e14270cd443\"" Nov 1 10:08:20.380602 kubelet[2747]: E1101 10:08:20.380572 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:20.386343 containerd[1601]: time="2025-11-01T10:08:20.386303101Z" level=info msg="CreateContainer within sandbox \"285e5bf16f7ae439ea7b0f0291602360f46ead441fae1747f70a4e14270cd443\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 10:08:20.397964 containerd[1601]: time="2025-11-01T10:08:20.397930288Z" level=info msg="Container f503e4c508b2654636b1786c7f5b45ca4cbb78eac968cccc4fdf79c06a014105: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:08:20.407461 containerd[1601]: time="2025-11-01T10:08:20.407415077Z" level=info msg="CreateContainer within sandbox \"285e5bf16f7ae439ea7b0f0291602360f46ead441fae1747f70a4e14270cd443\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f503e4c508b2654636b1786c7f5b45ca4cbb78eac968cccc4fdf79c06a014105\"" Nov 1 10:08:20.407983 containerd[1601]: time="2025-11-01T10:08:20.407943838Z" level=info msg="StartContainer for \"f503e4c508b2654636b1786c7f5b45ca4cbb78eac968cccc4fdf79c06a014105\"" Nov 1 10:08:20.409284 containerd[1601]: time="2025-11-01T10:08:20.409253263Z" level=info msg="connecting to shim f503e4c508b2654636b1786c7f5b45ca4cbb78eac968cccc4fdf79c06a014105" address="unix:///run/containerd/s/d8a6789b9c9aa23ee76219726a4b6b190e0f8122641520e9ed586f1ea9f190f4" protocol=ttrpc version=3 Nov 1 10:08:20.432830 systemd[1]: Started cri-containerd-f503e4c508b2654636b1786c7f5b45ca4cbb78eac968cccc4fdf79c06a014105.scope - libcontainer container 
f503e4c508b2654636b1786c7f5b45ca4cbb78eac968cccc4fdf79c06a014105. Nov 1 10:08:20.483610 containerd[1601]: time="2025-11-01T10:08:20.483460493Z" level=info msg="StartContainer for \"f503e4c508b2654636b1786c7f5b45ca4cbb78eac968cccc4fdf79c06a014105\" returns successfully" Nov 1 10:08:20.522724 containerd[1601]: time="2025-11-01T10:08:20.522409843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-m5b5n,Uid:fd5ff9f9-3321-43bc-a44c-68d212e6f57e,Namespace:tigera-operator,Attempt:0,}" Nov 1 10:08:20.570348 containerd[1601]: time="2025-11-01T10:08:20.570217275Z" level=info msg="connecting to shim 051ce58736c1246b6ca31c580ed13cc9089362337c7e7c743623ce06118c97d0" address="unix:///run/containerd/s/f9cccc95d6c56c4fba748d94c198cfd6e3a4b099b8af1333d978cd80a5e1e6cf" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:08:20.621907 systemd[1]: Started cri-containerd-051ce58736c1246b6ca31c580ed13cc9089362337c7e7c743623ce06118c97d0.scope - libcontainer container 051ce58736c1246b6ca31c580ed13cc9089362337c7e7c743623ce06118c97d0. 
Nov 1 10:08:20.641968 kubelet[2747]: E1101 10:08:20.641931 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:20.642064 kubelet[2747]: E1101 10:08:20.641985 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:20.652835 kubelet[2747]: I1101 10:08:20.652765 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-85689" podStartSLOduration=1.6527477130000001 podStartE2EDuration="1.652747713s" podCreationTimestamp="2025-11-01 10:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:08:20.652299375 +0000 UTC m=+7.130533728" watchObservedRunningTime="2025-11-01 10:08:20.652747713 +0000 UTC m=+7.130982066" Nov 1 10:08:20.677527 containerd[1601]: time="2025-11-01T10:08:20.677487287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-m5b5n,Uid:fd5ff9f9-3321-43bc-a44c-68d212e6f57e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"051ce58736c1246b6ca31c580ed13cc9089362337c7e7c743623ce06118c97d0\"" Nov 1 10:08:20.683664 containerd[1601]: time="2025-11-01T10:08:20.683617090Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 10:08:21.817257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4143473928.mount: Deactivated successfully. 
Nov 1 10:08:22.320887 containerd[1601]: time="2025-11-01T10:08:22.320807230Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:22.321638 containerd[1601]: time="2025-11-01T10:08:22.321593220Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Nov 1 10:08:22.322723 containerd[1601]: time="2025-11-01T10:08:22.322674714Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:22.324614 containerd[1601]: time="2025-11-01T10:08:22.324575149Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:22.325151 containerd[1601]: time="2025-11-01T10:08:22.325103407Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.641439999s" Nov 1 10:08:22.325151 containerd[1601]: time="2025-11-01T10:08:22.325140639Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 10:08:22.331263 containerd[1601]: time="2025-11-01T10:08:22.331223305Z" level=info msg="CreateContainer within sandbox \"051ce58736c1246b6ca31c580ed13cc9089362337c7e7c743623ce06118c97d0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 10:08:22.344715 containerd[1601]: time="2025-11-01T10:08:22.342037698Z" level=info msg="Container 
33e1b7e32ff5d36056cf88716ce93d5b7000cff930d49ad267d1023cc4fda363: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:08:22.350045 containerd[1601]: time="2025-11-01T10:08:22.350001734Z" level=info msg="CreateContainer within sandbox \"051ce58736c1246b6ca31c580ed13cc9089362337c7e7c743623ce06118c97d0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"33e1b7e32ff5d36056cf88716ce93d5b7000cff930d49ad267d1023cc4fda363\"" Nov 1 10:08:22.350549 containerd[1601]: time="2025-11-01T10:08:22.350518128Z" level=info msg="StartContainer for \"33e1b7e32ff5d36056cf88716ce93d5b7000cff930d49ad267d1023cc4fda363\"" Nov 1 10:08:22.351336 containerd[1601]: time="2025-11-01T10:08:22.351310630Z" level=info msg="connecting to shim 33e1b7e32ff5d36056cf88716ce93d5b7000cff930d49ad267d1023cc4fda363" address="unix:///run/containerd/s/f9cccc95d6c56c4fba748d94c198cfd6e3a4b099b8af1333d978cd80a5e1e6cf" protocol=ttrpc version=3 Nov 1 10:08:22.371876 systemd[1]: Started cri-containerd-33e1b7e32ff5d36056cf88716ce93d5b7000cff930d49ad267d1023cc4fda363.scope - libcontainer container 33e1b7e32ff5d36056cf88716ce93d5b7000cff930d49ad267d1023cc4fda363. 
Nov 1 10:08:22.405558 containerd[1601]: time="2025-11-01T10:08:22.405520162Z" level=info msg="StartContainer for \"33e1b7e32ff5d36056cf88716ce93d5b7000cff930d49ad267d1023cc4fda363\" returns successfully" Nov 1 10:08:22.656309 kubelet[2747]: I1101 10:08:22.656054 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-m5b5n" podStartSLOduration=1.007654699 podStartE2EDuration="2.65603906s" podCreationTimestamp="2025-11-01 10:08:20 +0000 UTC" firstStartedPulling="2025-11-01 10:08:20.678769469 +0000 UTC m=+7.157003822" lastFinishedPulling="2025-11-01 10:08:22.32715383 +0000 UTC m=+8.805388183" observedRunningTime="2025-11-01 10:08:22.655939119 +0000 UTC m=+9.134173472" watchObservedRunningTime="2025-11-01 10:08:22.65603906 +0000 UTC m=+9.134273413" Nov 1 10:08:24.274938 kubelet[2747]: E1101 10:08:24.274894 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:26.986735 kubelet[2747]: E1101 10:08:26.986011 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:27.482825 sudo[1820]: pam_unix(sudo:session): session closed for user root Nov 1 10:08:27.485726 sshd[1819]: Connection closed by 10.0.0.1 port 53498 Nov 1 10:08:27.485429 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Nov 1 10:08:27.490949 systemd[1]: sshd@6-10.0.0.91:22-10.0.0.1:53498.service: Deactivated successfully. Nov 1 10:08:27.495268 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 10:08:27.495709 systemd[1]: session-7.scope: Consumed 6.136s CPU time, 218.1M memory peak. Nov 1 10:08:27.497766 systemd-logind[1585]: Session 7 logged out. Waiting for processes to exit. Nov 1 10:08:27.500086 systemd-logind[1585]: Removed session 7. 
Nov 1 10:08:31.679877 systemd[1]: Created slice kubepods-besteffort-pod4d4f9edf_d029_490b_9137_85c96c27297b.slice - libcontainer container kubepods-besteffort-pod4d4f9edf_d029_490b_9137_85c96c27297b.slice. Nov 1 10:08:31.706044 kubelet[2747]: I1101 10:08:31.705969 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4d4f9edf-d029-490b-9137-85c96c27297b-typha-certs\") pod \"calico-typha-6987d68cb7-g9j4r\" (UID: \"4d4f9edf-d029-490b-9137-85c96c27297b\") " pod="calico-system/calico-typha-6987d68cb7-g9j4r" Nov 1 10:08:31.706044 kubelet[2747]: I1101 10:08:31.706022 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42lvf\" (UniqueName: \"kubernetes.io/projected/4d4f9edf-d029-490b-9137-85c96c27297b-kube-api-access-42lvf\") pod \"calico-typha-6987d68cb7-g9j4r\" (UID: \"4d4f9edf-d029-490b-9137-85c96c27297b\") " pod="calico-system/calico-typha-6987d68cb7-g9j4r" Nov 1 10:08:31.706044 kubelet[2747]: I1101 10:08:31.706040 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f9edf-d029-490b-9137-85c96c27297b-tigera-ca-bundle\") pod \"calico-typha-6987d68cb7-g9j4r\" (UID: \"4d4f9edf-d029-490b-9137-85c96c27297b\") " pod="calico-system/calico-typha-6987d68cb7-g9j4r" Nov 1 10:08:31.850798 systemd[1]: Created slice kubepods-besteffort-pod65dff059_71fa_43c7_9aff_12db5f54981a.slice - libcontainer container kubepods-besteffort-pod65dff059_71fa_43c7_9aff_12db5f54981a.slice. 
Nov 1 10:08:31.907548 kubelet[2747]: I1101 10:08:31.907462 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/65dff059-71fa-43c7-9aff-12db5f54981a-policysync\") pod \"calico-node-lqjg4\" (UID: \"65dff059-71fa-43c7-9aff-12db5f54981a\") " pod="calico-system/calico-node-lqjg4" Nov 1 10:08:31.907548 kubelet[2747]: I1101 10:08:31.907536 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/65dff059-71fa-43c7-9aff-12db5f54981a-cni-net-dir\") pod \"calico-node-lqjg4\" (UID: \"65dff059-71fa-43c7-9aff-12db5f54981a\") " pod="calico-system/calico-node-lqjg4" Nov 1 10:08:31.907800 kubelet[2747]: I1101 10:08:31.907568 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/65dff059-71fa-43c7-9aff-12db5f54981a-node-certs\") pod \"calico-node-lqjg4\" (UID: \"65dff059-71fa-43c7-9aff-12db5f54981a\") " pod="calico-system/calico-node-lqjg4" Nov 1 10:08:31.907800 kubelet[2747]: I1101 10:08:31.907600 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/65dff059-71fa-43c7-9aff-12db5f54981a-cni-log-dir\") pod \"calico-node-lqjg4\" (UID: \"65dff059-71fa-43c7-9aff-12db5f54981a\") " pod="calico-system/calico-node-lqjg4" Nov 1 10:08:31.907800 kubelet[2747]: I1101 10:08:31.907719 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/65dff059-71fa-43c7-9aff-12db5f54981a-var-run-calico\") pod \"calico-node-lqjg4\" (UID: \"65dff059-71fa-43c7-9aff-12db5f54981a\") " pod="calico-system/calico-node-lqjg4" Nov 1 10:08:31.907800 kubelet[2747]: I1101 10:08:31.907748 2747 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65dff059-71fa-43c7-9aff-12db5f54981a-xtables-lock\") pod \"calico-node-lqjg4\" (UID: \"65dff059-71fa-43c7-9aff-12db5f54981a\") " pod="calico-system/calico-node-lqjg4" Nov 1 10:08:31.907800 kubelet[2747]: I1101 10:08:31.907769 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47jv8\" (UniqueName: \"kubernetes.io/projected/65dff059-71fa-43c7-9aff-12db5f54981a-kube-api-access-47jv8\") pod \"calico-node-lqjg4\" (UID: \"65dff059-71fa-43c7-9aff-12db5f54981a\") " pod="calico-system/calico-node-lqjg4" Nov 1 10:08:31.907923 kubelet[2747]: I1101 10:08:31.907791 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/65dff059-71fa-43c7-9aff-12db5f54981a-cni-bin-dir\") pod \"calico-node-lqjg4\" (UID: \"65dff059-71fa-43c7-9aff-12db5f54981a\") " pod="calico-system/calico-node-lqjg4" Nov 1 10:08:31.907923 kubelet[2747]: I1101 10:08:31.907813 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/65dff059-71fa-43c7-9aff-12db5f54981a-flexvol-driver-host\") pod \"calico-node-lqjg4\" (UID: \"65dff059-71fa-43c7-9aff-12db5f54981a\") " pod="calico-system/calico-node-lqjg4" Nov 1 10:08:31.907923 kubelet[2747]: I1101 10:08:31.907842 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65dff059-71fa-43c7-9aff-12db5f54981a-lib-modules\") pod \"calico-node-lqjg4\" (UID: \"65dff059-71fa-43c7-9aff-12db5f54981a\") " pod="calico-system/calico-node-lqjg4" Nov 1 10:08:31.907923 kubelet[2747]: I1101 10:08:31.907881 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/65dff059-71fa-43c7-9aff-12db5f54981a-var-lib-calico\") pod \"calico-node-lqjg4\" (UID: \"65dff059-71fa-43c7-9aff-12db5f54981a\") " pod="calico-system/calico-node-lqjg4" Nov 1 10:08:31.907923 kubelet[2747]: I1101 10:08:31.907902 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65dff059-71fa-43c7-9aff-12db5f54981a-tigera-ca-bundle\") pod \"calico-node-lqjg4\" (UID: \"65dff059-71fa-43c7-9aff-12db5f54981a\") " pod="calico-system/calico-node-lqjg4" Nov 1 10:08:31.989468 kubelet[2747]: E1101 10:08:31.988895 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:31.989947 containerd[1601]: time="2025-11-01T10:08:31.989889041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6987d68cb7-g9j4r,Uid:4d4f9edf-d029-490b-9137-85c96c27297b,Namespace:calico-system,Attempt:0,}" Nov 1 10:08:32.012466 kubelet[2747]: E1101 10:08:32.012341 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.012466 kubelet[2747]: W1101 10:08:32.012444 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.012466 kubelet[2747]: E1101 10:08:32.012471 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.017487 kubelet[2747]: E1101 10:08:32.017457 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.017609 kubelet[2747]: W1101 10:08:32.017560 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.017609 kubelet[2747]: E1101 10:08:32.017584 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.025755 kubelet[2747]: E1101 10:08:32.025669 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.025755 kubelet[2747]: W1101 10:08:32.025685 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.025755 kubelet[2747]: E1101 10:08:32.025720 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.026814 containerd[1601]: time="2025-11-01T10:08:32.026773065Z" level=info msg="connecting to shim e0f257531ac86d1275b24f8854a1850ed2f325470d0fed6be523e01337f43ed3" address="unix:///run/containerd/s/f38134ddf0d0f18521a766ca7e782ced7631a99a24b2fc343d14c49d3bba96e9" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:08:32.049903 kubelet[2747]: E1101 10:08:32.049850 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wgv7h" podUID="2d70db43-7a2b-4384-9018-e7385784e621" Nov 1 10:08:32.075846 systemd[1]: Started cri-containerd-e0f257531ac86d1275b24f8854a1850ed2f325470d0fed6be523e01337f43ed3.scope - libcontainer container e0f257531ac86d1275b24f8854a1850ed2f325470d0fed6be523e01337f43ed3. Nov 1 10:08:32.097556 kubelet[2747]: E1101 10:08:32.097421 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.097556 kubelet[2747]: W1101 10:08:32.097444 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.097556 kubelet[2747]: E1101 10:08:32.097464 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.098047 kubelet[2747]: E1101 10:08:32.097773 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.098047 kubelet[2747]: W1101 10:08:32.097781 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.098047 kubelet[2747]: E1101 10:08:32.097790 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.098179 kubelet[2747]: E1101 10:08:32.098166 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.098232 kubelet[2747]: W1101 10:08:32.098222 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.098296 kubelet[2747]: E1101 10:08:32.098285 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.111118 kubelet[2747]: E1101 10:08:32.111102 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.111219 kubelet[2747]: W1101 10:08:32.111186 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.111219 kubelet[2747]: E1101 10:08:32.111199 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.111576 kubelet[2747]: E1101 10:08:32.111564 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.111725 kubelet[2747]: W1101 10:08:32.111631 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.111725 kubelet[2747]: E1101 10:08:32.111644 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.111976 kubelet[2747]: E1101 10:08:32.111964 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.112225 kubelet[2747]: W1101 10:08:32.112211 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.112349 kubelet[2747]: E1101 10:08:32.112337 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.113284 kubelet[2747]: E1101 10:08:32.113215 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.113284 kubelet[2747]: W1101 10:08:32.113227 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.113284 kubelet[2747]: E1101 10:08:32.113237 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.113665 kubelet[2747]: E1101 10:08:32.113630 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.113665 kubelet[2747]: W1101 10:08:32.113641 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.113665 kubelet[2747]: E1101 10:08:32.113650 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.114200 kubelet[2747]: E1101 10:08:32.114102 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.114200 kubelet[2747]: W1101 10:08:32.114112 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.114200 kubelet[2747]: E1101 10:08:32.114120 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.116322 kubelet[2747]: E1101 10:08:32.116282 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.116493 kubelet[2747]: W1101 10:08:32.116479 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.116554 kubelet[2747]: E1101 10:08:32.116543 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.117985 kubelet[2747]: E1101 10:08:32.117972 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.118122 kubelet[2747]: W1101 10:08:32.118016 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.118122 kubelet[2747]: E1101 10:08:32.118027 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.118456 kubelet[2747]: E1101 10:08:32.118409 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.118456 kubelet[2747]: W1101 10:08:32.118420 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.118456 kubelet[2747]: E1101 10:08:32.118429 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.118889 kubelet[2747]: E1101 10:08:32.118866 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.119061 kubelet[2747]: W1101 10:08:32.119020 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.119294 kubelet[2747]: E1101 10:08:32.119159 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.119563 kubelet[2747]: E1101 10:08:32.119524 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.119563 kubelet[2747]: W1101 10:08:32.119534 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.119675 kubelet[2747]: E1101 10:08:32.119635 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.119932 kubelet[2747]: E1101 10:08:32.119878 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.119932 kubelet[2747]: W1101 10:08:32.119888 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.119932 kubelet[2747]: E1101 10:08:32.119896 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.120213 kubelet[2747]: E1101 10:08:32.120179 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.120213 kubelet[2747]: W1101 10:08:32.120190 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.120213 kubelet[2747]: E1101 10:08:32.120198 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.120591 kubelet[2747]: E1101 10:08:32.120539 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.120591 kubelet[2747]: W1101 10:08:32.120550 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.120591 kubelet[2747]: E1101 10:08:32.120558 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.121887 kubelet[2747]: E1101 10:08:32.121777 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.121887 kubelet[2747]: W1101 10:08:32.121790 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.121887 kubelet[2747]: E1101 10:08:32.121799 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.122136 kubelet[2747]: E1101 10:08:32.122039 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.122136 kubelet[2747]: W1101 10:08:32.122049 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.122136 kubelet[2747]: E1101 10:08:32.122058 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.122588 kubelet[2747]: E1101 10:08:32.122341 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.122588 kubelet[2747]: W1101 10:08:32.122353 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.122588 kubelet[2747]: E1101 10:08:32.122361 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.122823 kubelet[2747]: E1101 10:08:32.122787 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.122823 kubelet[2747]: W1101 10:08:32.122818 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.122885 kubelet[2747]: E1101 10:08:32.122845 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.123389 kubelet[2747]: I1101 10:08:32.123362 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2d70db43-7a2b-4384-9018-e7385784e621-registration-dir\") pod \"csi-node-driver-wgv7h\" (UID: \"2d70db43-7a2b-4384-9018-e7385784e621\") " pod="calico-system/csi-node-driver-wgv7h" Nov 1 10:08:32.123679 kubelet[2747]: E1101 10:08:32.123656 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.123679 kubelet[2747]: W1101 10:08:32.123673 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.123761 kubelet[2747]: E1101 10:08:32.123682 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.123761 kubelet[2747]: I1101 10:08:32.123716 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d70db43-7a2b-4384-9018-e7385784e621-kubelet-dir\") pod \"csi-node-driver-wgv7h\" (UID: \"2d70db43-7a2b-4384-9018-e7385784e621\") " pod="calico-system/csi-node-driver-wgv7h" Nov 1 10:08:32.123946 kubelet[2747]: E1101 10:08:32.123923 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.123946 kubelet[2747]: W1101 10:08:32.123939 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.123946 kubelet[2747]: E1101 10:08:32.123947 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.124026 kubelet[2747]: I1101 10:08:32.123974 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2d70db43-7a2b-4384-9018-e7385784e621-varrun\") pod \"csi-node-driver-wgv7h\" (UID: \"2d70db43-7a2b-4384-9018-e7385784e621\") " pod="calico-system/csi-node-driver-wgv7h" Nov 1 10:08:32.124805 kubelet[2747]: E1101 10:08:32.124783 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.124805 kubelet[2747]: W1101 10:08:32.124802 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.124888 kubelet[2747]: E1101 10:08:32.124811 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.124888 kubelet[2747]: I1101 10:08:32.124837 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2d70db43-7a2b-4384-9018-e7385784e621-socket-dir\") pod \"csi-node-driver-wgv7h\" (UID: \"2d70db43-7a2b-4384-9018-e7385784e621\") " pod="calico-system/csi-node-driver-wgv7h" Nov 1 10:08:32.125596 kubelet[2747]: E1101 10:08:32.125552 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.125596 kubelet[2747]: W1101 10:08:32.125572 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.125596 kubelet[2747]: E1101 10:08:32.125581 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.125910 kubelet[2747]: I1101 10:08:32.125863 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdjj2\" (UniqueName: \"kubernetes.io/projected/2d70db43-7a2b-4384-9018-e7385784e621-kube-api-access-wdjj2\") pod \"csi-node-driver-wgv7h\" (UID: \"2d70db43-7a2b-4384-9018-e7385784e621\") " pod="calico-system/csi-node-driver-wgv7h" Nov 1 10:08:32.126213 kubelet[2747]: E1101 10:08:32.126194 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.126213 kubelet[2747]: W1101 10:08:32.126209 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.126272 kubelet[2747]: E1101 10:08:32.126218 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.126519 kubelet[2747]: E1101 10:08:32.126471 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.126519 kubelet[2747]: W1101 10:08:32.126485 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.126519 kubelet[2747]: E1101 10:08:32.126493 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.126764 kubelet[2747]: E1101 10:08:32.126741 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.126764 kubelet[2747]: W1101 10:08:32.126755 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.126764 kubelet[2747]: E1101 10:08:32.126763 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.127657 kubelet[2747]: E1101 10:08:32.127613 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.127987 kubelet[2747]: W1101 10:08:32.127950 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.128044 kubelet[2747]: E1101 10:08:32.127994 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.129861 kubelet[2747]: E1101 10:08:32.129830 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.129861 kubelet[2747]: W1101 10:08:32.129845 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.129861 kubelet[2747]: E1101 10:08:32.129856 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.131073 kubelet[2747]: E1101 10:08:32.131045 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.131073 kubelet[2747]: W1101 10:08:32.131065 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.131073 kubelet[2747]: E1101 10:08:32.131075 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.131746 kubelet[2747]: E1101 10:08:32.131725 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.131881 kubelet[2747]: W1101 10:08:32.131740 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.131881 kubelet[2747]: E1101 10:08:32.131867 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.132471 kubelet[2747]: E1101 10:08:32.132447 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.132471 kubelet[2747]: W1101 10:08:32.132467 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.132639 kubelet[2747]: E1101 10:08:32.132481 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.134857 kubelet[2747]: E1101 10:08:32.134823 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.134857 kubelet[2747]: W1101 10:08:32.134848 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.134919 kubelet[2747]: E1101 10:08:32.134862 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.135113 kubelet[2747]: E1101 10:08:32.135092 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.135113 kubelet[2747]: W1101 10:08:32.135106 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.135187 kubelet[2747]: E1101 10:08:32.135117 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.163656 kubelet[2747]: E1101 10:08:32.163352 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:32.165081 containerd[1601]: time="2025-11-01T10:08:32.165015224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lqjg4,Uid:65dff059-71fa-43c7-9aff-12db5f54981a,Namespace:calico-system,Attempt:0,}" Nov 1 10:08:32.166195 containerd[1601]: time="2025-11-01T10:08:32.166157446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6987d68cb7-g9j4r,Uid:4d4f9edf-d029-490b-9137-85c96c27297b,Namespace:calico-system,Attempt:0,} returns sandbox id \"e0f257531ac86d1275b24f8854a1850ed2f325470d0fed6be523e01337f43ed3\"" Nov 1 10:08:32.167184 kubelet[2747]: E1101 10:08:32.167160 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:32.169876 containerd[1601]: time="2025-11-01T10:08:32.169843492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 10:08:32.193597 containerd[1601]: time="2025-11-01T10:08:32.193540026Z" level=info msg="connecting to shim 397c5b388f95301f06abd86ad904b21f2cb6c7a4a5dee2bf5ee952073fc11a22" address="unix:///run/containerd/s/05549f62ea6047eba3ec0b072703ce2c83a0c48b93112bc1d2ed15074712c3a2" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:08:32.225857 systemd[1]: Started cri-containerd-397c5b388f95301f06abd86ad904b21f2cb6c7a4a5dee2bf5ee952073fc11a22.scope - libcontainer container 397c5b388f95301f06abd86ad904b21f2cb6c7a4a5dee2bf5ee952073fc11a22. 
Nov 1 10:08:32.227306 kubelet[2747]: E1101 10:08:32.227281 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.227306 kubelet[2747]: W1101 10:08:32.227301 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.227442 kubelet[2747]: E1101 10:08:32.227318 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.227587 kubelet[2747]: E1101 10:08:32.227568 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.227587 kubelet[2747]: W1101 10:08:32.227580 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.227587 kubelet[2747]: E1101 10:08:32.227588 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.228000 kubelet[2747]: E1101 10:08:32.227973 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.228000 kubelet[2747]: W1101 10:08:32.227986 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.228000 kubelet[2747]: E1101 10:08:32.227995 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.228276 kubelet[2747]: E1101 10:08:32.228257 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.228276 kubelet[2747]: W1101 10:08:32.228270 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.228276 kubelet[2747]: E1101 10:08:32.228279 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.228558 kubelet[2747]: E1101 10:08:32.228541 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.228558 kubelet[2747]: W1101 10:08:32.228553 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.228634 kubelet[2747]: E1101 10:08:32.228562 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.228845 kubelet[2747]: E1101 10:08:32.228824 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.228845 kubelet[2747]: W1101 10:08:32.228836 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.228845 kubelet[2747]: E1101 10:08:32.228846 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.229218 kubelet[2747]: E1101 10:08:32.229196 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.229218 kubelet[2747]: W1101 10:08:32.229209 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.229218 kubelet[2747]: E1101 10:08:32.229218 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.229505 kubelet[2747]: E1101 10:08:32.229485 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.229505 kubelet[2747]: W1101 10:08:32.229497 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.229505 kubelet[2747]: E1101 10:08:32.229506 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.229852 kubelet[2747]: E1101 10:08:32.229834 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.229852 kubelet[2747]: W1101 10:08:32.229849 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.229852 kubelet[2747]: E1101 10:08:32.229858 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.230164 kubelet[2747]: E1101 10:08:32.230147 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.230164 kubelet[2747]: W1101 10:08:32.230159 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.230244 kubelet[2747]: E1101 10:08:32.230168 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.230570 kubelet[2747]: E1101 10:08:32.230552 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.230570 kubelet[2747]: W1101 10:08:32.230565 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.230650 kubelet[2747]: E1101 10:08:32.230574 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.231463 kubelet[2747]: E1101 10:08:32.231068 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.231463 kubelet[2747]: W1101 10:08:32.231108 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.231463 kubelet[2747]: E1101 10:08:32.231118 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.231463 kubelet[2747]: E1101 10:08:32.231394 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.231463 kubelet[2747]: W1101 10:08:32.231403 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.231463 kubelet[2747]: E1101 10:08:32.231411 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.231796 kubelet[2747]: E1101 10:08:32.231684 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.231796 kubelet[2747]: W1101 10:08:32.231719 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.231796 kubelet[2747]: E1101 10:08:32.231727 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.231980 kubelet[2747]: E1101 10:08:32.231955 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.231980 kubelet[2747]: W1101 10:08:32.231971 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.231980 kubelet[2747]: E1101 10:08:32.231979 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.232254 kubelet[2747]: E1101 10:08:32.232223 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.232254 kubelet[2747]: W1101 10:08:32.232247 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.232254 kubelet[2747]: E1101 10:08:32.232256 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.232582 kubelet[2747]: E1101 10:08:32.232562 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.232582 kubelet[2747]: W1101 10:08:32.232576 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.232657 kubelet[2747]: E1101 10:08:32.232585 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.232908 kubelet[2747]: E1101 10:08:32.232889 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.232908 kubelet[2747]: W1101 10:08:32.232903 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.232980 kubelet[2747]: E1101 10:08:32.232912 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.233129 kubelet[2747]: E1101 10:08:32.233109 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.233129 kubelet[2747]: W1101 10:08:32.233123 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.233129 kubelet[2747]: E1101 10:08:32.233131 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.233382 kubelet[2747]: E1101 10:08:32.233360 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.233382 kubelet[2747]: W1101 10:08:32.233376 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.233382 kubelet[2747]: E1101 10:08:32.233385 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.233649 kubelet[2747]: E1101 10:08:32.233628 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.233649 kubelet[2747]: W1101 10:08:32.233642 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.233649 kubelet[2747]: E1101 10:08:32.233650 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.233913 kubelet[2747]: E1101 10:08:32.233892 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.233913 kubelet[2747]: W1101 10:08:32.233905 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.233913 kubelet[2747]: E1101 10:08:32.233913 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.234176 kubelet[2747]: E1101 10:08:32.234147 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.234176 kubelet[2747]: W1101 10:08:32.234159 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.234176 kubelet[2747]: E1101 10:08:32.234167 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.234496 kubelet[2747]: E1101 10:08:32.234463 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.234496 kubelet[2747]: W1101 10:08:32.234486 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.234496 kubelet[2747]: E1101 10:08:32.234494 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.235031 kubelet[2747]: E1101 10:08:32.234781 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.235031 kubelet[2747]: W1101 10:08:32.234794 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.235031 kubelet[2747]: E1101 10:08:32.234804 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:32.242853 kubelet[2747]: E1101 10:08:32.242752 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:32.242853 kubelet[2747]: W1101 10:08:32.242772 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:32.242853 kubelet[2747]: E1101 10:08:32.242791 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:32.260569 containerd[1601]: time="2025-11-01T10:08:32.260520786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lqjg4,Uid:65dff059-71fa-43c7-9aff-12db5f54981a,Namespace:calico-system,Attempt:0,} returns sandbox id \"397c5b388f95301f06abd86ad904b21f2cb6c7a4a5dee2bf5ee952073fc11a22\"" Nov 1 10:08:32.261492 kubelet[2747]: E1101 10:08:32.261459 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:32.407017 update_engine[1590]: I20251101 10:08:32.406901 1590 update_attempter.cc:509] Updating boot flags... Nov 1 10:08:33.482735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount879067560.mount: Deactivated successfully. 
Nov 1 10:08:33.614462 kubelet[2747]: E1101 10:08:33.614384 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wgv7h" podUID="2d70db43-7a2b-4384-9018-e7385784e621" Nov 1 10:08:34.381440 containerd[1601]: time="2025-11-01T10:08:34.381375667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:34.382271 containerd[1601]: time="2025-11-01T10:08:34.382217239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Nov 1 10:08:34.383385 containerd[1601]: time="2025-11-01T10:08:34.383345912Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:34.385399 containerd[1601]: time="2025-11-01T10:08:34.385362945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:34.385957 containerd[1601]: time="2025-11-01T10:08:34.385916230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.216034628s" Nov 1 10:08:34.386002 containerd[1601]: time="2025-11-01T10:08:34.385955605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 10:08:34.393232 containerd[1601]: time="2025-11-01T10:08:34.393205911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 10:08:34.413921 containerd[1601]: time="2025-11-01T10:08:34.413869694Z" level=info msg="CreateContainer within sandbox \"e0f257531ac86d1275b24f8854a1850ed2f325470d0fed6be523e01337f43ed3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 10:08:34.421291 containerd[1601]: time="2025-11-01T10:08:34.421237072Z" level=info msg="Container d0ac65cb6af1d57a441b6910c6c08c806bea8e72ce0c0cfd179c04c987b6bd2e: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:08:34.429248 containerd[1601]: time="2025-11-01T10:08:34.429206938Z" level=info msg="CreateContainer within sandbox \"e0f257531ac86d1275b24f8854a1850ed2f325470d0fed6be523e01337f43ed3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d0ac65cb6af1d57a441b6910c6c08c806bea8e72ce0c0cfd179c04c987b6bd2e\"" Nov 1 10:08:34.429768 containerd[1601]: time="2025-11-01T10:08:34.429739064Z" level=info msg="StartContainer for \"d0ac65cb6af1d57a441b6910c6c08c806bea8e72ce0c0cfd179c04c987b6bd2e\"" Nov 1 10:08:34.430822 containerd[1601]: time="2025-11-01T10:08:34.430772106Z" level=info msg="connecting to shim d0ac65cb6af1d57a441b6910c6c08c806bea8e72ce0c0cfd179c04c987b6bd2e" address="unix:///run/containerd/s/f38134ddf0d0f18521a766ca7e782ced7631a99a24b2fc343d14c49d3bba96e9" protocol=ttrpc version=3 Nov 1 10:08:34.453839 systemd[1]: Started cri-containerd-d0ac65cb6af1d57a441b6910c6c08c806bea8e72ce0c0cfd179c04c987b6bd2e.scope - libcontainer container d0ac65cb6af1d57a441b6910c6c08c806bea8e72ce0c0cfd179c04c987b6bd2e. 
Nov 1 10:08:34.509815 containerd[1601]: time="2025-11-01T10:08:34.509745779Z" level=info msg="StartContainer for \"d0ac65cb6af1d57a441b6910c6c08c806bea8e72ce0c0cfd179c04c987b6bd2e\" returns successfully" Nov 1 10:08:34.686302 kubelet[2747]: E1101 10:08:34.685865 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:34.739359 kubelet[2747]: E1101 10:08:34.739267 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.739359 kubelet[2747]: W1101 10:08:34.739312 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.739359 kubelet[2747]: E1101 10:08:34.739333 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:34.739912 kubelet[2747]: E1101 10:08:34.739879 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.739912 kubelet[2747]: W1101 10:08:34.739894 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.739912 kubelet[2747]: E1101 10:08:34.739903 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:34.740139 kubelet[2747]: E1101 10:08:34.740055 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.740139 kubelet[2747]: W1101 10:08:34.740063 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.740139 kubelet[2747]: E1101 10:08:34.740070 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:34.740139 kubelet[2747]: E1101 10:08:34.740258 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.740139 kubelet[2747]: W1101 10:08:34.740278 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.740139 kubelet[2747]: E1101 10:08:34.740285 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:34.740139 kubelet[2747]: E1101 10:08:34.740442 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.740139 kubelet[2747]: W1101 10:08:34.740449 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.740139 kubelet[2747]: E1101 10:08:34.740457 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:34.740882 kubelet[2747]: E1101 10:08:34.740743 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.740882 kubelet[2747]: W1101 10:08:34.740752 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.740882 kubelet[2747]: E1101 10:08:34.740761 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:34.740953 kubelet[2747]: E1101 10:08:34.740926 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.740953 kubelet[2747]: W1101 10:08:34.740934 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.740953 kubelet[2747]: E1101 10:08:34.740942 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:34.741357 kubelet[2747]: E1101 10:08:34.741321 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.741357 kubelet[2747]: W1101 10:08:34.741336 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.741357 kubelet[2747]: E1101 10:08:34.741345 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:34.741626 kubelet[2747]: E1101 10:08:34.741597 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.741626 kubelet[2747]: W1101 10:08:34.741609 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.741626 kubelet[2747]: E1101 10:08:34.741617 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:34.742067 kubelet[2747]: E1101 10:08:34.741812 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.742067 kubelet[2747]: W1101 10:08:34.742064 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.742142 kubelet[2747]: E1101 10:08:34.742075 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:34.742670 kubelet[2747]: E1101 10:08:34.742652 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.742670 kubelet[2747]: W1101 10:08:34.742665 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.742758 kubelet[2747]: E1101 10:08:34.742675 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:34.742963 kubelet[2747]: E1101 10:08:34.742920 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.742963 kubelet[2747]: W1101 10:08:34.742954 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.742963 kubelet[2747]: E1101 10:08:34.742965 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:34.743743 kubelet[2747]: E1101 10:08:34.743205 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.743743 kubelet[2747]: W1101 10:08:34.743215 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.743743 kubelet[2747]: E1101 10:08:34.743224 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:34.743743 kubelet[2747]: E1101 10:08:34.743406 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.743743 kubelet[2747]: W1101 10:08:34.743414 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.743743 kubelet[2747]: E1101 10:08:34.743422 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:34.743743 kubelet[2747]: E1101 10:08:34.743653 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.743743 kubelet[2747]: W1101 10:08:34.743680 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.743743 kubelet[2747]: E1101 10:08:34.743719 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:34.744790 kubelet[2747]: E1101 10:08:34.744682 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.744790 kubelet[2747]: W1101 10:08:34.744735 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.744790 kubelet[2747]: E1101 10:08:34.744747 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:34.746038 kubelet[2747]: E1101 10:08:34.746021 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.746038 kubelet[2747]: W1101 10:08:34.746033 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.746115 kubelet[2747]: E1101 10:08:34.746042 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:34.746270 kubelet[2747]: E1101 10:08:34.746258 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.746270 kubelet[2747]: W1101 10:08:34.746267 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.746335 kubelet[2747]: E1101 10:08:34.746275 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:34.746527 kubelet[2747]: E1101 10:08:34.746514 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.746527 kubelet[2747]: W1101 10:08:34.746524 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.746584 kubelet[2747]: E1101 10:08:34.746533 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:34.746780 kubelet[2747]: E1101 10:08:34.746766 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.746780 kubelet[2747]: W1101 10:08:34.746777 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.746862 kubelet[2747]: E1101 10:08:34.746785 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:34.747030 kubelet[2747]: E1101 10:08:34.747014 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.747030 kubelet[2747]: W1101 10:08:34.747026 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.747115 kubelet[2747]: E1101 10:08:34.747036 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:34.747315 kubelet[2747]: E1101 10:08:34.747284 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.747315 kubelet[2747]: W1101 10:08:34.747303 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.747315 kubelet[2747]: E1101 10:08:34.747312 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:34.747647 kubelet[2747]: E1101 10:08:34.747637 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.747647 kubelet[2747]: W1101 10:08:34.747645 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.747724 kubelet[2747]: E1101 10:08:34.747653 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:34.747872 kubelet[2747]: E1101 10:08:34.747861 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.747943 kubelet[2747]: W1101 10:08:34.747873 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.747943 kubelet[2747]: E1101 10:08:34.747882 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:34.748087 kubelet[2747]: E1101 10:08:34.748073 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.748087 kubelet[2747]: W1101 10:08:34.748082 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.748156 kubelet[2747]: E1101 10:08:34.748090 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:34.748321 kubelet[2747]: E1101 10:08:34.748290 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.748321 kubelet[2747]: W1101 10:08:34.748315 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.748396 kubelet[2747]: E1101 10:08:34.748330 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:34.748525 kubelet[2747]: E1101 10:08:34.748513 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.748525 kubelet[2747]: W1101 10:08:34.748522 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.748596 kubelet[2747]: E1101 10:08:34.748530 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:34.748789 kubelet[2747]: E1101 10:08:34.748772 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.748789 kubelet[2747]: W1101 10:08:34.748784 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.748855 kubelet[2747]: E1101 10:08:34.748793 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:34.749239 kubelet[2747]: E1101 10:08:34.749172 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.749239 kubelet[2747]: W1101 10:08:34.749197 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.749239 kubelet[2747]: E1101 10:08:34.749210 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:34.749442 kubelet[2747]: E1101 10:08:34.749420 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.749442 kubelet[2747]: W1101 10:08:34.749434 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.749442 kubelet[2747]: E1101 10:08:34.749443 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:34.749745 kubelet[2747]: E1101 10:08:34.749719 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.749745 kubelet[2747]: W1101 10:08:34.749735 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.749745 kubelet[2747]: E1101 10:08:34.749744 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:34.750291 kubelet[2747]: E1101 10:08:34.750043 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.750291 kubelet[2747]: W1101 10:08:34.750057 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.750291 kubelet[2747]: E1101 10:08:34.750069 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:08:34.750447 kubelet[2747]: E1101 10:08:34.750410 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:08:34.750447 kubelet[2747]: W1101 10:08:34.750424 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:08:34.750447 kubelet[2747]: E1101 10:08:34.750433 2747 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:08:35.604171 containerd[1601]: time="2025-11-01T10:08:35.604108354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:35.605006 containerd[1601]: time="2025-11-01T10:08:35.604958601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Nov 1 10:08:35.606299 containerd[1601]: time="2025-11-01T10:08:35.606255100Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:35.608461 containerd[1601]: time="2025-11-01T10:08:35.608426653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:35.609109 containerd[1601]: time="2025-11-01T10:08:35.609065701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.21582788s" Nov 1 10:08:35.609109 containerd[1601]: time="2025-11-01T10:08:35.609098673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 10:08:35.617167 containerd[1601]: time="2025-11-01T10:08:35.617117655Z" level=info msg="CreateContainer within sandbox \"397c5b388f95301f06abd86ad904b21f2cb6c7a4a5dee2bf5ee952073fc11a22\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 10:08:35.617981 kubelet[2747]: E1101 10:08:35.617926 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wgv7h" podUID="2d70db43-7a2b-4384-9018-e7385784e621" Nov 1 10:08:35.628650 containerd[1601]: time="2025-11-01T10:08:35.628595191Z" level=info msg="Container 582268defad078a02e67e7603b9c0598bb6ea0e1dbe38b101a3f2920dfdd974b: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:08:35.637630 containerd[1601]: time="2025-11-01T10:08:35.637583244Z" level=info msg="CreateContainer within sandbox \"397c5b388f95301f06abd86ad904b21f2cb6c7a4a5dee2bf5ee952073fc11a22\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"582268defad078a02e67e7603b9c0598bb6ea0e1dbe38b101a3f2920dfdd974b\"" Nov 1 10:08:35.638061 containerd[1601]: time="2025-11-01T10:08:35.638024217Z" level=info msg="StartContainer for \"582268defad078a02e67e7603b9c0598bb6ea0e1dbe38b101a3f2920dfdd974b\"" Nov 1 10:08:35.639380 containerd[1601]: time="2025-11-01T10:08:35.639329563Z" level=info msg="connecting to shim 
582268defad078a02e67e7603b9c0598bb6ea0e1dbe38b101a3f2920dfdd974b" address="unix:///run/containerd/s/05549f62ea6047eba3ec0b072703ce2c83a0c48b93112bc1d2ed15074712c3a2" protocol=ttrpc version=3 Nov 1 10:08:35.662821 systemd[1]: Started cri-containerd-582268defad078a02e67e7603b9c0598bb6ea0e1dbe38b101a3f2920dfdd974b.scope - libcontainer container 582268defad078a02e67e7603b9c0598bb6ea0e1dbe38b101a3f2920dfdd974b. Nov 1 10:08:35.695614 kubelet[2747]: E1101 10:08:35.695463 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:35.718430 kubelet[2747]: I1101 10:08:35.716666 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6987d68cb7-g9j4r" podStartSLOduration=2.491647253 podStartE2EDuration="4.716651229s" podCreationTimestamp="2025-11-01 10:08:31 +0000 UTC" firstStartedPulling="2025-11-01 10:08:32.168073923 +0000 UTC m=+18.646308276" lastFinishedPulling="2025-11-01 10:08:34.393077909 +0000 UTC m=+20.871312252" observedRunningTime="2025-11-01 10:08:34.720896708 +0000 UTC m=+21.199131061" watchObservedRunningTime="2025-11-01 10:08:35.716651229 +0000 UTC m=+22.194885582" Nov 1 10:08:35.723011 containerd[1601]: time="2025-11-01T10:08:35.722961372Z" level=info msg="StartContainer for \"582268defad078a02e67e7603b9c0598bb6ea0e1dbe38b101a3f2920dfdd974b\" returns successfully" Nov 1 10:08:35.737813 systemd[1]: cri-containerd-582268defad078a02e67e7603b9c0598bb6ea0e1dbe38b101a3f2920dfdd974b.scope: Deactivated successfully. 
Nov 1 10:08:35.740921 containerd[1601]: time="2025-11-01T10:08:35.740769891Z" level=info msg="received exit event container_id:\"582268defad078a02e67e7603b9c0598bb6ea0e1dbe38b101a3f2920dfdd974b\" id:\"582268defad078a02e67e7603b9c0598bb6ea0e1dbe38b101a3f2920dfdd974b\" pid:3456 exited_at:{seconds:1761991715 nanos:739827008}" Nov 1 10:08:35.764481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-582268defad078a02e67e7603b9c0598bb6ea0e1dbe38b101a3f2920dfdd974b-rootfs.mount: Deactivated successfully. Nov 1 10:08:36.698884 kubelet[2747]: E1101 10:08:36.698829 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:36.698884 kubelet[2747]: E1101 10:08:36.698871 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:36.700198 containerd[1601]: time="2025-11-01T10:08:36.700143062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 10:08:37.613911 kubelet[2747]: E1101 10:08:37.613813 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wgv7h" podUID="2d70db43-7a2b-4384-9018-e7385784e621" Nov 1 10:08:38.926880 containerd[1601]: time="2025-11-01T10:08:38.926824605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:38.927767 containerd[1601]: time="2025-11-01T10:08:38.927712831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Nov 1 10:08:38.928987 containerd[1601]: time="2025-11-01T10:08:38.928952831Z" 
level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:38.930893 containerd[1601]: time="2025-11-01T10:08:38.930865300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:38.931413 containerd[1601]: time="2025-11-01T10:08:38.931373108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.231177939s" Nov 1 10:08:38.931413 containerd[1601]: time="2025-11-01T10:08:38.931408915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 10:08:38.937668 containerd[1601]: time="2025-11-01T10:08:38.937627049Z" level=info msg="CreateContainer within sandbox \"397c5b388f95301f06abd86ad904b21f2cb6c7a4a5dee2bf5ee952073fc11a22\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 10:08:38.946168 containerd[1601]: time="2025-11-01T10:08:38.946133521Z" level=info msg="Container c1fcdee3402251515f42bc68fc08ab9ca078a94120de884abb345f805cf579ad: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:08:38.955989 containerd[1601]: time="2025-11-01T10:08:38.955946287Z" level=info msg="CreateContainer within sandbox \"397c5b388f95301f06abd86ad904b21f2cb6c7a4a5dee2bf5ee952073fc11a22\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c1fcdee3402251515f42bc68fc08ab9ca078a94120de884abb345f805cf579ad\"" Nov 1 10:08:38.957028 containerd[1601]: 
time="2025-11-01T10:08:38.956999895Z" level=info msg="StartContainer for \"c1fcdee3402251515f42bc68fc08ab9ca078a94120de884abb345f805cf579ad\"" Nov 1 10:08:38.958338 containerd[1601]: time="2025-11-01T10:08:38.958299137Z" level=info msg="connecting to shim c1fcdee3402251515f42bc68fc08ab9ca078a94120de884abb345f805cf579ad" address="unix:///run/containerd/s/05549f62ea6047eba3ec0b072703ce2c83a0c48b93112bc1d2ed15074712c3a2" protocol=ttrpc version=3 Nov 1 10:08:38.982841 systemd[1]: Started cri-containerd-c1fcdee3402251515f42bc68fc08ab9ca078a94120de884abb345f805cf579ad.scope - libcontainer container c1fcdee3402251515f42bc68fc08ab9ca078a94120de884abb345f805cf579ad. Nov 1 10:08:39.028253 containerd[1601]: time="2025-11-01T10:08:39.028188707Z" level=info msg="StartContainer for \"c1fcdee3402251515f42bc68fc08ab9ca078a94120de884abb345f805cf579ad\" returns successfully" Nov 1 10:08:39.613359 kubelet[2747]: E1101 10:08:39.613290 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wgv7h" podUID="2d70db43-7a2b-4384-9018-e7385784e621" Nov 1 10:08:40.026261 kubelet[2747]: E1101 10:08:40.026071 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:40.223484 systemd[1]: cri-containerd-c1fcdee3402251515f42bc68fc08ab9ca078a94120de884abb345f805cf579ad.scope: Deactivated successfully. Nov 1 10:08:40.223923 systemd[1]: cri-containerd-c1fcdee3402251515f42bc68fc08ab9ca078a94120de884abb345f805cf579ad.scope: Consumed 648ms CPU time, 181.7M memory peak, 3.5M read from disk, 171.3M written to disk. 
Nov 1 10:08:40.262525 containerd[1601]: time="2025-11-01T10:08:40.262457510Z" level=info msg="received exit event container_id:\"c1fcdee3402251515f42bc68fc08ab9ca078a94120de884abb345f805cf579ad\" id:\"c1fcdee3402251515f42bc68fc08ab9ca078a94120de884abb345f805cf579ad\" pid:3521 exited_at:{seconds:1761991720 nanos:223813163}" Nov 1 10:08:40.304604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1fcdee3402251515f42bc68fc08ab9ca078a94120de884abb345f805cf579ad-rootfs.mount: Deactivated successfully. Nov 1 10:08:40.337477 kubelet[2747]: I1101 10:08:40.337440 2747 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 1 10:08:40.531748 systemd[1]: Created slice kubepods-besteffort-pod8cb58978_7fd1_4b27_8e4f_8d93d2102825.slice - libcontainer container kubepods-besteffort-pod8cb58978_7fd1_4b27_8e4f_8d93d2102825.slice. Nov 1 10:08:40.543165 systemd[1]: Created slice kubepods-burstable-pod22e74b9c_2f6c_481f_9e41_938b85854239.slice - libcontainer container kubepods-burstable-pod22e74b9c_2f6c_481f_9e41_938b85854239.slice. Nov 1 10:08:40.551507 systemd[1]: Created slice kubepods-besteffort-pod2dc8436d_b4aa_4090_a31c_cb311723721e.slice - libcontainer container kubepods-besteffort-pod2dc8436d_b4aa_4090_a31c_cb311723721e.slice. Nov 1 10:08:40.559257 systemd[1]: Created slice kubepods-burstable-pod644d8165_5889_4fa8_a643_99fd8cf0c4f1.slice - libcontainer container kubepods-burstable-pod644d8165_5889_4fa8_a643_99fd8cf0c4f1.slice. Nov 1 10:08:40.567319 systemd[1]: Created slice kubepods-besteffort-pod5d708427_4ed7_49ae_b59a_606507c6e8d8.slice - libcontainer container kubepods-besteffort-pod5d708427_4ed7_49ae_b59a_606507c6e8d8.slice. Nov 1 10:08:40.575178 systemd[1]: Created slice kubepods-besteffort-pod85c17161_0d10_41f6_9c26_623222730002.slice - libcontainer container kubepods-besteffort-pod85c17161_0d10_41f6_9c26_623222730002.slice. 
Nov 1 10:08:40.581756 systemd[1]: Created slice kubepods-besteffort-pod157d34c2_941f_430c_9ff6_c3d7ebcb5c55.slice - libcontainer container kubepods-besteffort-pod157d34c2_941f_430c_9ff6_c3d7ebcb5c55.slice. Nov 1 10:08:40.587035 kubelet[2747]: I1101 10:08:40.586990 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/157d34c2-941f-430c-9ff6-c3d7ebcb5c55-goldmane-key-pair\") pod \"goldmane-7c778bb748-rw5k5\" (UID: \"157d34c2-941f-430c-9ff6-c3d7ebcb5c55\") " pod="calico-system/goldmane-7c778bb748-rw5k5" Nov 1 10:08:40.587035 kubelet[2747]: I1101 10:08:40.587037 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5d708427-4ed7-49ae-b59a-606507c6e8d8-calico-apiserver-certs\") pod \"calico-apiserver-988dffdbc-fdbhp\" (UID: \"5d708427-4ed7-49ae-b59a-606507c6e8d8\") " pod="calico-apiserver/calico-apiserver-988dffdbc-fdbhp" Nov 1 10:08:40.587267 kubelet[2747]: I1101 10:08:40.587058 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85c17161-0d10-41f6-9c26-623222730002-whisker-backend-key-pair\") pod \"whisker-557fcbf88c-vqz4j\" (UID: \"85c17161-0d10-41f6-9c26-623222730002\") " pod="calico-system/whisker-557fcbf88c-vqz4j" Nov 1 10:08:40.587267 kubelet[2747]: I1101 10:08:40.587077 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx8r7\" (UniqueName: \"kubernetes.io/projected/22e74b9c-2f6c-481f-9e41-938b85854239-kube-api-access-vx8r7\") pod \"coredns-66bc5c9577-hgd94\" (UID: \"22e74b9c-2f6c-481f-9e41-938b85854239\") " pod="kube-system/coredns-66bc5c9577-hgd94" Nov 1 10:08:40.587267 kubelet[2747]: I1101 10:08:40.587139 2747 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85c17161-0d10-41f6-9c26-623222730002-whisker-ca-bundle\") pod \"whisker-557fcbf88c-vqz4j\" (UID: \"85c17161-0d10-41f6-9c26-623222730002\") " pod="calico-system/whisker-557fcbf88c-vqz4j" Nov 1 10:08:40.587267 kubelet[2747]: I1101 10:08:40.587181 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbfjz\" (UniqueName: \"kubernetes.io/projected/85c17161-0d10-41f6-9c26-623222730002-kube-api-access-lbfjz\") pod \"whisker-557fcbf88c-vqz4j\" (UID: \"85c17161-0d10-41f6-9c26-623222730002\") " pod="calico-system/whisker-557fcbf88c-vqz4j" Nov 1 10:08:40.587267 kubelet[2747]: I1101 10:08:40.587255 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7fst\" (UniqueName: \"kubernetes.io/projected/8cb58978-7fd1-4b27-8e4f-8d93d2102825-kube-api-access-x7fst\") pod \"calico-kube-controllers-7f8c559b4d-57dz9\" (UID: \"8cb58978-7fd1-4b27-8e4f-8d93d2102825\") " pod="calico-system/calico-kube-controllers-7f8c559b4d-57dz9" Nov 1 10:08:40.587501 kubelet[2747]: I1101 10:08:40.587285 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7d54\" (UniqueName: \"kubernetes.io/projected/5d708427-4ed7-49ae-b59a-606507c6e8d8-kube-api-access-s7d54\") pod \"calico-apiserver-988dffdbc-fdbhp\" (UID: \"5d708427-4ed7-49ae-b59a-606507c6e8d8\") " pod="calico-apiserver/calico-apiserver-988dffdbc-fdbhp" Nov 1 10:08:40.587501 kubelet[2747]: I1101 10:08:40.587304 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkm7g\" (UniqueName: \"kubernetes.io/projected/644d8165-5889-4fa8-a643-99fd8cf0c4f1-kube-api-access-gkm7g\") pod \"coredns-66bc5c9577-sghzp\" (UID: \"644d8165-5889-4fa8-a643-99fd8cf0c4f1\") " 
pod="kube-system/coredns-66bc5c9577-sghzp" Nov 1 10:08:40.587501 kubelet[2747]: I1101 10:08:40.587324 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v98l\" (UniqueName: \"kubernetes.io/projected/2dc8436d-b4aa-4090-a31c-cb311723721e-kube-api-access-7v98l\") pod \"calico-apiserver-988dffdbc-59586\" (UID: \"2dc8436d-b4aa-4090-a31c-cb311723721e\") " pod="calico-apiserver/calico-apiserver-988dffdbc-59586" Nov 1 10:08:40.587501 kubelet[2747]: I1101 10:08:40.587339 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/157d34c2-941f-430c-9ff6-c3d7ebcb5c55-config\") pod \"goldmane-7c778bb748-rw5k5\" (UID: \"157d34c2-941f-430c-9ff6-c3d7ebcb5c55\") " pod="calico-system/goldmane-7c778bb748-rw5k5" Nov 1 10:08:40.587501 kubelet[2747]: I1101 10:08:40.587368 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2dc8436d-b4aa-4090-a31c-cb311723721e-calico-apiserver-certs\") pod \"calico-apiserver-988dffdbc-59586\" (UID: \"2dc8436d-b4aa-4090-a31c-cb311723721e\") " pod="calico-apiserver/calico-apiserver-988dffdbc-59586" Nov 1 10:08:40.587664 kubelet[2747]: I1101 10:08:40.587384 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/644d8165-5889-4fa8-a643-99fd8cf0c4f1-config-volume\") pod \"coredns-66bc5c9577-sghzp\" (UID: \"644d8165-5889-4fa8-a643-99fd8cf0c4f1\") " pod="kube-system/coredns-66bc5c9577-sghzp" Nov 1 10:08:40.587664 kubelet[2747]: I1101 10:08:40.587432 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22e74b9c-2f6c-481f-9e41-938b85854239-config-volume\") pod \"coredns-66bc5c9577-hgd94\" (UID: 
\"22e74b9c-2f6c-481f-9e41-938b85854239\") " pod="kube-system/coredns-66bc5c9577-hgd94" Nov 1 10:08:40.587664 kubelet[2747]: I1101 10:08:40.587454 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2klpw\" (UniqueName: \"kubernetes.io/projected/157d34c2-941f-430c-9ff6-c3d7ebcb5c55-kube-api-access-2klpw\") pod \"goldmane-7c778bb748-rw5k5\" (UID: \"157d34c2-941f-430c-9ff6-c3d7ebcb5c55\") " pod="calico-system/goldmane-7c778bb748-rw5k5" Nov 1 10:08:40.587664 kubelet[2747]: I1101 10:08:40.587473 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cb58978-7fd1-4b27-8e4f-8d93d2102825-tigera-ca-bundle\") pod \"calico-kube-controllers-7f8c559b4d-57dz9\" (UID: \"8cb58978-7fd1-4b27-8e4f-8d93d2102825\") " pod="calico-system/calico-kube-controllers-7f8c559b4d-57dz9" Nov 1 10:08:40.587664 kubelet[2747]: I1101 10:08:40.587501 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/157d34c2-941f-430c-9ff6-c3d7ebcb5c55-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-rw5k5\" (UID: \"157d34c2-941f-430c-9ff6-c3d7ebcb5c55\") " pod="calico-system/goldmane-7c778bb748-rw5k5" Nov 1 10:08:40.844914 containerd[1601]: time="2025-11-01T10:08:40.844847647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f8c559b4d-57dz9,Uid:8cb58978-7fd1-4b27-8e4f-8d93d2102825,Namespace:calico-system,Attempt:0,}" Nov 1 10:08:40.849855 kubelet[2747]: E1101 10:08:40.849822 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:40.850519 containerd[1601]: time="2025-11-01T10:08:40.850463166Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-hgd94,Uid:22e74b9c-2f6c-481f-9e41-938b85854239,Namespace:kube-system,Attempt:0,}" Nov 1 10:08:40.857008 containerd[1601]: time="2025-11-01T10:08:40.856965536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-988dffdbc-59586,Uid:2dc8436d-b4aa-4090-a31c-cb311723721e,Namespace:calico-apiserver,Attempt:0,}" Nov 1 10:08:40.865724 kubelet[2747]: E1101 10:08:40.865621 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:40.872762 containerd[1601]: time="2025-11-01T10:08:40.871902811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sghzp,Uid:644d8165-5889-4fa8-a643-99fd8cf0c4f1,Namespace:kube-system,Attempt:0,}" Nov 1 10:08:40.878572 containerd[1601]: time="2025-11-01T10:08:40.878515360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-988dffdbc-fdbhp,Uid:5d708427-4ed7-49ae-b59a-606507c6e8d8,Namespace:calico-apiserver,Attempt:0,}" Nov 1 10:08:40.881351 containerd[1601]: time="2025-11-01T10:08:40.880366641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-557fcbf88c-vqz4j,Uid:85c17161-0d10-41f6-9c26-623222730002,Namespace:calico-system,Attempt:0,}" Nov 1 10:08:40.890147 containerd[1601]: time="2025-11-01T10:08:40.890106046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-rw5k5,Uid:157d34c2-941f-430c-9ff6-c3d7ebcb5c55,Namespace:calico-system,Attempt:0,}" Nov 1 10:08:41.005851 containerd[1601]: time="2025-11-01T10:08:41.005057451Z" level=error msg="Failed to destroy network for sandbox \"5a47256dbd0862a89cb58892433d3a4150deb11d306010821c9274bdef6ebae4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.005851 
containerd[1601]: time="2025-11-01T10:08:41.005277105Z" level=error msg="Failed to destroy network for sandbox \"cac6c7d5d27015437940532d1c825710f18615ec6e7497fa12ba7a93cfc74b47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.010964 containerd[1601]: time="2025-11-01T10:08:41.010854509Z" level=error msg="Failed to destroy network for sandbox \"6c81ab5bad6d624c96ffa519c8d7faa64b7dbfb7aa0b8b246064caf90ec19835\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.011447 containerd[1601]: time="2025-11-01T10:08:41.011388866Z" level=error msg="Failed to destroy network for sandbox \"51df6209a4d8815082d1f30d68bc9798e27f86219db1091a131265d677a440e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.013033 containerd[1601]: time="2025-11-01T10:08:41.013002437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-988dffdbc-fdbhp,Uid:5d708427-4ed7-49ae-b59a-606507c6e8d8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a47256dbd0862a89cb58892433d3a4150deb11d306010821c9274bdef6ebae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.016257 containerd[1601]: time="2025-11-01T10:08:41.016151012Z" level=error msg="Failed to destroy network for sandbox \"a8669c3a1e26284b631256ae0e73c488b2dfff99fa5b02751e6cc25272de6885\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.020442 containerd[1601]: time="2025-11-01T10:08:41.020380013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-557fcbf88c-vqz4j,Uid:85c17161-0d10-41f6-9c26-623222730002,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cac6c7d5d27015437940532d1c825710f18615ec6e7497fa12ba7a93cfc74b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.021298 containerd[1601]: time="2025-11-01T10:08:41.021253349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-988dffdbc-59586,Uid:2dc8436d-b4aa-4090-a31c-cb311723721e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c81ab5bad6d624c96ffa519c8d7faa64b7dbfb7aa0b8b246064caf90ec19835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.021835 containerd[1601]: time="2025-11-01T10:08:41.021803577Z" level=error msg="Failed to destroy network for sandbox \"1a5edaed2e8bd70e2a7993d89e744fe89abfd4317cd684419667e83515652554\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.023504 containerd[1601]: time="2025-11-01T10:08:41.023463746Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f8c559b4d-57dz9,Uid:8cb58978-7fd1-4b27-8e4f-8d93d2102825,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"51df6209a4d8815082d1f30d68bc9798e27f86219db1091a131265d677a440e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.024700 containerd[1601]: time="2025-11-01T10:08:41.024641145Z" level=error msg="Failed to destroy network for sandbox \"e778aec65c39c1efbd239bc859f085488b22add7c7c495279ee8beebabce0b25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.025625 containerd[1601]: time="2025-11-01T10:08:41.025583141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hgd94,Uid:22e74b9c-2f6c-481f-9e41-938b85854239,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8669c3a1e26284b631256ae0e73c488b2dfff99fa5b02751e6cc25272de6885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.026742 containerd[1601]: time="2025-11-01T10:08:41.026600098Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-rw5k5,Uid:157d34c2-941f-430c-9ff6-c3d7ebcb5c55,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5edaed2e8bd70e2a7993d89e744fe89abfd4317cd684419667e83515652554\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.026830 kubelet[2747]: E1101 10:08:41.026750 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"6c81ab5bad6d624c96ffa519c8d7faa64b7dbfb7aa0b8b246064caf90ec19835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.026830 kubelet[2747]: E1101 10:08:41.026747 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a47256dbd0862a89cb58892433d3a4150deb11d306010821c9274bdef6ebae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.026830 kubelet[2747]: E1101 10:08:41.026817 2747 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c81ab5bad6d624c96ffa519c8d7faa64b7dbfb7aa0b8b246064caf90ec19835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-988dffdbc-59586" Nov 1 10:08:41.026941 kubelet[2747]: E1101 10:08:41.026831 2747 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a47256dbd0862a89cb58892433d3a4150deb11d306010821c9274bdef6ebae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-988dffdbc-fdbhp" Nov 1 10:08:41.026941 kubelet[2747]: E1101 10:08:41.026856 2747 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a47256dbd0862a89cb58892433d3a4150deb11d306010821c9274bdef6ebae4\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-988dffdbc-fdbhp" Nov 1 10:08:41.026941 kubelet[2747]: E1101 10:08:41.026865 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cac6c7d5d27015437940532d1c825710f18615ec6e7497fa12ba7a93cfc74b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.026941 kubelet[2747]: E1101 10:08:41.026884 2747 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cac6c7d5d27015437940532d1c825710f18615ec6e7497fa12ba7a93cfc74b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-557fcbf88c-vqz4j" Nov 1 10:08:41.027039 kubelet[2747]: E1101 10:08:41.026901 2747 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cac6c7d5d27015437940532d1c825710f18615ec6e7497fa12ba7a93cfc74b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-557fcbf88c-vqz4j" Nov 1 10:08:41.027039 kubelet[2747]: E1101 10:08:41.026920 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-988dffdbc-fdbhp_calico-apiserver(5d708427-4ed7-49ae-b59a-606507c6e8d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-988dffdbc-fdbhp_calico-apiserver(5d708427-4ed7-49ae-b59a-606507c6e8d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a47256dbd0862a89cb58892433d3a4150deb11d306010821c9274bdef6ebae4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-988dffdbc-fdbhp" podUID="5d708427-4ed7-49ae-b59a-606507c6e8d8" Nov 1 10:08:41.027039 kubelet[2747]: E1101 10:08:41.026958 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-557fcbf88c-vqz4j_calico-system(85c17161-0d10-41f6-9c26-623222730002)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-557fcbf88c-vqz4j_calico-system(85c17161-0d10-41f6-9c26-623222730002)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cac6c7d5d27015437940532d1c825710f18615ec6e7497fa12ba7a93cfc74b47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-557fcbf88c-vqz4j" podUID="85c17161-0d10-41f6-9c26-623222730002" Nov 1 10:08:41.027174 kubelet[2747]: E1101 10:08:41.026837 2747 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c81ab5bad6d624c96ffa519c8d7faa64b7dbfb7aa0b8b246064caf90ec19835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-988dffdbc-59586" Nov 1 10:08:41.027174 kubelet[2747]: E1101 10:08:41.027006 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-988dffdbc-59586_calico-apiserver(2dc8436d-b4aa-4090-a31c-cb311723721e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-988dffdbc-59586_calico-apiserver(2dc8436d-b4aa-4090-a31c-cb311723721e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c81ab5bad6d624c96ffa519c8d7faa64b7dbfb7aa0b8b246064caf90ec19835\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-988dffdbc-59586" podUID="2dc8436d-b4aa-4090-a31c-cb311723721e" Nov 1 10:08:41.027174 kubelet[2747]: E1101 10:08:41.027034 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51df6209a4d8815082d1f30d68bc9798e27f86219db1091a131265d677a440e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.027285 kubelet[2747]: E1101 10:08:41.027050 2747 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51df6209a4d8815082d1f30d68bc9798e27f86219db1091a131265d677a440e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f8c559b4d-57dz9" Nov 1 10:08:41.027285 kubelet[2747]: E1101 10:08:41.027063 2747 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51df6209a4d8815082d1f30d68bc9798e27f86219db1091a131265d677a440e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f8c559b4d-57dz9" Nov 1 10:08:41.027285 kubelet[2747]: E1101 10:08:41.027088 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f8c559b4d-57dz9_calico-system(8cb58978-7fd1-4b27-8e4f-8d93d2102825)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f8c559b4d-57dz9_calico-system(8cb58978-7fd1-4b27-8e4f-8d93d2102825)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51df6209a4d8815082d1f30d68bc9798e27f86219db1091a131265d677a440e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f8c559b4d-57dz9" podUID="8cb58978-7fd1-4b27-8e4f-8d93d2102825" Nov 1 10:08:41.027396 kubelet[2747]: E1101 10:08:41.027113 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8669c3a1e26284b631256ae0e73c488b2dfff99fa5b02751e6cc25272de6885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.027396 kubelet[2747]: E1101 10:08:41.027126 2747 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8669c3a1e26284b631256ae0e73c488b2dfff99fa5b02751e6cc25272de6885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-hgd94" Nov 1 10:08:41.027396 kubelet[2747]: E1101 10:08:41.027139 2747 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8669c3a1e26284b631256ae0e73c488b2dfff99fa5b02751e6cc25272de6885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-hgd94" Nov 1 10:08:41.027476 kubelet[2747]: E1101 10:08:41.027162 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-hgd94_kube-system(22e74b9c-2f6c-481f-9e41-938b85854239)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-hgd94_kube-system(22e74b9c-2f6c-481f-9e41-938b85854239)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8669c3a1e26284b631256ae0e73c488b2dfff99fa5b02751e6cc25272de6885\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-hgd94" podUID="22e74b9c-2f6c-481f-9e41-938b85854239" Nov 1 10:08:41.027864 kubelet[2747]: E1101 10:08:41.027605 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5edaed2e8bd70e2a7993d89e744fe89abfd4317cd684419667e83515652554\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.027864 kubelet[2747]: E1101 10:08:41.027680 2747 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5edaed2e8bd70e2a7993d89e744fe89abfd4317cd684419667e83515652554\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-rw5k5" Nov 1 10:08:41.027864 kubelet[2747]: E1101 10:08:41.027724 2747 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5edaed2e8bd70e2a7993d89e744fe89abfd4317cd684419667e83515652554\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-rw5k5" Nov 1 10:08:41.027979 kubelet[2747]: E1101 10:08:41.027789 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-rw5k5_calico-system(157d34c2-941f-430c-9ff6-c3d7ebcb5c55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-rw5k5_calico-system(157d34c2-941f-430c-9ff6-c3d7ebcb5c55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a5edaed2e8bd70e2a7993d89e744fe89abfd4317cd684419667e83515652554\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-rw5k5" podUID="157d34c2-941f-430c-9ff6-c3d7ebcb5c55" Nov 1 10:08:41.029958 containerd[1601]: time="2025-11-01T10:08:41.029908684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sghzp,Uid:644d8165-5889-4fa8-a643-99fd8cf0c4f1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e778aec65c39c1efbd239bc859f085488b22add7c7c495279ee8beebabce0b25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.030076 kubelet[2747]: E1101 10:08:41.030046 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e778aec65c39c1efbd239bc859f085488b22add7c7c495279ee8beebabce0b25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.030130 kubelet[2747]: E1101 10:08:41.030078 2747 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e778aec65c39c1efbd239bc859f085488b22add7c7c495279ee8beebabce0b25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sghzp" Nov 1 10:08:41.030130 kubelet[2747]: E1101 10:08:41.030094 2747 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e778aec65c39c1efbd239bc859f085488b22add7c7c495279ee8beebabce0b25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sghzp" Nov 1 10:08:41.030200 kubelet[2747]: E1101 10:08:41.030123 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-sghzp_kube-system(644d8165-5889-4fa8-a643-99fd8cf0c4f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-sghzp_kube-system(644d8165-5889-4fa8-a643-99fd8cf0c4f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e778aec65c39c1efbd239bc859f085488b22add7c7c495279ee8beebabce0b25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-sghzp" podUID="644d8165-5889-4fa8-a643-99fd8cf0c4f1" Nov 1 10:08:41.032413 kubelet[2747]: E1101 10:08:41.032389 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:41.033115 containerd[1601]: time="2025-11-01T10:08:41.033076375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 10:08:41.619501 systemd[1]: Created slice kubepods-besteffort-pod2d70db43_7a2b_4384_9018_e7385784e621.slice - libcontainer container kubepods-besteffort-pod2d70db43_7a2b_4384_9018_e7385784e621.slice. Nov 1 10:08:41.736399 containerd[1601]: time="2025-11-01T10:08:41.736326634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wgv7h,Uid:2d70db43-7a2b-4384-9018-e7385784e621,Namespace:calico-system,Attempt:0,}" Nov 1 10:08:41.830636 containerd[1601]: time="2025-11-01T10:08:41.830572080Z" level=error msg="Failed to destroy network for sandbox \"69b0d39ecb4dc7fc9c4422267307927ed984c2c966bb7c5ab53ae3e4f5c3dbf7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.833025 systemd[1]: run-netns-cni\x2d735b852c\x2def65\x2d137d\x2d38d8\x2d82e89846416f.mount: Deactivated successfully. 
Nov 1 10:08:41.834149 containerd[1601]: time="2025-11-01T10:08:41.833644933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wgv7h,Uid:2d70db43-7a2b-4384-9018-e7385784e621,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"69b0d39ecb4dc7fc9c4422267307927ed984c2c966bb7c5ab53ae3e4f5c3dbf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.834278 kubelet[2747]: E1101 10:08:41.833900 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69b0d39ecb4dc7fc9c4422267307927ed984c2c966bb7c5ab53ae3e4f5c3dbf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:08:41.834278 kubelet[2747]: E1101 10:08:41.833957 2747 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69b0d39ecb4dc7fc9c4422267307927ed984c2c966bb7c5ab53ae3e4f5c3dbf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wgv7h" Nov 1 10:08:41.834278 kubelet[2747]: E1101 10:08:41.833976 2747 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69b0d39ecb4dc7fc9c4422267307927ed984c2c966bb7c5ab53ae3e4f5c3dbf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wgv7h" Nov 1 
10:08:41.834383 kubelet[2747]: E1101 10:08:41.834026 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wgv7h_calico-system(2d70db43-7a2b-4384-9018-e7385784e621)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wgv7h_calico-system(2d70db43-7a2b-4384-9018-e7385784e621)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69b0d39ecb4dc7fc9c4422267307927ed984c2c966bb7c5ab53ae3e4f5c3dbf7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wgv7h" podUID="2d70db43-7a2b-4384-9018-e7385784e621" Nov 1 10:08:49.138220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1194723412.mount: Deactivated successfully. Nov 1 10:08:49.336525 containerd[1601]: time="2025-11-01T10:08:49.336435221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:49.352383 containerd[1601]: time="2025-11-01T10:08:49.337301861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Nov 1 10:08:49.352383 containerd[1601]: time="2025-11-01T10:08:49.338570228Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:49.352505 containerd[1601]: time="2025-11-01T10:08:49.341213620Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" 
in 8.308103541s" Nov 1 10:08:49.352505 containerd[1601]: time="2025-11-01T10:08:49.352482718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 10:08:49.352931 containerd[1601]: time="2025-11-01T10:08:49.352858765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:08:49.372863 containerd[1601]: time="2025-11-01T10:08:49.372816227Z" level=info msg="CreateContainer within sandbox \"397c5b388f95301f06abd86ad904b21f2cb6c7a4a5dee2bf5ee952073fc11a22\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 10:08:49.387305 containerd[1601]: time="2025-11-01T10:08:49.387251520Z" level=info msg="Container e855d2b9f08d3d9ceb3dd90e968dfc54afecea10b8c901afa1222268a4268817: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:08:49.388867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1513022326.mount: Deactivated successfully. 
Nov 1 10:08:49.396744 containerd[1601]: time="2025-11-01T10:08:49.396680577Z" level=info msg="CreateContainer within sandbox \"397c5b388f95301f06abd86ad904b21f2cb6c7a4a5dee2bf5ee952073fc11a22\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e855d2b9f08d3d9ceb3dd90e968dfc54afecea10b8c901afa1222268a4268817\"" Nov 1 10:08:49.397231 containerd[1601]: time="2025-11-01T10:08:49.397206686Z" level=info msg="StartContainer for \"e855d2b9f08d3d9ceb3dd90e968dfc54afecea10b8c901afa1222268a4268817\"" Nov 1 10:08:49.398717 containerd[1601]: time="2025-11-01T10:08:49.398669359Z" level=info msg="connecting to shim e855d2b9f08d3d9ceb3dd90e968dfc54afecea10b8c901afa1222268a4268817" address="unix:///run/containerd/s/05549f62ea6047eba3ec0b072703ce2c83a0c48b93112bc1d2ed15074712c3a2" protocol=ttrpc version=3 Nov 1 10:08:49.425840 systemd[1]: Started cri-containerd-e855d2b9f08d3d9ceb3dd90e968dfc54afecea10b8c901afa1222268a4268817.scope - libcontainer container e855d2b9f08d3d9ceb3dd90e968dfc54afecea10b8c901afa1222268a4268817. Nov 1 10:08:49.484308 containerd[1601]: time="2025-11-01T10:08:49.484268255Z" level=info msg="StartContainer for \"e855d2b9f08d3d9ceb3dd90e968dfc54afecea10b8c901afa1222268a4268817\" returns successfully" Nov 1 10:08:49.556036 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 10:08:49.557052 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 1 10:08:49.743891 kubelet[2747]: I1101 10:08:49.743756 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85c17161-0d10-41f6-9c26-623222730002-whisker-ca-bundle\") pod \"85c17161-0d10-41f6-9c26-623222730002\" (UID: \"85c17161-0d10-41f6-9c26-623222730002\") " Nov 1 10:08:49.743891 kubelet[2747]: I1101 10:08:49.743814 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85c17161-0d10-41f6-9c26-623222730002-whisker-backend-key-pair\") pod \"85c17161-0d10-41f6-9c26-623222730002\" (UID: \"85c17161-0d10-41f6-9c26-623222730002\") " Nov 1 10:08:49.743891 kubelet[2747]: I1101 10:08:49.743834 2747 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbfjz\" (UniqueName: \"kubernetes.io/projected/85c17161-0d10-41f6-9c26-623222730002-kube-api-access-lbfjz\") pod \"85c17161-0d10-41f6-9c26-623222730002\" (UID: \"85c17161-0d10-41f6-9c26-623222730002\") " Nov 1 10:08:49.744894 kubelet[2747]: I1101 10:08:49.744868 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85c17161-0d10-41f6-9c26-623222730002-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "85c17161-0d10-41f6-9c26-623222730002" (UID: "85c17161-0d10-41f6-9c26-623222730002"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 10:08:49.748887 kubelet[2747]: I1101 10:08:49.748667 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85c17161-0d10-41f6-9c26-623222730002-kube-api-access-lbfjz" (OuterVolumeSpecName: "kube-api-access-lbfjz") pod "85c17161-0d10-41f6-9c26-623222730002" (UID: "85c17161-0d10-41f6-9c26-623222730002"). InnerVolumeSpecName "kube-api-access-lbfjz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 10:08:49.748887 kubelet[2747]: I1101 10:08:49.748819 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85c17161-0d10-41f6-9c26-623222730002-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "85c17161-0d10-41f6-9c26-623222730002" (UID: "85c17161-0d10-41f6-9c26-623222730002"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 10:08:49.845047 kubelet[2747]: I1101 10:08:49.844978 2747 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85c17161-0d10-41f6-9c26-623222730002-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 1 10:08:49.845047 kubelet[2747]: I1101 10:08:49.845009 2747 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85c17161-0d10-41f6-9c26-623222730002-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 1 10:08:49.845047 kubelet[2747]: I1101 10:08:49.845028 2747 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lbfjz\" (UniqueName: \"kubernetes.io/projected/85c17161-0d10-41f6-9c26-623222730002-kube-api-access-lbfjz\") on node \"localhost\" DevicePath \"\"" Nov 1 10:08:50.053674 kubelet[2747]: E1101 10:08:50.053520 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:50.060555 systemd[1]: Removed slice kubepods-besteffort-pod85c17161_0d10_41f6_9c26_623222730002.slice - libcontainer container kubepods-besteffort-pod85c17161_0d10_41f6_9c26_623222730002.slice. 
Nov 1 10:08:50.068473 kubelet[2747]: I1101 10:08:50.068402 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lqjg4" podStartSLOduration=1.9769100480000001 podStartE2EDuration="19.06838148s" podCreationTimestamp="2025-11-01 10:08:31 +0000 UTC" firstStartedPulling="2025-11-01 10:08:32.2620714 +0000 UTC m=+18.740305753" lastFinishedPulling="2025-11-01 10:08:49.353542832 +0000 UTC m=+35.831777185" observedRunningTime="2025-11-01 10:08:50.068093357 +0000 UTC m=+36.546327720" watchObservedRunningTime="2025-11-01 10:08:50.06838148 +0000 UTC m=+36.546615833" Nov 1 10:08:50.122598 systemd[1]: Created slice kubepods-besteffort-pod925c93c8_2f46_4e42_abc0_187db9f33983.slice - libcontainer container kubepods-besteffort-pod925c93c8_2f46_4e42_abc0_187db9f33983.slice. Nov 1 10:08:50.139299 systemd[1]: var-lib-kubelet-pods-85c17161\x2d0d10\x2d41f6\x2d9c26\x2d623222730002-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlbfjz.mount: Deactivated successfully. Nov 1 10:08:50.139436 systemd[1]: var-lib-kubelet-pods-85c17161\x2d0d10\x2d41f6\x2d9c26\x2d623222730002-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 1 10:08:50.146488 kubelet[2747]: I1101 10:08:50.146430 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmlm7\" (UniqueName: \"kubernetes.io/projected/925c93c8-2f46-4e42-abc0-187db9f33983-kube-api-access-cmlm7\") pod \"whisker-8988d9647-8v76c\" (UID: \"925c93c8-2f46-4e42-abc0-187db9f33983\") " pod="calico-system/whisker-8988d9647-8v76c" Nov 1 10:08:50.146488 kubelet[2747]: I1101 10:08:50.146474 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/925c93c8-2f46-4e42-abc0-187db9f33983-whisker-backend-key-pair\") pod \"whisker-8988d9647-8v76c\" (UID: \"925c93c8-2f46-4e42-abc0-187db9f33983\") " pod="calico-system/whisker-8988d9647-8v76c" Nov 1 10:08:50.146585 kubelet[2747]: I1101 10:08:50.146531 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/925c93c8-2f46-4e42-abc0-187db9f33983-whisker-ca-bundle\") pod \"whisker-8988d9647-8v76c\" (UID: \"925c93c8-2f46-4e42-abc0-187db9f33983\") " pod="calico-system/whisker-8988d9647-8v76c" Nov 1 10:08:50.430823 containerd[1601]: time="2025-11-01T10:08:50.430764578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8988d9647-8v76c,Uid:925c93c8-2f46-4e42-abc0-187db9f33983,Namespace:calico-system,Attempt:0,}" Nov 1 10:08:50.583823 systemd-networkd[1502]: cali24349538338: Link UP Nov 1 10:08:50.584062 systemd-networkd[1502]: cali24349538338: Gained carrier Nov 1 10:08:50.603031 containerd[1601]: 2025-11-01 10:08:50.454 [INFO][3905] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:08:50.603031 containerd[1601]: 2025-11-01 10:08:50.473 [INFO][3905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--8988d9647--8v76c-eth0 
whisker-8988d9647- calico-system 925c93c8-2f46-4e42-abc0-187db9f33983 905 0 2025-11-01 10:08:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8988d9647 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8988d9647-8v76c eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali24349538338 [] [] }} ContainerID="69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" Namespace="calico-system" Pod="whisker-8988d9647-8v76c" WorkloadEndpoint="localhost-k8s-whisker--8988d9647--8v76c-" Nov 1 10:08:50.603031 containerd[1601]: 2025-11-01 10:08:50.474 [INFO][3905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" Namespace="calico-system" Pod="whisker-8988d9647-8v76c" WorkloadEndpoint="localhost-k8s-whisker--8988d9647--8v76c-eth0" Nov 1 10:08:50.603031 containerd[1601]: 2025-11-01 10:08:50.538 [INFO][3919] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" HandleID="k8s-pod-network.69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" Workload="localhost-k8s-whisker--8988d9647--8v76c-eth0" Nov 1 10:08:50.603357 containerd[1601]: 2025-11-01 10:08:50.539 [INFO][3919] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" HandleID="k8s-pod-network.69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" Workload="localhost-k8s-whisker--8988d9647--8v76c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000b92f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8988d9647-8v76c", "timestamp":"2025-11-01 10:08:50.53862368 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:08:50.603357 containerd[1601]: 2025-11-01 10:08:50.539 [INFO][3919] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:08:50.603357 containerd[1601]: 2025-11-01 10:08:50.539 [INFO][3919] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 10:08:50.603357 containerd[1601]: 2025-11-01 10:08:50.539 [INFO][3919] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:08:50.603357 containerd[1601]: 2025-11-01 10:08:50.547 [INFO][3919] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" host="localhost" Nov 1 10:08:50.603357 containerd[1601]: 2025-11-01 10:08:50.552 [INFO][3919] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:08:50.603357 containerd[1601]: 2025-11-01 10:08:50.556 [INFO][3919] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:08:50.603357 containerd[1601]: 2025-11-01 10:08:50.558 [INFO][3919] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:08:50.603357 containerd[1601]: 2025-11-01 10:08:50.560 [INFO][3919] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:08:50.603357 containerd[1601]: 2025-11-01 10:08:50.560 [INFO][3919] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" host="localhost" Nov 1 10:08:50.603588 containerd[1601]: 2025-11-01 10:08:50.562 [INFO][3919] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40 Nov 1 10:08:50.603588 containerd[1601]: 
2025-11-01 10:08:50.566 [INFO][3919] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" host="localhost" Nov 1 10:08:50.603588 containerd[1601]: 2025-11-01 10:08:50.571 [INFO][3919] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" host="localhost" Nov 1 10:08:50.603588 containerd[1601]: 2025-11-01 10:08:50.571 [INFO][3919] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" host="localhost" Nov 1 10:08:50.603588 containerd[1601]: 2025-11-01 10:08:50.571 [INFO][3919] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:08:50.603588 containerd[1601]: 2025-11-01 10:08:50.571 [INFO][3919] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" HandleID="k8s-pod-network.69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" Workload="localhost-k8s-whisker--8988d9647--8v76c-eth0" Nov 1 10:08:50.603797 containerd[1601]: 2025-11-01 10:08:50.575 [INFO][3905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" Namespace="calico-system" Pod="whisker-8988d9647-8v76c" WorkloadEndpoint="localhost-k8s-whisker--8988d9647--8v76c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8988d9647--8v76c-eth0", GenerateName:"whisker-8988d9647-", Namespace:"calico-system", SelfLink:"", UID:"925c93c8-2f46-4e42-abc0-187db9f33983", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 
8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8988d9647", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8988d9647-8v76c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali24349538338", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:08:50.603797 containerd[1601]: 2025-11-01 10:08:50.575 [INFO][3905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" Namespace="calico-system" Pod="whisker-8988d9647-8v76c" WorkloadEndpoint="localhost-k8s-whisker--8988d9647--8v76c-eth0" Nov 1 10:08:50.603911 containerd[1601]: 2025-11-01 10:08:50.575 [INFO][3905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24349538338 ContainerID="69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" Namespace="calico-system" Pod="whisker-8988d9647-8v76c" WorkloadEndpoint="localhost-k8s-whisker--8988d9647--8v76c-eth0" Nov 1 10:08:50.603911 containerd[1601]: 2025-11-01 10:08:50.583 [INFO][3905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" Namespace="calico-system" Pod="whisker-8988d9647-8v76c" 
WorkloadEndpoint="localhost-k8s-whisker--8988d9647--8v76c-eth0" Nov 1 10:08:50.603966 containerd[1601]: 2025-11-01 10:08:50.584 [INFO][3905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" Namespace="calico-system" Pod="whisker-8988d9647-8v76c" WorkloadEndpoint="localhost-k8s-whisker--8988d9647--8v76c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8988d9647--8v76c-eth0", GenerateName:"whisker-8988d9647-", Namespace:"calico-system", SelfLink:"", UID:"925c93c8-2f46-4e42-abc0-187db9f33983", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8988d9647", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40", Pod:"whisker-8988d9647-8v76c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali24349538338", MAC:"a2:eb:77:75:ac:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:08:50.604046 containerd[1601]: 2025-11-01 10:08:50.598 [INFO][3905] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" Namespace="calico-system" Pod="whisker-8988d9647-8v76c" WorkloadEndpoint="localhost-k8s-whisker--8988d9647--8v76c-eth0" Nov 1 10:08:51.080548 kubelet[2747]: I1101 10:08:51.080477 2747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 10:08:51.082913 kubelet[2747]: E1101 10:08:51.081354 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:51.175905 containerd[1601]: time="2025-11-01T10:08:51.175848338Z" level=info msg="connecting to shim 69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40" address="unix:///run/containerd/s/9753466e280b346809d02ada9c9534b613ff16687ec8cac24dba4d17a45c2233" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:08:51.206816 systemd[1]: Started cri-containerd-69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40.scope - libcontainer container 69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40. 
Nov 1 10:08:51.221229 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:08:51.267866 containerd[1601]: time="2025-11-01T10:08:51.267820391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8988d9647-8v76c,Uid:925c93c8-2f46-4e42-abc0-187db9f33983,Namespace:calico-system,Attempt:0,} returns sandbox id \"69460bcf5c17fd520ae64bd6a01d9f8e4471203ac8314e7aab526f13c71c7e40\"" Nov 1 10:08:51.270720 containerd[1601]: time="2025-11-01T10:08:51.270666472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 10:08:51.363942 systemd-networkd[1502]: vxlan.calico: Link UP Nov 1 10:08:51.363954 systemd-networkd[1502]: vxlan.calico: Gained carrier Nov 1 10:08:51.607947 containerd[1601]: time="2025-11-01T10:08:51.607900220Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:08:51.615982 kubelet[2747]: I1101 10:08:51.615883 2747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85c17161-0d10-41f6-9c26-623222730002" path="/var/lib/kubelet/pods/85c17161-0d10-41f6-9c26-623222730002/volumes" Nov 1 10:08:51.640003 containerd[1601]: time="2025-11-01T10:08:51.637716766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 1 10:08:51.640003 containerd[1601]: time="2025-11-01T10:08:51.637791065Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 10:08:51.640003 containerd[1601]: time="2025-11-01T10:08:51.639162735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 10:08:51.640197 kubelet[2747]: E1101 10:08:51.638015 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:08:51.640197 kubelet[2747]: E1101 10:08:51.638059 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:08:51.640197 kubelet[2747]: E1101 10:08:51.638143 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8988d9647-8v76c_calico-system(925c93c8-2f46-4e42-abc0-187db9f33983): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 10:08:51.769602 kubelet[2747]: E1101 10:08:51.769548 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:51.770243 containerd[1601]: time="2025-11-01T10:08:51.770132287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sghzp,Uid:644d8165-5889-4fa8-a643-99fd8cf0c4f1,Namespace:kube-system,Attempt:0,}" Nov 1 10:08:51.870746 systemd[1]: Started sshd@7-10.0.0.91:22-10.0.0.1:58332.service - OpenSSH per-connection server daemon (10.0.0.1:58332). 
Nov 1 10:08:51.875023 systemd-networkd[1502]: calif1d330bedad: Link UP Nov 1 10:08:51.875325 systemd-networkd[1502]: calif1d330bedad: Gained carrier Nov 1 10:08:51.894821 containerd[1601]: 2025-11-01 10:08:51.812 [INFO][4184] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--sghzp-eth0 coredns-66bc5c9577- kube-system 644d8165-5889-4fa8-a643-99fd8cf0c4f1 834 0 2025-11-01 10:08:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-sghzp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif1d330bedad [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" Namespace="kube-system" Pod="coredns-66bc5c9577-sghzp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sghzp-" Nov 1 10:08:51.894821 containerd[1601]: 2025-11-01 10:08:51.813 [INFO][4184] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" Namespace="kube-system" Pod="coredns-66bc5c9577-sghzp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sghzp-eth0" Nov 1 10:08:51.894821 containerd[1601]: 2025-11-01 10:08:51.839 [INFO][4197] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" HandleID="k8s-pod-network.12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" Workload="localhost-k8s-coredns--66bc5c9577--sghzp-eth0" Nov 1 10:08:51.895024 containerd[1601]: 2025-11-01 10:08:51.839 [INFO][4197] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" HandleID="k8s-pod-network.12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" Workload="localhost-k8s-coredns--66bc5c9577--sghzp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019f5e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-sghzp", "timestamp":"2025-11-01 10:08:51.839478625 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:08:51.895024 containerd[1601]: 2025-11-01 10:08:51.839 [INFO][4197] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:08:51.895024 containerd[1601]: 2025-11-01 10:08:51.839 [INFO][4197] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 10:08:51.895024 containerd[1601]: 2025-11-01 10:08:51.839 [INFO][4197] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:08:51.895024 containerd[1601]: 2025-11-01 10:08:51.846 [INFO][4197] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" host="localhost" Nov 1 10:08:51.895024 containerd[1601]: 2025-11-01 10:08:51.849 [INFO][4197] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:08:51.895024 containerd[1601]: 2025-11-01 10:08:51.853 [INFO][4197] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:08:51.895024 containerd[1601]: 2025-11-01 10:08:51.854 [INFO][4197] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:08:51.895024 containerd[1601]: 2025-11-01 10:08:51.856 [INFO][4197] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" 
Nov 1 10:08:51.895024 containerd[1601]: 2025-11-01 10:08:51.856 [INFO][4197] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" host="localhost" Nov 1 10:08:51.895284 containerd[1601]: 2025-11-01 10:08:51.858 [INFO][4197] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2 Nov 1 10:08:51.895284 containerd[1601]: 2025-11-01 10:08:51.861 [INFO][4197] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" host="localhost" Nov 1 10:08:51.895284 containerd[1601]: 2025-11-01 10:08:51.865 [INFO][4197] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" host="localhost" Nov 1 10:08:51.895284 containerd[1601]: 2025-11-01 10:08:51.866 [INFO][4197] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" host="localhost" Nov 1 10:08:51.895284 containerd[1601]: 2025-11-01 10:08:51.866 [INFO][4197] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 10:08:51.895284 containerd[1601]: 2025-11-01 10:08:51.866 [INFO][4197] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" HandleID="k8s-pod-network.12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" Workload="localhost-k8s-coredns--66bc5c9577--sghzp-eth0" Nov 1 10:08:51.895412 containerd[1601]: 2025-11-01 10:08:51.871 [INFO][4184] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" Namespace="kube-system" Pod="coredns-66bc5c9577-sghzp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sghzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--sghzp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"644d8165-5889-4fa8-a643-99fd8cf0c4f1", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-sghzp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1d330bedad", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:08:51.895412 containerd[1601]: 2025-11-01 10:08:51.872 [INFO][4184] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" Namespace="kube-system" Pod="coredns-66bc5c9577-sghzp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sghzp-eth0" Nov 1 10:08:51.895412 containerd[1601]: 2025-11-01 10:08:51.872 [INFO][4184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1d330bedad ContainerID="12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" Namespace="kube-system" Pod="coredns-66bc5c9577-sghzp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sghzp-eth0" Nov 1 10:08:51.895412 containerd[1601]: 2025-11-01 10:08:51.875 [INFO][4184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" Namespace="kube-system" Pod="coredns-66bc5c9577-sghzp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sghzp-eth0" Nov 1 10:08:51.895412 containerd[1601]: 2025-11-01 10:08:51.875 [INFO][4184] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" Namespace="kube-system" Pod="coredns-66bc5c9577-sghzp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sghzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--sghzp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"644d8165-5889-4fa8-a643-99fd8cf0c4f1", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2", Pod:"coredns-66bc5c9577-sghzp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1d330bedad", MAC:"be:01:49:8a:a1:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:08:51.895412 containerd[1601]: 2025-11-01 10:08:51.890 [INFO][4184] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" Namespace="kube-system" Pod="coredns-66bc5c9577-sghzp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sghzp-eth0" Nov 1 10:08:51.921560 containerd[1601]: time="2025-11-01T10:08:51.921296053Z" level=info msg="connecting to shim 12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2" address="unix:///run/containerd/s/29e9c1bf0335a89f304b124ce0c869abbe868581319d56b75aa84af357eaab6f" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:08:51.951037 systemd[1]: Started cri-containerd-12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2.scope - libcontainer container 12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2. Nov 1 10:08:51.976154 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:08:51.980101 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 58332 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:08:51.982885 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:08:51.992359 systemd-logind[1585]: New session 8 of user core. Nov 1 10:08:52.001950 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 1 10:08:52.013428 containerd[1601]: time="2025-11-01T10:08:52.013353832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sghzp,Uid:644d8165-5889-4fa8-a643-99fd8cf0c4f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2\"" Nov 1 10:08:52.014673 kubelet[2747]: E1101 10:08:52.014612 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:52.019624 containerd[1601]: time="2025-11-01T10:08:52.019585848Z" level=info msg="CreateContainer within sandbox \"12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 10:08:52.034674 containerd[1601]: time="2025-11-01T10:08:52.034602681Z" level=info msg="Container 319ea577b8bfbc824024a9050e5a68f49452a5c04fc0968582b8db9f0606db53: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:08:52.041557 containerd[1601]: time="2025-11-01T10:08:52.041523834Z" level=info msg="CreateContainer within sandbox \"12cf67bf4400442fc3cd388bbda94ef5ec0465f70b0c67c11fa1aacc23e0b1a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"319ea577b8bfbc824024a9050e5a68f49452a5c04fc0968582b8db9f0606db53\"" Nov 1 10:08:52.043086 containerd[1601]: time="2025-11-01T10:08:52.043062336Z" level=info msg="StartContainer for \"319ea577b8bfbc824024a9050e5a68f49452a5c04fc0968582b8db9f0606db53\"" Nov 1 10:08:52.044657 containerd[1601]: time="2025-11-01T10:08:52.044633069Z" level=info msg="connecting to shim 319ea577b8bfbc824024a9050e5a68f49452a5c04fc0968582b8db9f0606db53" address="unix:///run/containerd/s/29e9c1bf0335a89f304b124ce0c869abbe868581319d56b75aa84af357eaab6f" protocol=ttrpc version=3 Nov 1 10:08:52.072855 systemd[1]: Started cri-containerd-319ea577b8bfbc824024a9050e5a68f49452a5c04fc0968582b8db9f0606db53.scope - libcontainer 
container 319ea577b8bfbc824024a9050e5a68f49452a5c04fc0968582b8db9f0606db53. Nov 1 10:08:52.079925 containerd[1601]: time="2025-11-01T10:08:52.079877892Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:08:52.081169 containerd[1601]: time="2025-11-01T10:08:52.081122591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 10:08:52.081233 containerd[1601]: time="2025-11-01T10:08:52.081215406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 1 10:08:52.082731 kubelet[2747]: E1101 10:08:52.081336 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:08:52.082731 kubelet[2747]: E1101 10:08:52.081380 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:08:52.082731 kubelet[2747]: E1101 10:08:52.081447 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8988d9647-8v76c_calico-system(925c93c8-2f46-4e42-abc0-187db9f33983): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
logger="UnhandledError" Nov 1 10:08:52.082731 kubelet[2747]: E1101 10:08:52.081486 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8988d9647-8v76c" podUID="925c93c8-2f46-4e42-abc0-187db9f33983" Nov 1 10:08:52.095336 kubelet[2747]: E1101 10:08:52.095262 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8988d9647-8v76c" podUID="925c93c8-2f46-4e42-abc0-187db9f33983" Nov 1 10:08:52.131088 sshd[4269]: Connection closed by 10.0.0.1 port 58332 Nov 1 10:08:52.131587 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Nov 1 10:08:52.135377 containerd[1601]: time="2025-11-01T10:08:52.135332530Z" level=info msg="StartContainer for 
\"319ea577b8bfbc824024a9050e5a68f49452a5c04fc0968582b8db9f0606db53\" returns successfully" Nov 1 10:08:52.139117 systemd[1]: sshd@7-10.0.0.91:22-10.0.0.1:58332.service: Deactivated successfully. Nov 1 10:08:52.141322 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 10:08:52.142134 systemd-logind[1585]: Session 8 logged out. Waiting for processes to exit. Nov 1 10:08:52.143527 systemd-logind[1585]: Removed session 8. Nov 1 10:08:52.182142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3912666140.mount: Deactivated successfully. Nov 1 10:08:52.187850 systemd-networkd[1502]: cali24349538338: Gained IPv6LL Nov 1 10:08:52.617197 containerd[1601]: time="2025-11-01T10:08:52.617127376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wgv7h,Uid:2d70db43-7a2b-4384-9018-e7385784e621,Namespace:calico-system,Attempt:0,}" Nov 1 10:08:52.618791 containerd[1601]: time="2025-11-01T10:08:52.618745277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-988dffdbc-fdbhp,Uid:5d708427-4ed7-49ae-b59a-606507c6e8d8,Namespace:calico-apiserver,Attempt:0,}" Nov 1 10:08:52.726673 systemd-networkd[1502]: calia474b3d4b61: Link UP Nov 1 10:08:52.727240 systemd-networkd[1502]: calia474b3d4b61: Gained carrier Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.659 [INFO][4335] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wgv7h-eth0 csi-node-driver- calico-system 2d70db43-7a2b-4384-9018-e7385784e621 715 0 2025-11-01 10:08:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wgv7h eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] calia474b3d4b61 [] [] }} ContainerID="c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" Namespace="calico-system" Pod="csi-node-driver-wgv7h" WorkloadEndpoint="localhost-k8s-csi--node--driver--wgv7h-" Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.659 [INFO][4335] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" Namespace="calico-system" Pod="csi-node-driver-wgv7h" WorkloadEndpoint="localhost-k8s-csi--node--driver--wgv7h-eth0" Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.691 [INFO][4363] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" HandleID="k8s-pod-network.c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" Workload="localhost-k8s-csi--node--driver--wgv7h-eth0" Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.691 [INFO][4363] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" HandleID="k8s-pod-network.c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" Workload="localhost-k8s-csi--node--driver--wgv7h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002defd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wgv7h", "timestamp":"2025-11-01 10:08:52.691640865 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.692 [INFO][4363] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.692 [INFO][4363] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.692 [INFO][4363] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.698 [INFO][4363] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" host="localhost" Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.702 [INFO][4363] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.705 [INFO][4363] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.706 [INFO][4363] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.708 [INFO][4363] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.708 [INFO][4363] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" host="localhost" Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.709 [INFO][4363] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2 Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.712 [INFO][4363] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" host="localhost" Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.716 [INFO][4363] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" host="localhost" Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.716 [INFO][4363] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" host="localhost" Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.716 [INFO][4363] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:08:52.740401 containerd[1601]: 2025-11-01 10:08:52.716 [INFO][4363] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" HandleID="k8s-pod-network.c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" Workload="localhost-k8s-csi--node--driver--wgv7h-eth0" Nov 1 10:08:52.741016 containerd[1601]: 2025-11-01 10:08:52.721 [INFO][4335] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" Namespace="calico-system" Pod="csi-node-driver-wgv7h" WorkloadEndpoint="localhost-k8s-csi--node--driver--wgv7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wgv7h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d70db43-7a2b-4384-9018-e7385784e621", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wgv7h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia474b3d4b61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:08:52.741016 containerd[1601]: 2025-11-01 10:08:52.721 [INFO][4335] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" Namespace="calico-system" Pod="csi-node-driver-wgv7h" WorkloadEndpoint="localhost-k8s-csi--node--driver--wgv7h-eth0" Nov 1 10:08:52.741016 containerd[1601]: 2025-11-01 10:08:52.721 [INFO][4335] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia474b3d4b61 ContainerID="c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" Namespace="calico-system" Pod="csi-node-driver-wgv7h" WorkloadEndpoint="localhost-k8s-csi--node--driver--wgv7h-eth0" Nov 1 10:08:52.741016 containerd[1601]: 2025-11-01 10:08:52.728 [INFO][4335] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" Namespace="calico-system" Pod="csi-node-driver-wgv7h" WorkloadEndpoint="localhost-k8s-csi--node--driver--wgv7h-eth0" Nov 1 10:08:52.741016 containerd[1601]: 2025-11-01 10:08:52.729 [INFO][4335] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" Namespace="calico-system" Pod="csi-node-driver-wgv7h" WorkloadEndpoint="localhost-k8s-csi--node--driver--wgv7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wgv7h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d70db43-7a2b-4384-9018-e7385784e621", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2", Pod:"csi-node-driver-wgv7h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia474b3d4b61", MAC:"92:d7:4c:ce:17:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:08:52.741016 containerd[1601]: 2025-11-01 10:08:52.737 [INFO][4335] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" 
Namespace="calico-system" Pod="csi-node-driver-wgv7h" WorkloadEndpoint="localhost-k8s-csi--node--driver--wgv7h-eth0" Nov 1 10:08:52.762680 containerd[1601]: time="2025-11-01T10:08:52.762622486Z" level=info msg="connecting to shim c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2" address="unix:///run/containerd/s/85e4da12ab4395100f067e935c9cd768f504af18c8e832ba7beab1c056c86aa8" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:08:52.793875 systemd[1]: Started cri-containerd-c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2.scope - libcontainer container c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2. Nov 1 10:08:52.815081 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:08:52.839284 containerd[1601]: time="2025-11-01T10:08:52.839235052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wgv7h,Uid:2d70db43-7a2b-4384-9018-e7385784e621,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0fe4a68bc98fb82624d2f10f4b2fc75fbddfde1c121ed71ac24c3de5e4a6ff2\"" Nov 1 10:08:52.842381 containerd[1601]: time="2025-11-01T10:08:52.842274577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 10:08:52.847111 systemd-networkd[1502]: calicab3e14894b: Link UP Nov 1 10:08:52.847341 systemd-networkd[1502]: calicab3e14894b: Gained carrier Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.663 [INFO][4348] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--988dffdbc--fdbhp-eth0 calico-apiserver-988dffdbc- calico-apiserver 5d708427-4ed7-49ae-b59a-606507c6e8d8 831 0 2025-11-01 10:08:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:988dffdbc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-988dffdbc-fdbhp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicab3e14894b [] [] }} ContainerID="2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" Namespace="calico-apiserver" Pod="calico-apiserver-988dffdbc-fdbhp" WorkloadEndpoint="localhost-k8s-calico--apiserver--988dffdbc--fdbhp-" Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.663 [INFO][4348] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" Namespace="calico-apiserver" Pod="calico-apiserver-988dffdbc-fdbhp" WorkloadEndpoint="localhost-k8s-calico--apiserver--988dffdbc--fdbhp-eth0" Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.692 [INFO][4370] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" HandleID="k8s-pod-network.2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" Workload="localhost-k8s-calico--apiserver--988dffdbc--fdbhp-eth0" Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.692 [INFO][4370] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" HandleID="k8s-pod-network.2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" Workload="localhost-k8s-calico--apiserver--988dffdbc--fdbhp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000287590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-988dffdbc-fdbhp", "timestamp":"2025-11-01 10:08:52.69271261 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.693 [INFO][4370] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.716 [INFO][4370] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.717 [INFO][4370] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.800 [INFO][4370] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" host="localhost" Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.809 [INFO][4370] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.818 [INFO][4370] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.822 [INFO][4370] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.824 [INFO][4370] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.824 [INFO][4370] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" host="localhost" Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.828 [INFO][4370] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921 Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.832 [INFO][4370] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" host="localhost" Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.838 [INFO][4370] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" host="localhost" Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.838 [INFO][4370] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" host="localhost" Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.838 [INFO][4370] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:08:52.861250 containerd[1601]: 2025-11-01 10:08:52.838 [INFO][4370] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" HandleID="k8s-pod-network.2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" Workload="localhost-k8s-calico--apiserver--988dffdbc--fdbhp-eth0" Nov 1 10:08:52.861831 containerd[1601]: 2025-11-01 10:08:52.843 [INFO][4348] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" Namespace="calico-apiserver" Pod="calico-apiserver-988dffdbc-fdbhp" WorkloadEndpoint="localhost-k8s-calico--apiserver--988dffdbc--fdbhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--988dffdbc--fdbhp-eth0", GenerateName:"calico-apiserver-988dffdbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d708427-4ed7-49ae-b59a-606507c6e8d8", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 8, 27, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"988dffdbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-988dffdbc-fdbhp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicab3e14894b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:08:52.861831 containerd[1601]: 2025-11-01 10:08:52.843 [INFO][4348] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" Namespace="calico-apiserver" Pod="calico-apiserver-988dffdbc-fdbhp" WorkloadEndpoint="localhost-k8s-calico--apiserver--988dffdbc--fdbhp-eth0" Nov 1 10:08:52.861831 containerd[1601]: 2025-11-01 10:08:52.843 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicab3e14894b ContainerID="2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" Namespace="calico-apiserver" Pod="calico-apiserver-988dffdbc-fdbhp" WorkloadEndpoint="localhost-k8s-calico--apiserver--988dffdbc--fdbhp-eth0" Nov 1 10:08:52.861831 containerd[1601]: 2025-11-01 10:08:52.849 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" 
Namespace="calico-apiserver" Pod="calico-apiserver-988dffdbc-fdbhp" WorkloadEndpoint="localhost-k8s-calico--apiserver--988dffdbc--fdbhp-eth0" Nov 1 10:08:52.861831 containerd[1601]: 2025-11-01 10:08:52.849 [INFO][4348] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" Namespace="calico-apiserver" Pod="calico-apiserver-988dffdbc-fdbhp" WorkloadEndpoint="localhost-k8s-calico--apiserver--988dffdbc--fdbhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--988dffdbc--fdbhp-eth0", GenerateName:"calico-apiserver-988dffdbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d708427-4ed7-49ae-b59a-606507c6e8d8", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 8, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"988dffdbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921", Pod:"calico-apiserver-988dffdbc-fdbhp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicab3e14894b", MAC:"32:b7:f3:5a:1a:d8", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:08:52.861831 containerd[1601]: 2025-11-01 10:08:52.856 [INFO][4348] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" Namespace="calico-apiserver" Pod="calico-apiserver-988dffdbc-fdbhp" WorkloadEndpoint="localhost-k8s-calico--apiserver--988dffdbc--fdbhp-eth0" Nov 1 10:08:52.885421 containerd[1601]: time="2025-11-01T10:08:52.884866383Z" level=info msg="connecting to shim 2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921" address="unix:///run/containerd/s/87893e7b5ba05244ea3348e46e8f12215a75f9125325bdfe4915d7dcf5391558" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:08:52.907845 systemd[1]: Started cri-containerd-2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921.scope - libcontainer container 2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921. 
Nov 1 10:08:52.924668 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:08:52.956964 containerd[1601]: time="2025-11-01T10:08:52.956896112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-988dffdbc-fdbhp,Uid:5d708427-4ed7-49ae-b59a-606507c6e8d8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2a08cac831ecb96ca51f24e5b33b6ab22ea01feb7bb7d51acd4a06adde124921\"" Nov 1 10:08:53.019886 systemd-networkd[1502]: calif1d330bedad: Gained IPv6LL Nov 1 10:08:53.101271 kubelet[2747]: E1101 10:08:53.101230 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:53.107096 kubelet[2747]: E1101 10:08:53.107029 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8988d9647-8v76c" podUID="925c93c8-2f46-4e42-abc0-187db9f33983" Nov 1 10:08:53.117356 kubelet[2747]: I1101 10:08:53.117278 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sghzp" podStartSLOduration=33.117264371 podStartE2EDuration="33.117264371s" 
podCreationTimestamp="2025-11-01 10:08:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:08:53.116054347 +0000 UTC m=+39.594288700" watchObservedRunningTime="2025-11-01 10:08:53.117264371 +0000 UTC m=+39.595498714" Nov 1 10:08:53.231565 containerd[1601]: time="2025-11-01T10:08:53.231412343Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:08:53.232563 containerd[1601]: time="2025-11-01T10:08:53.232515236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 10:08:53.232676 containerd[1601]: time="2025-11-01T10:08:53.232607589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 1 10:08:53.232895 kubelet[2747]: E1101 10:08:53.232856 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:08:53.232957 kubelet[2747]: E1101 10:08:53.232900 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:08:53.233158 kubelet[2747]: E1101 10:08:53.233105 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-wgv7h_calico-system(2d70db43-7a2b-4384-9018-e7385784e621): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 10:08:53.233372 containerd[1601]: time="2025-11-01T10:08:53.233298769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:08:53.275920 systemd-networkd[1502]: vxlan.calico: Gained IPv6LL Nov 1 10:08:53.564445 containerd[1601]: time="2025-11-01T10:08:53.564325303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:08:53.565465 containerd[1601]: time="2025-11-01T10:08:53.565412958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:08:53.565516 containerd[1601]: time="2025-11-01T10:08:53.565449236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:08:53.565666 kubelet[2747]: E1101 10:08:53.565620 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:08:53.565740 kubelet[2747]: E1101 10:08:53.565667 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:08:53.565871 kubelet[2747]: E1101 10:08:53.565845 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-988dffdbc-fdbhp_calico-apiserver(5d708427-4ed7-49ae-b59a-606507c6e8d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:08:53.565871 kubelet[2747]: E1101 10:08:53.565885 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-988dffdbc-fdbhp" podUID="5d708427-4ed7-49ae-b59a-606507c6e8d8" Nov 1 10:08:53.566058 containerd[1601]: time="2025-11-01T10:08:53.566035268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 10:08:53.617064 containerd[1601]: time="2025-11-01T10:08:53.617019928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f8c559b4d-57dz9,Uid:8cb58978-7fd1-4b27-8e4f-8d93d2102825,Namespace:calico-system,Attempt:0,}" Nov 1 10:08:53.717286 systemd-networkd[1502]: cali978884b566e: Link UP Nov 1 10:08:53.718192 systemd-networkd[1502]: cali978884b566e: Gained carrier Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.655 [INFO][4497] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7f8c559b4d--57dz9-eth0 calico-kube-controllers-7f8c559b4d- calico-system 8cb58978-7fd1-4b27-8e4f-8d93d2102825 827 0 2025-11-01 10:08:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f8c559b4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] 
[] [] []} {k8s localhost calico-kube-controllers-7f8c559b4d-57dz9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali978884b566e [] [] }} ContainerID="829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" Namespace="calico-system" Pod="calico-kube-controllers-7f8c559b4d-57dz9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f8c559b4d--57dz9-" Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.655 [INFO][4497] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" Namespace="calico-system" Pod="calico-kube-controllers-7f8c559b4d-57dz9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f8c559b4d--57dz9-eth0" Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.683 [INFO][4511] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" HandleID="k8s-pod-network.829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" Workload="localhost-k8s-calico--kube--controllers--7f8c559b4d--57dz9-eth0" Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.683 [INFO][4511] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" HandleID="k8s-pod-network.829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" Workload="localhost-k8s-calico--kube--controllers--7f8c559b4d--57dz9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7f8c559b4d-57dz9", "timestamp":"2025-11-01 10:08:53.683796081 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.684 [INFO][4511] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.684 [INFO][4511] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.684 [INFO][4511] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.690 [INFO][4511] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" host="localhost" Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.694 [INFO][4511] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.697 [INFO][4511] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.699 [INFO][4511] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.701 [INFO][4511] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.701 [INFO][4511] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" host="localhost" Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.702 [INFO][4511] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47 Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.705 [INFO][4511] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" host="localhost" Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.710 [INFO][4511] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" host="localhost" Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.710 [INFO][4511] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" host="localhost" Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.710 [INFO][4511] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:08:53.730240 containerd[1601]: 2025-11-01 10:08:53.710 [INFO][4511] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" HandleID="k8s-pod-network.829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" Workload="localhost-k8s-calico--kube--controllers--7f8c559b4d--57dz9-eth0" Nov 1 10:08:53.731631 containerd[1601]: 2025-11-01 10:08:53.714 [INFO][4497] cni-plugin/k8s.go 418: Populated endpoint ContainerID="829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" Namespace="calico-system" Pod="calico-kube-controllers-7f8c559b4d-57dz9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f8c559b4d--57dz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f8c559b4d--57dz9-eth0", GenerateName:"calico-kube-controllers-7f8c559b4d-", Namespace:"calico-system", SelfLink:"", UID:"8cb58978-7fd1-4b27-8e4f-8d93d2102825", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 
8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f8c559b4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7f8c559b4d-57dz9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali978884b566e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:08:53.731631 containerd[1601]: 2025-11-01 10:08:53.714 [INFO][4497] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" Namespace="calico-system" Pod="calico-kube-controllers-7f8c559b4d-57dz9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f8c559b4d--57dz9-eth0" Nov 1 10:08:53.731631 containerd[1601]: 2025-11-01 10:08:53.714 [INFO][4497] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali978884b566e ContainerID="829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" Namespace="calico-system" Pod="calico-kube-controllers-7f8c559b4d-57dz9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f8c559b4d--57dz9-eth0" Nov 1 10:08:53.731631 containerd[1601]: 2025-11-01 10:08:53.718 [INFO][4497] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" Namespace="calico-system" Pod="calico-kube-controllers-7f8c559b4d-57dz9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f8c559b4d--57dz9-eth0" Nov 1 10:08:53.731631 containerd[1601]: 2025-11-01 10:08:53.718 [INFO][4497] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" Namespace="calico-system" Pod="calico-kube-controllers-7f8c559b4d-57dz9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f8c559b4d--57dz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f8c559b4d--57dz9-eth0", GenerateName:"calico-kube-controllers-7f8c559b4d-", Namespace:"calico-system", SelfLink:"", UID:"8cb58978-7fd1-4b27-8e4f-8d93d2102825", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f8c559b4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47", Pod:"calico-kube-controllers-7f8c559b4d-57dz9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali978884b566e", MAC:"92:2f:3e:be:ce:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:08:53.731631 containerd[1601]: 2025-11-01 10:08:53.725 [INFO][4497] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" Namespace="calico-system" Pod="calico-kube-controllers-7f8c559b4d-57dz9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f8c559b4d--57dz9-eth0" Nov 1 10:08:53.755823 containerd[1601]: time="2025-11-01T10:08:53.755723999Z" level=info msg="connecting to shim 829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47" address="unix:///run/containerd/s/6b021cae9d09f877266d71c5aa62a261cc6504187ca24df2a1ba9a09d4b3bd07" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:08:53.785846 systemd[1]: Started cri-containerd-829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47.scope - libcontainer container 829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47. 
Nov 1 10:08:53.799239 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:08:53.831728 containerd[1601]: time="2025-11-01T10:08:53.831658618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f8c559b4d-57dz9,Uid:8cb58978-7fd1-4b27-8e4f-8d93d2102825,Namespace:calico-system,Attempt:0,} returns sandbox id \"829ef6c9799bbed6fd562cab847c19237d6c1d5bb554f7a0f52b02576f5e5e47\"" Nov 1 10:08:53.910639 containerd[1601]: time="2025-11-01T10:08:53.910597585Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:08:53.911807 containerd[1601]: time="2025-11-01T10:08:53.911766724Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 10:08:53.911884 containerd[1601]: time="2025-11-01T10:08:53.911832497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 1 10:08:53.911994 kubelet[2747]: E1101 10:08:53.911947 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:08:53.911994 kubelet[2747]: E1101 10:08:53.911988 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:08:53.912208 kubelet[2747]: E1101 10:08:53.912115 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-wgv7h_calico-system(2d70db43-7a2b-4384-9018-e7385784e621): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 10:08:53.912208 kubelet[2747]: E1101 10:08:53.912165 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wgv7h" podUID="2d70db43-7a2b-4384-9018-e7385784e621" Nov 1 10:08:53.912355 containerd[1601]: time="2025-11-01T10:08:53.912235074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 10:08:53.916842 systemd-networkd[1502]: calia474b3d4b61: Gained IPv6LL Nov 1 10:08:54.107378 kubelet[2747]: E1101 10:08:54.107060 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:54.108406 kubelet[2747]: E1101 10:08:54.108363 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-988dffdbc-fdbhp" podUID="5d708427-4ed7-49ae-b59a-606507c6e8d8" Nov 1 10:08:54.108560 kubelet[2747]: E1101 10:08:54.108482 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wgv7h" podUID="2d70db43-7a2b-4384-9018-e7385784e621" Nov 1 10:08:54.235876 systemd-networkd[1502]: calicab3e14894b: Gained IPv6LL Nov 1 10:08:54.297620 containerd[1601]: time="2025-11-01T10:08:54.297552761Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:08:54.298728 containerd[1601]: time="2025-11-01T10:08:54.298669710Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 10:08:54.298814 containerd[1601]: 
time="2025-11-01T10:08:54.298742316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 1 10:08:54.298948 kubelet[2747]: E1101 10:08:54.298900 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 10:08:54.299011 kubelet[2747]: E1101 10:08:54.298950 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 10:08:54.299035 kubelet[2747]: E1101 10:08:54.299021 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7f8c559b4d-57dz9_calico-system(8cb58978-7fd1-4b27-8e4f-8d93d2102825): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 10:08:54.299075 kubelet[2747]: E1101 10:08:54.299049 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f8c559b4d-57dz9" podUID="8cb58978-7fd1-4b27-8e4f-8d93d2102825" Nov 1 10:08:54.617402 containerd[1601]: 
time="2025-11-01T10:08:54.617332515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-988dffdbc-59586,Uid:2dc8436d-b4aa-4090-a31c-cb311723721e,Namespace:calico-apiserver,Attempt:0,}" Nov 1 10:08:54.618792 kubelet[2747]: E1101 10:08:54.618735 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:54.619278 containerd[1601]: time="2025-11-01T10:08:54.619234269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hgd94,Uid:22e74b9c-2f6c-481f-9e41-938b85854239,Namespace:kube-system,Attempt:0,}" Nov 1 10:08:54.621371 containerd[1601]: time="2025-11-01T10:08:54.621322194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-rw5k5,Uid:157d34c2-941f-430c-9ff6-c3d7ebcb5c55,Namespace:calico-system,Attempt:0,}" Nov 1 10:08:54.761312 systemd-networkd[1502]: cali26a41958811: Link UP Nov 1 10:08:54.762613 systemd-networkd[1502]: cali26a41958811: Gained carrier Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.686 [INFO][4602] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--rw5k5-eth0 goldmane-7c778bb748- calico-system 157d34c2-941f-430c-9ff6-c3d7ebcb5c55 832 0 2025-11-01 10:08:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-rw5k5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali26a41958811 [] [] }} ContainerID="56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" Namespace="calico-system" Pod="goldmane-7c778bb748-rw5k5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--rw5k5-" Nov 1 10:08:54.775253 
containerd[1601]: 2025-11-01 10:08:54.686 [INFO][4602] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" Namespace="calico-system" Pod="goldmane-7c778bb748-rw5k5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--rw5k5-eth0" Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.723 [INFO][4628] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" HandleID="k8s-pod-network.56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" Workload="localhost-k8s-goldmane--7c778bb748--rw5k5-eth0" Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.723 [INFO][4628] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" HandleID="k8s-pod-network.56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" Workload="localhost-k8s-goldmane--7c778bb748--rw5k5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e70e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-rw5k5", "timestamp":"2025-11-01 10:08:54.723481712 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.723 [INFO][4628] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.724 [INFO][4628] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.724 [INFO][4628] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.731 [INFO][4628] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" host="localhost" Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.735 [INFO][4628] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.739 [INFO][4628] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.740 [INFO][4628] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.742 [INFO][4628] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.742 [INFO][4628] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" host="localhost" Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.743 [INFO][4628] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.746 [INFO][4628] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" host="localhost" Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.752 [INFO][4628] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" host="localhost" Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.752 [INFO][4628] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" host="localhost" Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.752 [INFO][4628] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:08:54.775253 containerd[1601]: 2025-11-01 10:08:54.752 [INFO][4628] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" HandleID="k8s-pod-network.56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" Workload="localhost-k8s-goldmane--7c778bb748--rw5k5-eth0" Nov 1 10:08:54.777267 containerd[1601]: 2025-11-01 10:08:54.757 [INFO][4602] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" Namespace="calico-system" Pod="goldmane-7c778bb748-rw5k5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--rw5k5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--rw5k5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"157d34c2-941f-430c-9ff6-c3d7ebcb5c55", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-rw5k5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali26a41958811", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:08:54.777267 containerd[1601]: 2025-11-01 10:08:54.757 [INFO][4602] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" Namespace="calico-system" Pod="goldmane-7c778bb748-rw5k5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--rw5k5-eth0" Nov 1 10:08:54.777267 containerd[1601]: 2025-11-01 10:08:54.757 [INFO][4602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26a41958811 ContainerID="56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" Namespace="calico-system" Pod="goldmane-7c778bb748-rw5k5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--rw5k5-eth0" Nov 1 10:08:54.777267 containerd[1601]: 2025-11-01 10:08:54.762 [INFO][4602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" Namespace="calico-system" Pod="goldmane-7c778bb748-rw5k5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--rw5k5-eth0" Nov 1 10:08:54.777267 containerd[1601]: 2025-11-01 10:08:54.763 [INFO][4602] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" Namespace="calico-system" Pod="goldmane-7c778bb748-rw5k5" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--rw5k5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--rw5k5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"157d34c2-941f-430c-9ff6-c3d7ebcb5c55", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c", Pod:"goldmane-7c778bb748-rw5k5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali26a41958811", MAC:"1e:a2:97:08:6f:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:08:54.777267 containerd[1601]: 2025-11-01 10:08:54.772 [INFO][4602] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" Namespace="calico-system" Pod="goldmane-7c778bb748-rw5k5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--rw5k5-eth0" Nov 1 10:08:54.797143 containerd[1601]: time="2025-11-01T10:08:54.797085951Z" level=info msg="connecting to shim 
56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c" address="unix:///run/containerd/s/6b892293d0b65ec7fca0ca16ca8221c245734f3b70bf5ff6bf3e7b5ae93be820" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:08:54.830846 systemd[1]: Started cri-containerd-56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c.scope - libcontainer container 56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c. Nov 1 10:08:54.847312 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:08:54.874303 systemd-networkd[1502]: cali6cc185197ed: Link UP Nov 1 10:08:54.875468 systemd-networkd[1502]: cali6cc185197ed: Gained carrier Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.690 [INFO][4575] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--988dffdbc--59586-eth0 calico-apiserver-988dffdbc- calico-apiserver 2dc8436d-b4aa-4090-a31c-cb311723721e 830 0 2025-11-01 10:08:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:988dffdbc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-988dffdbc-59586 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6cc185197ed [] [] }} ContainerID="0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" Namespace="calico-apiserver" Pod="calico-apiserver-988dffdbc-59586" WorkloadEndpoint="localhost-k8s-calico--apiserver--988dffdbc--59586-" Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.690 [INFO][4575] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" Namespace="calico-apiserver" Pod="calico-apiserver-988dffdbc-59586" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--988dffdbc--59586-eth0" Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.725 [INFO][4635] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" HandleID="k8s-pod-network.0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" Workload="localhost-k8s-calico--apiserver--988dffdbc--59586-eth0" Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.726 [INFO][4635] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" HandleID="k8s-pod-network.0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" Workload="localhost-k8s-calico--apiserver--988dffdbc--59586-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002defd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-988dffdbc-59586", "timestamp":"2025-11-01 10:08:54.725946625 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.726 [INFO][4635] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.752 [INFO][4635] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.752 [INFO][4635] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.832 [INFO][4635] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" host="localhost" Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.838 [INFO][4635] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.844 [INFO][4635] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.845 [INFO][4635] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.848 [INFO][4635] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.848 [INFO][4635] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" host="localhost" Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.850 [INFO][4635] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3 Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.856 [INFO][4635] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" host="localhost" Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.861 [INFO][4635] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" host="localhost" Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.863 [INFO][4635] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" host="localhost" Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.863 [INFO][4635] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:08:54.890101 containerd[1601]: 2025-11-01 10:08:54.863 [INFO][4635] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" HandleID="k8s-pod-network.0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" Workload="localhost-k8s-calico--apiserver--988dffdbc--59586-eth0" Nov 1 10:08:54.890655 containerd[1601]: 2025-11-01 10:08:54.867 [INFO][4575] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" Namespace="calico-apiserver" Pod="calico-apiserver-988dffdbc-59586" WorkloadEndpoint="localhost-k8s-calico--apiserver--988dffdbc--59586-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--988dffdbc--59586-eth0", GenerateName:"calico-apiserver-988dffdbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2dc8436d-b4aa-4090-a31c-cb311723721e", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 8, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"988dffdbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-988dffdbc-59586", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cc185197ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:08:54.890655 containerd[1601]: 2025-11-01 10:08:54.867 [INFO][4575] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" Namespace="calico-apiserver" Pod="calico-apiserver-988dffdbc-59586" WorkloadEndpoint="localhost-k8s-calico--apiserver--988dffdbc--59586-eth0" Nov 1 10:08:54.890655 containerd[1601]: 2025-11-01 10:08:54.867 [INFO][4575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6cc185197ed ContainerID="0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" Namespace="calico-apiserver" Pod="calico-apiserver-988dffdbc-59586" WorkloadEndpoint="localhost-k8s-calico--apiserver--988dffdbc--59586-eth0" Nov 1 10:08:54.890655 containerd[1601]: 2025-11-01 10:08:54.875 [INFO][4575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" Namespace="calico-apiserver" Pod="calico-apiserver-988dffdbc-59586" WorkloadEndpoint="localhost-k8s-calico--apiserver--988dffdbc--59586-eth0" Nov 1 10:08:54.890655 containerd[1601]: 2025-11-01 10:08:54.876 [INFO][4575] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" Namespace="calico-apiserver" Pod="calico-apiserver-988dffdbc-59586" WorkloadEndpoint="localhost-k8s-calico--apiserver--988dffdbc--59586-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--988dffdbc--59586-eth0", GenerateName:"calico-apiserver-988dffdbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2dc8436d-b4aa-4090-a31c-cb311723721e", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 8, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"988dffdbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3", Pod:"calico-apiserver-988dffdbc-59586", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cc185197ed", MAC:"da:24:13:11:a1:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:08:54.890655 containerd[1601]: 2025-11-01 10:08:54.885 [INFO][4575] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" Namespace="calico-apiserver" Pod="calico-apiserver-988dffdbc-59586" WorkloadEndpoint="localhost-k8s-calico--apiserver--988dffdbc--59586-eth0" Nov 1 10:08:54.893355 containerd[1601]: time="2025-11-01T10:08:54.893032842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-rw5k5,Uid:157d34c2-941f-430c-9ff6-c3d7ebcb5c55,Namespace:calico-system,Attempt:0,} returns sandbox id \"56e077f650e955ec22771923207de9a7082560cf07a880cef53bfe75a987589c\"" Nov 1 10:08:54.895865 containerd[1601]: time="2025-11-01T10:08:54.895817777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 10:08:54.916447 containerd[1601]: time="2025-11-01T10:08:54.916402766Z" level=info msg="connecting to shim 0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3" address="unix:///run/containerd/s/e98a2ccc0c34846f8a6627ec3f0fa8226da5b6ed071c268a9aa58b6c4170a30c" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:08:54.938956 systemd[1]: Started cri-containerd-0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3.scope - libcontainer container 0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3. 
Nov 1 10:08:54.958634 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:08:54.967142 systemd-networkd[1502]: cali6ea89a29183: Link UP Nov 1 10:08:54.967343 systemd-networkd[1502]: cali6ea89a29183: Gained carrier Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.682 [INFO][4578] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--hgd94-eth0 coredns-66bc5c9577- kube-system 22e74b9c-2f6c-481f-9e41-938b85854239 833 0 2025-11-01 10:08:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-hgd94 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6ea89a29183 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" Namespace="kube-system" Pod="coredns-66bc5c9577-hgd94" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hgd94-" Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.682 [INFO][4578] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" Namespace="kube-system" Pod="coredns-66bc5c9577-hgd94" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hgd94-eth0" Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.727 [INFO][4622] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" HandleID="k8s-pod-network.55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" Workload="localhost-k8s-coredns--66bc5c9577--hgd94-eth0" Nov 1 10:08:54.983675 containerd[1601]: 
2025-11-01 10:08:54.727 [INFO][4622] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" HandleID="k8s-pod-network.55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" Workload="localhost-k8s-coredns--66bc5c9577--hgd94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7000), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-hgd94", "timestamp":"2025-11-01 10:08:54.727341636 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.728 [INFO][4622] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.863 [INFO][4622] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.863 [INFO][4622] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.933 [INFO][4622] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" host="localhost" Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.939 [INFO][4622] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.946 [INFO][4622] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.948 [INFO][4622] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.950 [INFO][4622] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.950 [INFO][4622] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" host="localhost" Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.951 [INFO][4622] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27 Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.954 [INFO][4622] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" host="localhost" Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.960 [INFO][4622] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" host="localhost" Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.960 [INFO][4622] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" host="localhost" Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.960 [INFO][4622] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:08:54.983675 containerd[1601]: 2025-11-01 10:08:54.960 [INFO][4622] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" HandleID="k8s-pod-network.55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" Workload="localhost-k8s-coredns--66bc5c9577--hgd94-eth0" Nov 1 10:08:54.984213 containerd[1601]: 2025-11-01 10:08:54.964 [INFO][4578] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" Namespace="kube-system" Pod="coredns-66bc5c9577-hgd94" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hgd94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--hgd94-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"22e74b9c-2f6c-481f-9e41-938b85854239", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-hgd94", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ea89a29183", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:08:54.984213 containerd[1601]: 2025-11-01 10:08:54.964 [INFO][4578] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" Namespace="kube-system" Pod="coredns-66bc5c9577-hgd94" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hgd94-eth0" Nov 1 10:08:54.984213 containerd[1601]: 2025-11-01 10:08:54.964 [INFO][4578] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ea89a29183 ContainerID="55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" Namespace="kube-system" Pod="coredns-66bc5c9577-hgd94" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hgd94-eth0" Nov 1 
10:08:54.984213 containerd[1601]: 2025-11-01 10:08:54.966 [INFO][4578] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" Namespace="kube-system" Pod="coredns-66bc5c9577-hgd94" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hgd94-eth0" Nov 1 10:08:54.984213 containerd[1601]: 2025-11-01 10:08:54.967 [INFO][4578] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" Namespace="kube-system" Pod="coredns-66bc5c9577-hgd94" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hgd94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--hgd94-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"22e74b9c-2f6c-481f-9e41-938b85854239", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27", Pod:"coredns-66bc5c9577-hgd94", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ea89a29183", 
MAC:"ae:c6:69:79:e3:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:08:54.984213 containerd[1601]: 2025-11-01 10:08:54.977 [INFO][4578] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" Namespace="kube-system" Pod="coredns-66bc5c9577-hgd94" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hgd94-eth0" Nov 1 10:08:54.995829 containerd[1601]: time="2025-11-01T10:08:54.995714753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-988dffdbc-59586,Uid:2dc8436d-b4aa-4090-a31c-cb311723721e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0b73818878c68f08cf1c22373f6e067538ae88b9352693e4a39723998e839ea3\"" Nov 1 10:08:55.010663 containerd[1601]: time="2025-11-01T10:08:55.010609234Z" level=info msg="connecting to shim 55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27" address="unix:///run/containerd/s/d861a73f8cda4dfb49dbaedbe15912f6e2d3db4f369b9499c2b2ea77bc58fa69" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:08:55.053839 systemd[1]: Started cri-containerd-55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27.scope - 
libcontainer container 55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27. Nov 1 10:08:55.069941 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:08:55.102310 containerd[1601]: time="2025-11-01T10:08:55.102249758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hgd94,Uid:22e74b9c-2f6c-481f-9e41-938b85854239,Namespace:kube-system,Attempt:0,} returns sandbox id \"55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27\"" Nov 1 10:08:55.103362 kubelet[2747]: E1101 10:08:55.103338 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:55.107393 containerd[1601]: time="2025-11-01T10:08:55.107288708Z" level=info msg="CreateContainer within sandbox \"55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 10:08:55.113259 kubelet[2747]: E1101 10:08:55.112977 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:55.116577 kubelet[2747]: E1101 10:08:55.116041 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f8c559b4d-57dz9" podUID="8cb58978-7fd1-4b27-8e4f-8d93d2102825" Nov 1 10:08:55.124200 containerd[1601]: time="2025-11-01T10:08:55.124132220Z" level=info 
msg="Container 1836cc696ca7fd8085e272f0503fa89e59dfe3748f06ff39022a4ee2300ac941: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:08:55.130433 containerd[1601]: time="2025-11-01T10:08:55.130294080Z" level=info msg="CreateContainer within sandbox \"55b8d483228e4b2f4dfb8a5be9516ab7d77d442105112c5d659128f27a6c2c27\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1836cc696ca7fd8085e272f0503fa89e59dfe3748f06ff39022a4ee2300ac941\"" Nov 1 10:08:55.132127 containerd[1601]: time="2025-11-01T10:08:55.132100404Z" level=info msg="StartContainer for \"1836cc696ca7fd8085e272f0503fa89e59dfe3748f06ff39022a4ee2300ac941\"" Nov 1 10:08:55.132903 containerd[1601]: time="2025-11-01T10:08:55.132857718Z" level=info msg="connecting to shim 1836cc696ca7fd8085e272f0503fa89e59dfe3748f06ff39022a4ee2300ac941" address="unix:///run/containerd/s/d861a73f8cda4dfb49dbaedbe15912f6e2d3db4f369b9499c2b2ea77bc58fa69" protocol=ttrpc version=3 Nov 1 10:08:55.156853 systemd[1]: Started cri-containerd-1836cc696ca7fd8085e272f0503fa89e59dfe3748f06ff39022a4ee2300ac941.scope - libcontainer container 1836cc696ca7fd8085e272f0503fa89e59dfe3748f06ff39022a4ee2300ac941. 
Nov 1 10:08:55.192650 containerd[1601]: time="2025-11-01T10:08:55.192592483Z" level=info msg="StartContainer for \"1836cc696ca7fd8085e272f0503fa89e59dfe3748f06ff39022a4ee2300ac941\" returns successfully" Nov 1 10:08:55.227388 containerd[1601]: time="2025-11-01T10:08:55.227327408Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:08:55.228552 containerd[1601]: time="2025-11-01T10:08:55.228514800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 10:08:55.228810 containerd[1601]: time="2025-11-01T10:08:55.228574232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 1 10:08:55.228995 kubelet[2747]: E1101 10:08:55.228932 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:08:55.228995 kubelet[2747]: E1101 10:08:55.228983 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:08:55.229174 kubelet[2747]: E1101 10:08:55.229145 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-rw5k5_calico-system(157d34c2-941f-430c-9ff6-c3d7ebcb5c55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 10:08:55.229199 kubelet[2747]: E1101 10:08:55.229178 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rw5k5" podUID="157d34c2-941f-430c-9ff6-c3d7ebcb5c55" Nov 1 10:08:55.229707 containerd[1601]: time="2025-11-01T10:08:55.229660784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:08:55.387932 systemd-networkd[1502]: cali978884b566e: Gained IPv6LL Nov 1 10:08:55.580668 containerd[1601]: time="2025-11-01T10:08:55.580517516Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:08:55.584757 containerd[1601]: time="2025-11-01T10:08:55.582833699Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:08:55.584757 containerd[1601]: time="2025-11-01T10:08:55.582913971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:08:55.584984 kubelet[2747]: E1101 10:08:55.583284 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:08:55.584984 kubelet[2747]: E1101 10:08:55.583339 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:08:55.584984 kubelet[2747]: E1101 10:08:55.583444 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-988dffdbc-59586_calico-apiserver(2dc8436d-b4aa-4090-a31c-cb311723721e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:08:55.584984 kubelet[2747]: E1101 10:08:55.583486 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-988dffdbc-59586" podUID="2dc8436d-b4aa-4090-a31c-cb311723721e" Nov 1 10:08:56.116613 kubelet[2747]: E1101 10:08:56.116566 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:56.118762 kubelet[2747]: E1101 10:08:56.118267 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rw5k5" podUID="157d34c2-941f-430c-9ff6-c3d7ebcb5c55" Nov 1 10:08:56.118762 kubelet[2747]: 
E1101 10:08:56.118431 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-988dffdbc-59586" podUID="2dc8436d-b4aa-4090-a31c-cb311723721e" Nov 1 10:08:56.125800 kubelet[2747]: I1101 10:08:56.125678 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hgd94" podStartSLOduration=36.125662553 podStartE2EDuration="36.125662553s" podCreationTimestamp="2025-11-01 10:08:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:08:56.124807365 +0000 UTC m=+42.603041718" watchObservedRunningTime="2025-11-01 10:08:56.125662553 +0000 UTC m=+42.603896906" Nov 1 10:08:56.348845 systemd-networkd[1502]: cali6ea89a29183: Gained IPv6LL Nov 1 10:08:56.475883 systemd-networkd[1502]: cali6cc185197ed: Gained IPv6LL Nov 1 10:08:56.667958 systemd-networkd[1502]: cali26a41958811: Gained IPv6LL Nov 1 10:08:57.117924 kubelet[2747]: E1101 10:08:57.117896 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:08:57.151219 systemd[1]: Started sshd@8-10.0.0.91:22-10.0.0.1:58342.service - OpenSSH per-connection server daemon (10.0.0.1:58342). 
Nov 1 10:08:57.218008 sshd[4862]: Accepted publickey for core from 10.0.0.1 port 58342 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:08:57.219470 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:08:57.224145 systemd-logind[1585]: New session 9 of user core. Nov 1 10:08:57.231825 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 10:08:57.315540 sshd[4868]: Connection closed by 10.0.0.1 port 58342 Nov 1 10:08:57.315881 sshd-session[4862]: pam_unix(sshd:session): session closed for user core Nov 1 10:08:57.321065 systemd[1]: sshd@8-10.0.0.91:22-10.0.0.1:58342.service: Deactivated successfully. Nov 1 10:08:57.323198 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 10:08:57.324090 systemd-logind[1585]: Session 9 logged out. Waiting for processes to exit. Nov 1 10:08:57.325436 systemd-logind[1585]: Removed session 9. Nov 1 10:08:58.120322 kubelet[2747]: E1101 10:08:58.120275 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:09:01.228489 kubelet[2747]: I1101 10:09:01.228442 2747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 10:09:01.228961 kubelet[2747]: E1101 10:09:01.228847 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:09:02.129128 kubelet[2747]: E1101 10:09:02.129088 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:09:02.329808 systemd[1]: Started sshd@9-10.0.0.91:22-10.0.0.1:34848.service - OpenSSH per-connection server daemon (10.0.0.1:34848). 
Nov 1 10:09:02.384755 sshd[4942]: Accepted publickey for core from 10.0.0.1 port 34848 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:09:02.386041 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:09:02.390507 systemd-logind[1585]: New session 10 of user core. Nov 1 10:09:02.403924 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 10:09:02.491356 sshd[4945]: Connection closed by 10.0.0.1 port 34848 Nov 1 10:09:02.491719 sshd-session[4942]: pam_unix(sshd:session): session closed for user core Nov 1 10:09:02.500918 systemd[1]: sshd@9-10.0.0.91:22-10.0.0.1:34848.service: Deactivated successfully. Nov 1 10:09:02.503065 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 10:09:02.503953 systemd-logind[1585]: Session 10 logged out. Waiting for processes to exit. Nov 1 10:09:02.507017 systemd[1]: Started sshd@10-10.0.0.91:22-10.0.0.1:34856.service - OpenSSH per-connection server daemon (10.0.0.1:34856). Nov 1 10:09:02.507778 systemd-logind[1585]: Removed session 10. Nov 1 10:09:02.567042 sshd[4959]: Accepted publickey for core from 10.0.0.1 port 34856 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:09:02.568537 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:09:02.573366 systemd-logind[1585]: New session 11 of user core. Nov 1 10:09:02.584905 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 10:09:02.698210 sshd[4962]: Connection closed by 10.0.0.1 port 34856 Nov 1 10:09:02.700773 sshd-session[4959]: pam_unix(sshd:session): session closed for user core Nov 1 10:09:02.709788 systemd[1]: sshd@10-10.0.0.91:22-10.0.0.1:34856.service: Deactivated successfully. Nov 1 10:09:02.712120 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 10:09:02.713778 systemd-logind[1585]: Session 11 logged out. Waiting for processes to exit. 
Nov 1 10:09:02.719192 systemd[1]: Started sshd@11-10.0.0.91:22-10.0.0.1:34858.service - OpenSSH per-connection server daemon (10.0.0.1:34858). Nov 1 10:09:02.720325 systemd-logind[1585]: Removed session 11. Nov 1 10:09:02.771624 sshd[4974]: Accepted publickey for core from 10.0.0.1 port 34858 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:09:02.772942 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:09:02.777318 systemd-logind[1585]: New session 12 of user core. Nov 1 10:09:02.785881 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 10:09:02.862509 sshd[4977]: Connection closed by 10.0.0.1 port 34858 Nov 1 10:09:02.862846 sshd-session[4974]: pam_unix(sshd:session): session closed for user core Nov 1 10:09:02.868059 systemd[1]: sshd@11-10.0.0.91:22-10.0.0.1:34858.service: Deactivated successfully. Nov 1 10:09:02.870307 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 10:09:02.871151 systemd-logind[1585]: Session 12 logged out. Waiting for processes to exit. Nov 1 10:09:02.872399 systemd-logind[1585]: Removed session 12. 
Nov 1 10:09:06.616543 containerd[1601]: time="2025-11-01T10:09:06.616480191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 10:09:06.911753 containerd[1601]: time="2025-11-01T10:09:06.911539586Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:09:06.912838 containerd[1601]: time="2025-11-01T10:09:06.912804129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 10:09:06.912983 containerd[1601]: time="2025-11-01T10:09:06.912897715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 1 10:09:06.913154 kubelet[2747]: E1101 10:09:06.913034 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:09:06.913154 kubelet[2747]: E1101 10:09:06.913088 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:09:06.913619 kubelet[2747]: E1101 10:09:06.913321 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-wgv7h_calico-system(2d70db43-7a2b-4384-9018-e7385784e621): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 10:09:06.913653 containerd[1601]: 
time="2025-11-01T10:09:06.913443500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 10:09:07.250033 containerd[1601]: time="2025-11-01T10:09:07.249882194Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:09:07.251219 containerd[1601]: time="2025-11-01T10:09:07.251177857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 10:09:07.251283 containerd[1601]: time="2025-11-01T10:09:07.251260792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 1 10:09:07.251483 kubelet[2747]: E1101 10:09:07.251435 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:09:07.251536 kubelet[2747]: E1101 10:09:07.251490 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:09:07.251717 kubelet[2747]: E1101 10:09:07.251671 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8988d9647-8v76c_calico-system(925c93c8-2f46-4e42-abc0-187db9f33983): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 10:09:07.251869 containerd[1601]: 
time="2025-11-01T10:09:07.251832065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 10:09:07.584889 containerd[1601]: time="2025-11-01T10:09:07.584828598Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:09:07.586131 containerd[1601]: time="2025-11-01T10:09:07.586090587Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 10:09:07.586216 containerd[1601]: time="2025-11-01T10:09:07.586160167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 1 10:09:07.586365 kubelet[2747]: E1101 10:09:07.586319 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:09:07.586434 kubelet[2747]: E1101 10:09:07.586371 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:09:07.586626 kubelet[2747]: E1101 10:09:07.586562 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-wgv7h_calico-system(2d70db43-7a2b-4384-9018-e7385784e621): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 10:09:07.586724 kubelet[2747]: E1101 10:09:07.586629 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wgv7h" podUID="2d70db43-7a2b-4384-9018-e7385784e621" Nov 1 10:09:07.586914 containerd[1601]: time="2025-11-01T10:09:07.586870140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 10:09:07.878372 systemd[1]: Started sshd@12-10.0.0.91:22-10.0.0.1:34872.service - OpenSSH per-connection server daemon (10.0.0.1:34872). Nov 1 10:09:07.899808 containerd[1601]: time="2025-11-01T10:09:07.899758357Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:09:07.918424 sshd[4993]: Accepted publickey for core from 10.0.0.1 port 34872 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:09:07.919881 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:09:07.924026 systemd-logind[1585]: New session 13 of user core. Nov 1 10:09:07.934819 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 1 10:09:08.006311 sshd[4996]: Connection closed by 10.0.0.1 port 34872 Nov 1 10:09:08.006658 sshd-session[4993]: pam_unix(sshd:session): session closed for user core Nov 1 10:09:08.011410 systemd[1]: sshd@12-10.0.0.91:22-10.0.0.1:34872.service: Deactivated successfully. Nov 1 10:09:08.013376 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 10:09:08.014350 systemd-logind[1585]: Session 13 logged out. Waiting for processes to exit. Nov 1 10:09:08.015803 systemd-logind[1585]: Removed session 13. Nov 1 10:09:08.074997 containerd[1601]: time="2025-11-01T10:09:08.074908160Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 10:09:08.074997 containerd[1601]: time="2025-11-01T10:09:08.074958886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 1 10:09:08.075192 kubelet[2747]: E1101 10:09:08.075145 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:09:08.075505 kubelet[2747]: E1101 10:09:08.075190 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:09:08.075505 kubelet[2747]: E1101 10:09:08.075343 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod 
whisker-8988d9647-8v76c_calico-system(925c93c8-2f46-4e42-abc0-187db9f33983): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 10:09:08.075505 kubelet[2747]: E1101 10:09:08.075384 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8988d9647-8v76c" podUID="925c93c8-2f46-4e42-abc0-187db9f33983" Nov 1 10:09:08.075646 containerd[1601]: time="2025-11-01T10:09:08.075604477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 10:09:08.519937 containerd[1601]: time="2025-11-01T10:09:08.519874607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:09:08.521389 containerd[1601]: time="2025-11-01T10:09:08.521277340Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 10:09:08.521389 containerd[1601]: time="2025-11-01T10:09:08.521319088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 1 10:09:08.521587 kubelet[2747]: E1101 10:09:08.521519 2747 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:09:08.521676 kubelet[2747]: E1101 10:09:08.521584 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:09:08.521742 kubelet[2747]: E1101 10:09:08.521677 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-rw5k5_calico-system(157d34c2-941f-430c-9ff6-c3d7ebcb5c55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 10:09:08.521782 kubelet[2747]: E1101 10:09:08.521740 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rw5k5" podUID="157d34c2-941f-430c-9ff6-c3d7ebcb5c55" Nov 1 10:09:08.615150 containerd[1601]: time="2025-11-01T10:09:08.615109045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:09:09.007891 containerd[1601]: time="2025-11-01T10:09:09.007828672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:09:09.009225 containerd[1601]: time="2025-11-01T10:09:09.009190278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:09:09.009307 containerd[1601]: time="2025-11-01T10:09:09.009237456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:09:09.009439 kubelet[2747]: E1101 10:09:09.009400 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:09:09.009488 kubelet[2747]: E1101 10:09:09.009447 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:09:09.009777 kubelet[2747]: E1101 10:09:09.009716 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-988dffdbc-fdbhp_calico-apiserver(5d708427-4ed7-49ae-b59a-606507c6e8d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:09:09.009973 kubelet[2747]: E1101 10:09:09.009786 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-988dffdbc-fdbhp" podUID="5d708427-4ed7-49ae-b59a-606507c6e8d8" Nov 1 10:09:09.010029 containerd[1601]: time="2025-11-01T10:09:09.009819369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:09:09.489590 containerd[1601]: time="2025-11-01T10:09:09.489510335Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:09:09.491003 containerd[1601]: time="2025-11-01T10:09:09.490966047Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:09:09.491113 containerd[1601]: time="2025-11-01T10:09:09.491050155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:09:09.491227 kubelet[2747]: E1101 10:09:09.491181 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:09:09.491720 kubelet[2747]: E1101 10:09:09.491228 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:09:09.491720 kubelet[2747]: E1101 10:09:09.491341 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-988dffdbc-59586_calico-apiserver(2dc8436d-b4aa-4090-a31c-cb311723721e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:09:09.491720 kubelet[2747]: E1101 10:09:09.491371 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-988dffdbc-59586" podUID="2dc8436d-b4aa-4090-a31c-cb311723721e" Nov 1 10:09:09.614945 containerd[1601]: time="2025-11-01T10:09:09.614308442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 10:09:09.926404 containerd[1601]: time="2025-11-01T10:09:09.926326086Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:09:09.927661 containerd[1601]: time="2025-11-01T10:09:09.927612411Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 10:09:09.927786 containerd[1601]: time="2025-11-01T10:09:09.927744920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 1 10:09:09.927895 kubelet[2747]: E1101 10:09:09.927854 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 10:09:09.927939 kubelet[2747]: E1101 10:09:09.927903 2747 kuberuntime_image.go:43] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 10:09:09.928012 kubelet[2747]: E1101 10:09:09.927993 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7f8c559b4d-57dz9_calico-system(8cb58978-7fd1-4b27-8e4f-8d93d2102825): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 10:09:09.928117 kubelet[2747]: E1101 10:09:09.928023 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f8c559b4d-57dz9" podUID="8cb58978-7fd1-4b27-8e4f-8d93d2102825" Nov 1 10:09:13.021680 systemd[1]: Started sshd@13-10.0.0.91:22-10.0.0.1:54040.service - OpenSSH per-connection server daemon (10.0.0.1:54040). Nov 1 10:09:13.071527 sshd[5019]: Accepted publickey for core from 10.0.0.1 port 54040 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:09:13.072932 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:09:13.077333 systemd-logind[1585]: New session 14 of user core. Nov 1 10:09:13.086843 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 1 10:09:13.157079 sshd[5022]: Connection closed by 10.0.0.1 port 54040 Nov 1 10:09:13.157606 sshd-session[5019]: pam_unix(sshd:session): session closed for user core Nov 1 10:09:13.162943 systemd[1]: sshd@13-10.0.0.91:22-10.0.0.1:54040.service: Deactivated successfully. Nov 1 10:09:13.165274 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 10:09:13.166062 systemd-logind[1585]: Session 14 logged out. Waiting for processes to exit. Nov 1 10:09:13.167222 systemd-logind[1585]: Removed session 14. Nov 1 10:09:18.178394 systemd[1]: Started sshd@14-10.0.0.91:22-10.0.0.1:54042.service - OpenSSH per-connection server daemon (10.0.0.1:54042). Nov 1 10:09:18.233981 sshd[5039]: Accepted publickey for core from 10.0.0.1 port 54042 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:09:18.235295 sshd-session[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:09:18.239566 systemd-logind[1585]: New session 15 of user core. Nov 1 10:09:18.251818 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 10:09:18.332760 sshd[5042]: Connection closed by 10.0.0.1 port 54042 Nov 1 10:09:18.333051 sshd-session[5039]: pam_unix(sshd:session): session closed for user core Nov 1 10:09:18.337162 systemd[1]: sshd@14-10.0.0.91:22-10.0.0.1:54042.service: Deactivated successfully. Nov 1 10:09:18.339157 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 10:09:18.340032 systemd-logind[1585]: Session 15 logged out. Waiting for processes to exit. Nov 1 10:09:18.341085 systemd-logind[1585]: Removed session 15. 
Nov 1 10:09:19.614996 kubelet[2747]: E1101 10:09:19.614864 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wgv7h" podUID="2d70db43-7a2b-4384-9018-e7385784e621" Nov 1 10:09:21.614414 kubelet[2747]: E1101 10:09:21.614355 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-988dffdbc-fdbhp" podUID="5d708427-4ed7-49ae-b59a-606507c6e8d8" Nov 1 10:09:22.614069 kubelet[2747]: E1101 10:09:22.614006 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:09:22.614394 kubelet[2747]: E1101 10:09:22.614346 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rw5k5" podUID="157d34c2-941f-430c-9ff6-c3d7ebcb5c55" Nov 1 10:09:22.615656 kubelet[2747]: E1101 10:09:22.615571 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8988d9647-8v76c" podUID="925c93c8-2f46-4e42-abc0-187db9f33983" Nov 1 10:09:23.356544 systemd[1]: Started sshd@15-10.0.0.91:22-10.0.0.1:60242.service - OpenSSH per-connection server daemon (10.0.0.1:60242). Nov 1 10:09:23.409799 sshd[5057]: Accepted publickey for core from 10.0.0.1 port 60242 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE Nov 1 10:09:23.411132 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:09:23.415350 systemd-logind[1585]: New session 16 of user core. Nov 1 10:09:23.429819 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 1 10:09:23.510213 sshd[5060]: Connection closed by 10.0.0.1 port 60242
Nov 1 10:09:23.510656 sshd-session[5057]: pam_unix(sshd:session): session closed for user core
Nov 1 10:09:23.519488 systemd[1]: sshd@15-10.0.0.91:22-10.0.0.1:60242.service: Deactivated successfully.
Nov 1 10:09:23.521414 systemd[1]: session-16.scope: Deactivated successfully.
Nov 1 10:09:23.522190 systemd-logind[1585]: Session 16 logged out. Waiting for processes to exit.
Nov 1 10:09:23.525140 systemd[1]: Started sshd@16-10.0.0.91:22-10.0.0.1:60256.service - OpenSSH per-connection server daemon (10.0.0.1:60256).
Nov 1 10:09:23.525800 systemd-logind[1585]: Removed session 16.
Nov 1 10:09:23.602561 sshd[5073]: Accepted publickey for core from 10.0.0.1 port 60256 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:09:23.604278 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:09:23.609519 systemd-logind[1585]: New session 17 of user core.
Nov 1 10:09:23.617645 kubelet[2747]: E1101 10:09:23.617607 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-988dffdbc-59586" podUID="2dc8436d-b4aa-4090-a31c-cb311723721e"
Nov 1 10:09:23.618918 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 1 10:09:23.797223 sshd[5076]: Connection closed by 10.0.0.1 port 60256
Nov 1 10:09:23.797668 sshd-session[5073]: pam_unix(sshd:session): session closed for user core
Nov 1 10:09:23.807110 systemd[1]: sshd@16-10.0.0.91:22-10.0.0.1:60256.service: Deactivated successfully.
Nov 1 10:09:23.809428 systemd[1]: session-17.scope: Deactivated successfully.
Nov 1 10:09:23.810336 systemd-logind[1585]: Session 17 logged out. Waiting for processes to exit.
Nov 1 10:09:23.814150 systemd[1]: Started sshd@17-10.0.0.91:22-10.0.0.1:60264.service - OpenSSH per-connection server daemon (10.0.0.1:60264).
Nov 1 10:09:23.815453 systemd-logind[1585]: Removed session 17.
Nov 1 10:09:23.866748 sshd[5089]: Accepted publickey for core from 10.0.0.1 port 60264 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:09:23.868031 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:09:23.872565 systemd-logind[1585]: New session 18 of user core.
Nov 1 10:09:23.880847 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 1 10:09:24.299117 sshd[5092]: Connection closed by 10.0.0.1 port 60264
Nov 1 10:09:24.299556 sshd-session[5089]: pam_unix(sshd:session): session closed for user core
Nov 1 10:09:24.307925 systemd[1]: sshd@17-10.0.0.91:22-10.0.0.1:60264.service: Deactivated successfully.
Nov 1 10:09:24.311329 systemd[1]: session-18.scope: Deactivated successfully.
Nov 1 10:09:24.313627 systemd-logind[1585]: Session 18 logged out. Waiting for processes to exit.
Nov 1 10:09:24.318819 systemd[1]: Started sshd@18-10.0.0.91:22-10.0.0.1:60268.service - OpenSSH per-connection server daemon (10.0.0.1:60268).
Nov 1 10:09:24.319429 systemd-logind[1585]: Removed session 18.
Nov 1 10:09:24.376067 sshd[5109]: Accepted publickey for core from 10.0.0.1 port 60268 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:09:24.377931 sshd-session[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:09:24.382799 systemd-logind[1585]: New session 19 of user core.
Nov 1 10:09:24.392823 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 1 10:09:24.579074 sshd[5112]: Connection closed by 10.0.0.1 port 60268
Nov 1 10:09:24.579831 sshd-session[5109]: pam_unix(sshd:session): session closed for user core
Nov 1 10:09:24.590723 systemd[1]: sshd@18-10.0.0.91:22-10.0.0.1:60268.service: Deactivated successfully.
Nov 1 10:09:24.593031 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 10:09:24.594051 systemd-logind[1585]: Session 19 logged out. Waiting for processes to exit.
Nov 1 10:09:24.597622 systemd[1]: Started sshd@19-10.0.0.91:22-10.0.0.1:60284.service - OpenSSH per-connection server daemon (10.0.0.1:60284).
Nov 1 10:09:24.598328 systemd-logind[1585]: Removed session 19.
Nov 1 10:09:24.614329 kubelet[2747]: E1101 10:09:24.614282 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f8c559b4d-57dz9" podUID="8cb58978-7fd1-4b27-8e4f-8d93d2102825"
Nov 1 10:09:24.657377 sshd[5124]: Accepted publickey for core from 10.0.0.1 port 60284 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:09:24.659405 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:09:24.664610 systemd-logind[1585]: New session 20 of user core.
Nov 1 10:09:24.674825 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 1 10:09:24.762233 sshd[5127]: Connection closed by 10.0.0.1 port 60284
Nov 1 10:09:24.762653 sshd-session[5124]: pam_unix(sshd:session): session closed for user core
Nov 1 10:09:24.768365 systemd[1]: sshd@19-10.0.0.91:22-10.0.0.1:60284.service: Deactivated successfully.
Nov 1 10:09:24.770808 systemd[1]: session-20.scope: Deactivated successfully.
Nov 1 10:09:24.771837 systemd-logind[1585]: Session 20 logged out. Waiting for processes to exit.
Nov 1 10:09:24.773572 systemd-logind[1585]: Removed session 20.
Nov 1 10:09:28.614526 kubelet[2747]: E1101 10:09:28.614432 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:09:29.787597 systemd[1]: Started sshd@20-10.0.0.91:22-10.0.0.1:60290.service - OpenSSH per-connection server daemon (10.0.0.1:60290).
Nov 1 10:09:29.863914 sshd[5144]: Accepted publickey for core from 10.0.0.1 port 60290 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:09:29.865846 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:09:29.871455 systemd-logind[1585]: New session 21 of user core.
Nov 1 10:09:29.879932 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 1 10:09:29.970649 sshd[5147]: Connection closed by 10.0.0.1 port 60290
Nov 1 10:09:29.971277 sshd-session[5144]: pam_unix(sshd:session): session closed for user core
Nov 1 10:09:29.976901 systemd[1]: sshd@20-10.0.0.91:22-10.0.0.1:60290.service: Deactivated successfully.
Nov 1 10:09:29.980475 systemd[1]: session-21.scope: Deactivated successfully.
Nov 1 10:09:29.982812 systemd-logind[1585]: Session 21 logged out. Waiting for processes to exit.
Nov 1 10:09:29.985346 systemd-logind[1585]: Removed session 21.
Nov 1 10:09:31.616341 containerd[1601]: time="2025-11-01T10:09:31.616274614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 1 10:09:31.947508 containerd[1601]: time="2025-11-01T10:09:31.947285970Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 1 10:09:31.948757 containerd[1601]: time="2025-11-01T10:09:31.948714553Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 1 10:09:31.948868 containerd[1601]: time="2025-11-01T10:09:31.948796301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Nov 1 10:09:31.949066 kubelet[2747]: E1101 10:09:31.949008 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 1 10:09:31.949796 kubelet[2747]: E1101 10:09:31.949077 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 1 10:09:31.949796 kubelet[2747]: E1101 10:09:31.949182 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-wgv7h_calico-system(2d70db43-7a2b-4384-9018-e7385784e621): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 1 10:09:31.950379 containerd[1601]: time="2025-11-01T10:09:31.950283657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 1 10:09:32.291772 containerd[1601]: time="2025-11-01T10:09:32.291606569Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 1 10:09:32.292832 containerd[1601]: time="2025-11-01T10:09:32.292800778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 1 10:09:32.292932 containerd[1601]: time="2025-11-01T10:09:32.292843000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Nov 1 10:09:32.293178 kubelet[2747]: E1101 10:09:32.293036 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 1 10:09:32.293178 kubelet[2747]: E1101 10:09:32.293087 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 1 10:09:32.293269 kubelet[2747]: E1101 10:09:32.293185 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-wgv7h_calico-system(2d70db43-7a2b-4384-9018-e7385784e621): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 1 10:09:32.293269 kubelet[2747]: E1101 10:09:32.293221 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wgv7h" podUID="2d70db43-7a2b-4384-9018-e7385784e621"
Nov 1 10:09:32.614270 kubelet[2747]: E1101 10:09:32.614215 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 10:09:34.614835 containerd[1601]: time="2025-11-01T10:09:34.614781159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 10:09:34.969070 containerd[1601]: time="2025-11-01T10:09:34.968887431Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 1 10:09:34.977241 containerd[1601]: time="2025-11-01T10:09:34.977151894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 10:09:34.977498 containerd[1601]: time="2025-11-01T10:09:34.977186561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Nov 1 10:09:34.977570 kubelet[2747]: E1101 10:09:34.977479 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 10:09:34.977570 kubelet[2747]: E1101 10:09:34.977532 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 10:09:34.978045 kubelet[2747]: E1101 10:09:34.977640 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-988dffdbc-fdbhp_calico-apiserver(5d708427-4ed7-49ae-b59a-606507c6e8d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 10:09:34.978045 kubelet[2747]: E1101 10:09:34.977680 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-988dffdbc-fdbhp" podUID="5d708427-4ed7-49ae-b59a-606507c6e8d8"
Nov 1 10:09:34.988334 systemd[1]: Started sshd@21-10.0.0.91:22-10.0.0.1:43914.service - OpenSSH per-connection server daemon (10.0.0.1:43914).
Nov 1 10:09:35.058746 sshd[5196]: Accepted publickey for core from 10.0.0.1 port 43914 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:09:35.060863 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:09:35.065450 systemd-logind[1585]: New session 22 of user core.
Nov 1 10:09:35.072847 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 1 10:09:35.209069 sshd[5201]: Connection closed by 10.0.0.1 port 43914
Nov 1 10:09:35.209789 sshd-session[5196]: pam_unix(sshd:session): session closed for user core
Nov 1 10:09:35.214862 systemd[1]: sshd@21-10.0.0.91:22-10.0.0.1:43914.service: Deactivated successfully.
Nov 1 10:09:35.217046 systemd[1]: session-22.scope: Deactivated successfully.
Nov 1 10:09:35.217807 systemd-logind[1585]: Session 22 logged out. Waiting for processes to exit.
Nov 1 10:09:35.219179 systemd-logind[1585]: Removed session 22.
Nov 1 10:09:36.615485 containerd[1601]: time="2025-11-01T10:09:36.615405226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 1 10:09:36.971825 containerd[1601]: time="2025-11-01T10:09:36.971664727Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 1 10:09:36.973190 containerd[1601]: time="2025-11-01T10:09:36.973129641Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 1 10:09:36.973305 containerd[1601]: time="2025-11-01T10:09:36.973211929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Nov 1 10:09:36.973360 kubelet[2747]: E1101 10:09:36.973324 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 1 10:09:36.973812 kubelet[2747]: E1101 10:09:36.973367 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 1 10:09:36.973812 kubelet[2747]: E1101 10:09:36.973549 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-rw5k5_calico-system(157d34c2-941f-430c-9ff6-c3d7ebcb5c55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 1 10:09:36.973812 kubelet[2747]: E1101 10:09:36.973578 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rw5k5" podUID="157d34c2-941f-430c-9ff6-c3d7ebcb5c55"
Nov 1 10:09:36.973926 containerd[1601]: time="2025-11-01T10:09:36.973660230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 1 10:09:37.322800 containerd[1601]: time="2025-11-01T10:09:37.322745650Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 1 10:09:37.324020 containerd[1601]: time="2025-11-01T10:09:37.323987854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 1 10:09:37.324082 containerd[1601]: time="2025-11-01T10:09:37.324061756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Nov 1 10:09:37.324287 kubelet[2747]: E1101 10:09:37.324243 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 1 10:09:37.324357 kubelet[2747]: E1101 10:09:37.324293 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 1 10:09:37.324381 kubelet[2747]: E1101 10:09:37.324369 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8988d9647-8v76c_calico-system(925c93c8-2f46-4e42-abc0-187db9f33983): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 1 10:09:37.325451 containerd[1601]: time="2025-11-01T10:09:37.325411566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 1 10:09:37.632072 containerd[1601]: time="2025-11-01T10:09:37.631894947Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 1 10:09:37.633378 containerd[1601]: time="2025-11-01T10:09:37.633337575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 1 10:09:37.633510 containerd[1601]: time="2025-11-01T10:09:37.633362142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Nov 1 10:09:37.633768 kubelet[2747]: E1101 10:09:37.633537 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 1 10:09:37.633768 kubelet[2747]: E1101 10:09:37.633571 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 1 10:09:37.633873 containerd[1601]: time="2025-11-01T10:09:37.633855349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 10:09:37.634092 kubelet[2747]: E1101 10:09:37.634025 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8988d9647-8v76c_calico-system(925c93c8-2f46-4e42-abc0-187db9f33983): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 1 10:09:37.634247 kubelet[2747]: E1101 10:09:37.634212 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8988d9647-8v76c" podUID="925c93c8-2f46-4e42-abc0-187db9f33983"
Nov 1 10:09:37.966871 containerd[1601]: time="2025-11-01T10:09:37.966729402Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 1 10:09:37.968190 containerd[1601]: time="2025-11-01T10:09:37.968097468Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 10:09:37.968190 containerd[1601]: time="2025-11-01T10:09:37.968157974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Nov 1 10:09:37.968434 kubelet[2747]: E1101 10:09:37.968394 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 10:09:37.968510 kubelet[2747]: E1101 10:09:37.968443 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 10:09:37.968613 kubelet[2747]: E1101 10:09:37.968582 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-988dffdbc-59586_calico-apiserver(2dc8436d-b4aa-4090-a31c-cb311723721e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 10:09:37.968613 kubelet[2747]: E1101 10:09:37.968614 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-988dffdbc-59586" podUID="2dc8436d-b4aa-4090-a31c-cb311723721e"
Nov 1 10:09:39.616335 containerd[1601]: time="2025-11-01T10:09:39.615936879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 1 10:09:39.908656 containerd[1601]: time="2025-11-01T10:09:39.908474677Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 1 10:09:39.909961 containerd[1601]: time="2025-11-01T10:09:39.909923264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 1 10:09:39.910038 containerd[1601]: time="2025-11-01T10:09:39.909970374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Nov 1 10:09:39.910251 kubelet[2747]: E1101 10:09:39.910207 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 1 10:09:39.910824 kubelet[2747]: E1101 10:09:39.910259 2747 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 1 10:09:39.910824 kubelet[2747]: E1101 10:09:39.910338 2747 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7f8c559b4d-57dz9_calico-system(8cb58978-7fd1-4b27-8e4f-8d93d2102825): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 1 10:09:39.910824 kubelet[2747]: E1101 10:09:39.910370 2747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f8c559b4d-57dz9" podUID="8cb58978-7fd1-4b27-8e4f-8d93d2102825"
Nov 1 10:09:40.227810 systemd[1]: Started sshd@22-10.0.0.91:22-10.0.0.1:47972.service - OpenSSH per-connection server daemon (10.0.0.1:47972).
Nov 1 10:09:40.274490 sshd[5214]: Accepted publickey for core from 10.0.0.1 port 47972 ssh2: RSA SHA256:xyHlhP/ZWauU1qF16e0XO1liGu774KWQKuYesmG87DE
Nov 1 10:09:40.276534 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 10:09:40.283612 systemd-logind[1585]: New session 23 of user core.
Nov 1 10:09:40.289879 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 1 10:09:40.381029 sshd[5217]: Connection closed by 10.0.0.1 port 47972
Nov 1 10:09:40.381986 sshd-session[5214]: pam_unix(sshd:session): session closed for user core
Nov 1 10:09:40.386383 systemd[1]: sshd@22-10.0.0.91:22-10.0.0.1:47972.service: Deactivated successfully.
Nov 1 10:09:40.389330 systemd[1]: session-23.scope: Deactivated successfully.
Nov 1 10:09:40.393477 systemd-logind[1585]: Session 23 logged out. Waiting for processes to exit.
Nov 1 10:09:40.397000 systemd-logind[1585]: Removed session 23.
Nov 1 10:09:41.614721 kubelet[2747]: E1101 10:09:41.614643 2747 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"