Oct 29 05:32:49.342003 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Oct 29 03:32:17 -00 2025 Oct 29 05:32:49.342027 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d610570145801afd1fd509077ab1d27ba16da1750238d30fd1973784421d84ed Oct 29 05:32:49.342036 kernel: BIOS-provided physical RAM map: Oct 29 05:32:49.342046 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 29 05:32:49.342053 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 29 05:32:49.342060 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 29 05:32:49.342068 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Oct 29 05:32:49.342075 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 29 05:32:49.342084 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Oct 29 05:32:49.342091 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Oct 29 05:32:49.342098 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Oct 29 05:32:49.342107 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Oct 29 05:32:49.342114 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Oct 29 05:32:49.342121 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Oct 29 05:32:49.342130 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Oct 29 05:32:49.342137 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 29 05:32:49.342150 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Oct 29 05:32:49.342157 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Oct 29 05:32:49.342165 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Oct 29 05:32:49.342172 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Oct 29 05:32:49.342179 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Oct 29 05:32:49.342186 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 29 05:32:49.342193 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 29 05:32:49.342201 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 29 05:32:49.342208 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Oct 29 05:32:49.342215 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 29 05:32:49.342225 kernel: NX (Execute Disable) protection: active Oct 29 05:32:49.342232 kernel: APIC: Static calls initialized Oct 29 05:32:49.342239 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Oct 29 05:32:49.342247 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Oct 29 05:32:49.342254 kernel: extended physical RAM map: Oct 29 05:32:49.342262 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 29 05:32:49.342269 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 29 05:32:49.342276 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 29 05:32:49.342284 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Oct 29 05:32:49.342291 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 29 05:32:49.342298 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Oct 29 05:32:49.342308 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Oct 29 05:32:49.342315 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Oct 29 05:32:49.342323 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Oct 29 05:32:49.342334 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Oct 29 05:32:49.342344 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Oct 29 05:32:49.342351 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Oct 29 05:32:49.342359 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Oct 29 05:32:49.342367 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Oct 29 05:32:49.342374 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Oct 29 05:32:49.342382 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Oct 29 05:32:49.342390 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 29 05:32:49.342397 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Oct 29 05:32:49.342405 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Oct 29 05:32:49.342415 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Oct 29 05:32:49.342423 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Oct 29 05:32:49.342430 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Oct 29 05:32:49.342438 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 29 05:32:49.342446 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 29 05:32:49.342453 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 29 05:32:49.342461 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Oct 29 05:32:49.342469 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 29 05:32:49.342479 kernel: efi: EFI v2.7 by EDK II Oct 29 05:32:49.342487 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Oct 29 05:32:49.342494 kernel: random: crng init done Oct 29 05:32:49.342506 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Oct 29 05:32:49.342514 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Oct 29 05:32:49.342524 kernel: secureboot: Secure boot disabled Oct 29 05:32:49.342531 kernel: SMBIOS 2.8 present. 
Oct 29 05:32:49.342539 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Oct 29 05:32:49.342547 kernel: DMI: Memory slots populated: 1/1 Oct 29 05:32:49.342554 kernel: Hypervisor detected: KVM Oct 29 05:32:49.342562 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Oct 29 05:32:49.342570 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 29 05:32:49.342577 kernel: kvm-clock: using sched offset of 4918416080 cycles Oct 29 05:32:49.342585 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 29 05:32:49.342596 kernel: tsc: Detected 2794.748 MHz processor Oct 29 05:32:49.342604 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 29 05:32:49.342612 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 29 05:32:49.342620 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Oct 29 05:32:49.342628 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 29 05:32:49.342637 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 29 05:32:49.342645 kernel: Using GB pages for direct mapping Oct 29 05:32:49.342655 kernel: ACPI: Early table checksum verification disabled Oct 29 05:32:49.342663 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Oct 29 05:32:49.342671 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Oct 29 05:32:49.342679 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 05:32:49.342687 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 05:32:49.342695 kernel: ACPI: FACS 0x000000009CBDD000 000040 Oct 29 05:32:49.342703 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 05:32:49.342714 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 05:32:49.342722 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 05:32:49.342730 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 05:32:49.342738 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 29 05:32:49.342746 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Oct 29 05:32:49.342754 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Oct 29 05:32:49.342762 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Oct 29 05:32:49.342772 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Oct 29 05:32:49.342780 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Oct 29 05:32:49.342788 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Oct 29 05:32:49.342796 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Oct 29 05:32:49.342804 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Oct 29 05:32:49.342812 kernel: No NUMA configuration found Oct 29 05:32:49.342820 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Oct 29 05:32:49.342828 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Oct 29 05:32:49.342838 kernel: Zone ranges: Oct 29 05:32:49.342846 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 29 05:32:49.342854 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Oct 29 05:32:49.342862 kernel: Normal empty Oct 29 05:32:49.342870 kernel: Device empty Oct 29 
05:32:49.342878 kernel: Movable zone start for each node Oct 29 05:32:49.342886 kernel: Early memory node ranges Oct 29 05:32:49.342894 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 29 05:32:49.342906 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Oct 29 05:32:49.342914 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Oct 29 05:32:49.342922 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Oct 29 05:32:49.342930 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Oct 29 05:32:49.342953 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Oct 29 05:32:49.342962 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Oct 29 05:32:49.342970 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Oct 29 05:32:49.342989 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Oct 29 05:32:49.342997 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 29 05:32:49.343012 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 29 05:32:49.343023 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Oct 29 05:32:49.343032 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 29 05:32:49.343040 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Oct 29 05:32:49.343048 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Oct 29 05:32:49.343056 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Oct 29 05:32:49.343065 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Oct 29 05:32:49.343073 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Oct 29 05:32:49.343084 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 29 05:32:49.343092 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 29 05:32:49.343100 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 29 05:32:49.343111 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 29 05:32:49.343119 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 29 05:32:49.343127 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 29 05:32:49.343136 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 29 05:32:49.343144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 29 05:32:49.343152 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 29 05:32:49.343160 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 29 05:32:49.343171 kernel: TSC deadline timer available Oct 29 05:32:49.343179 kernel: CPU topo: Max. logical packages: 1 Oct 29 05:32:49.343188 kernel: CPU topo: Max. logical dies: 1 Oct 29 05:32:49.343196 kernel: CPU topo: Max. dies per package: 1 Oct 29 05:32:49.343204 kernel: CPU topo: Max. threads per core: 1 Oct 29 05:32:49.343212 kernel: CPU topo: Num. cores per package: 4 Oct 29 05:32:49.343220 kernel: CPU topo: Num. 
threads per package: 4 Oct 29 05:32:49.343229 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Oct 29 05:32:49.343239 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 29 05:32:49.343247 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 29 05:32:49.343255 kernel: kvm-guest: setup PV sched yield Oct 29 05:32:49.343264 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Oct 29 05:32:49.343272 kernel: Booting paravirtualized kernel on KVM Oct 29 05:32:49.343280 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 29 05:32:49.343289 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 29 05:32:49.343300 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Oct 29 05:32:49.343308 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Oct 29 05:32:49.343316 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 29 05:32:49.343324 kernel: kvm-guest: PV spinlocks enabled Oct 29 05:32:49.343333 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 29 05:32:49.343344 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d610570145801afd1fd509077ab1d27ba16da1750238d30fd1973784421d84ed Oct 29 05:32:49.343353 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 29 05:32:49.343364 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 29 05:32:49.343372 kernel: Fallback order for Node 0: 0 Oct 29 05:32:49.343380 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Oct 29 05:32:49.343389 kernel: Policy zone: DMA32 Oct 29 05:32:49.343397 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 29 05:32:49.343405 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 29 05:32:49.343413 kernel: ftrace: allocating 40092 entries in 157 pages Oct 29 05:32:49.343424 kernel: ftrace: allocated 157 pages with 5 groups Oct 29 05:32:49.343432 kernel: Dynamic Preempt: voluntary Oct 29 05:32:49.343441 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 29 05:32:49.343450 kernel: rcu: RCU event tracing is enabled. Oct 29 05:32:49.343458 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 29 05:32:49.343466 kernel: Trampoline variant of Tasks RCU enabled. Oct 29 05:32:49.343475 kernel: Rude variant of Tasks RCU enabled. Oct 29 05:32:49.343483 kernel: Tracing variant of Tasks RCU enabled. Oct 29 05:32:49.343493 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 29 05:32:49.343502 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 29 05:32:49.343512 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 29 05:32:49.343521 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 29 05:32:49.343529 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 29 05:32:49.343538 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 29 05:32:49.343546 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Oct 29 05:32:49.343557 kernel: Console: colour dummy device 80x25 Oct 29 05:32:49.343565 kernel: printk: legacy console [ttyS0] enabled Oct 29 05:32:49.343573 kernel: ACPI: Core revision 20240827 Oct 29 05:32:49.343581 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 29 05:32:49.343590 kernel: APIC: Switch to symmetric I/O mode setup Oct 29 05:32:49.343598 kernel: x2apic enabled Oct 29 05:32:49.343606 kernel: APIC: Switched APIC routing to: physical x2apic Oct 29 05:32:49.343617 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 29 05:32:49.343625 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 29 05:32:49.343633 kernel: kvm-guest: setup PV IPIs Oct 29 05:32:49.343642 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 29 05:32:49.343650 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Oct 29 05:32:49.343659 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Oct 29 05:32:49.343667 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 29 05:32:49.343677 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 29 05:32:49.343686 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 29 05:32:49.343694 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 29 05:32:49.343702 kernel: Spectre V2 : Mitigation: Retpolines Oct 29 05:32:49.343711 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 29 05:32:49.343719 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 29 05:32:49.343727 kernel: active return thunk: retbleed_return_thunk Oct 29 05:32:49.343738 kernel: RETBleed: Mitigation: untrained return thunk Oct 29 05:32:49.343748 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 29 05:32:49.343757 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 29 05:32:49.343765 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 29 05:32:49.343774 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 29 05:32:49.343783 kernel: active return thunk: srso_return_thunk Oct 29 05:32:49.343791 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 29 05:32:49.343802 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 29 05:32:49.343810 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 29 05:32:49.343818 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 29 05:32:49.343827 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 29 05:32:49.343835 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 29 05:32:49.343843 kernel: Freeing SMP alternatives memory: 32K Oct 29 05:32:49.343852 kernel: pid_max: default: 32768 minimum: 301 Oct 29 05:32:49.343862 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 29 05:32:49.343870 kernel: landlock: Up and running. Oct 29 05:32:49.343878 kernel: SELinux: Initializing. 
Oct 29 05:32:49.343887 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 29 05:32:49.343895 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 29 05:32:49.343904 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 29 05:32:49.343912 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 29 05:32:49.343923 kernel: ... version: 0 Oct 29 05:32:49.343932 kernel: ... bit width: 48 Oct 29 05:32:49.343952 kernel: ... generic registers: 6 Oct 29 05:32:49.343961 kernel: ... value mask: 0000ffffffffffff Oct 29 05:32:49.343969 kernel: ... max period: 00007fffffffffff Oct 29 05:32:49.343977 kernel: ... fixed-purpose events: 0 Oct 29 05:32:49.343992 kernel: ... event mask: 000000000000003f Oct 29 05:32:49.344003 kernel: signal: max sigframe size: 1776 Oct 29 05:32:49.344011 kernel: rcu: Hierarchical SRCU implementation. Oct 29 05:32:49.344020 kernel: rcu: Max phase no-delay instances is 400. Oct 29 05:32:49.344030 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 29 05:32:49.344039 kernel: smp: Bringing up secondary CPUs ... Oct 29 05:32:49.344047 kernel: smpboot: x86: Booting SMP configuration: Oct 29 05:32:49.344055 kernel: .... node #0, CPUs: #1 #2 #3 Oct 29 05:32:49.344066 kernel: smp: Brought up 1 node, 4 CPUs Oct 29 05:32:49.344074 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 29 05:32:49.344083 kernel: Memory: 2441092K/2565800K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15964K init, 2080K bss, 118768K reserved, 0K cma-reserved) Oct 29 05:32:49.344091 kernel: devtmpfs: initialized Oct 29 05:32:49.344100 kernel: x86/mm: Memory block size: 128MB Oct 29 05:32:49.344108 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Oct 29 05:32:49.344117 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Oct 29 05:32:49.344127 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Oct 29 05:32:49.344136 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Oct 29 05:32:49.344144 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Oct 29 05:32:49.344153 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Oct 29 05:32:49.344161 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 29 05:32:49.344169 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 29 05:32:49.344178 kernel: pinctrl core: initialized pinctrl subsystem Oct 29 05:32:49.344188 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 29 05:32:49.344197 kernel: audit: initializing netlink subsys (disabled) Oct 29 05:32:49.344205 kernel: audit: type=2000 audit(1761715967.639:1): state=initialized audit_enabled=0 res=1 Oct 29 05:32:49.344213 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 29 05:32:49.344221 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 29 05:32:49.344230 kernel: cpuidle: using governor menu Oct 29 05:32:49.344238 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 29 05:32:49.344249 kernel: dca service started, version 1.12.1 Oct 29 05:32:49.344257 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Oct 29 05:32:49.344266 kernel: PCI: 
Using configuration type 1 for base access Oct 29 05:32:49.344274 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 29 05:32:49.344282 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 29 05:32:49.344291 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 29 05:32:49.344299 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 29 05:32:49.344310 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 29 05:32:49.344318 kernel: ACPI: Added _OSI(Module Device) Oct 29 05:32:49.344326 kernel: ACPI: Added _OSI(Processor Device) Oct 29 05:32:49.344334 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 29 05:32:49.344343 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 29 05:32:49.344351 kernel: ACPI: Interpreter enabled Oct 29 05:32:49.344359 kernel: ACPI: PM: (supports S0 S3 S5) Oct 29 05:32:49.344367 kernel: ACPI: Using IOAPIC for interrupt routing Oct 29 05:32:49.344378 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 29 05:32:49.344386 kernel: PCI: Using E820 reservations for host bridge windows Oct 29 05:32:49.344395 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 29 05:32:49.344403 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 29 05:32:49.344650 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 29 05:32:49.344832 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 29 05:32:49.345036 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 29 05:32:49.345049 kernel: PCI host bridge to bus 0000:00 Oct 29 05:32:49.345339 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 29 05:32:49.345502 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 29 05:32:49.345660 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 29 05:32:49.345819 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Oct 29 05:32:49.346009 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Oct 29 05:32:49.346173 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Oct 29 05:32:49.346336 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 29 05:32:49.346530 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Oct 29 05:32:49.346727 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Oct 29 05:32:49.347007 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Oct 29 05:32:49.347377 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Oct 29 05:32:49.347724 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Oct 29 05:32:49.348342 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 29 05:32:49.348542 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 29 05:32:49.348720 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Oct 29 05:32:49.348901 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Oct 29 05:32:49.349104 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Oct 29 05:32:49.349290 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 29 05:32:49.349465 kernel: pci 0000:00:03.0: BAR 0 [io 
0x6000-0x607f] Oct 29 05:32:49.349639 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Oct 29 05:32:49.349817 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Oct 29 05:32:49.350746 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 29 05:32:49.351057 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Oct 29 05:32:49.351359 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Oct 29 05:32:49.351647 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Oct 29 05:32:49.351866 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Oct 29 05:32:49.352092 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Oct 29 05:32:49.352327 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 29 05:32:49.352530 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Oct 29 05:32:49.352708 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Oct 29 05:32:49.352881 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Oct 29 05:32:49.353091 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Oct 29 05:32:49.353273 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Oct 29 05:32:49.353285 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 29 05:32:49.353294 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 29 05:32:49.353303 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 29 05:32:49.353311 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 29 05:32:49.353320 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 29 05:32:49.353331 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 29 05:32:49.353340 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 29 05:32:49.353348 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 29 05:32:49.353357 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 29 05:32:49.353365 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 29 05:32:49.353373 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 29 05:32:49.353381 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 29 05:32:49.353392 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 29 05:32:49.353400 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 29 05:32:49.353409 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 29 05:32:49.353417 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 29 05:32:49.353426 kernel: iommu: Default domain type: Translated Oct 29 05:32:49.353434 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 29 05:32:49.353442 kernel: efivars: Registered efivars operations Oct 29 05:32:49.353453 kernel: PCI: Using ACPI for IRQ routing Oct 29 05:32:49.353461 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 29 05:32:49.353470 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Oct 29 05:32:49.353478 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Oct 29 05:32:49.353486 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Oct 29 05:32:49.353494 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Oct 29 05:32:49.353502 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Oct 29 05:32:49.353513 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Oct 29 05:32:49.353521 
kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Oct 29 05:32:49.353529 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Oct 29 05:32:49.353702 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 29 05:32:49.353875 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 29 05:32:49.354073 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 29 05:32:49.354085 kernel: vgaarb: loaded Oct 29 05:32:49.354098 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 29 05:32:49.354106 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 29 05:32:49.354114 kernel: clocksource: Switched to clocksource kvm-clock Oct 29 05:32:49.354123 kernel: VFS: Disk quotas dquot_6.6.0 Oct 29 05:32:49.354131 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 29 05:32:49.354141 kernel: pnp: PnP ACPI init Oct 29 05:32:49.354343 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Oct 29 05:32:49.354362 kernel: pnp: PnP ACPI: found 6 devices Oct 29 05:32:49.354371 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 29 05:32:49.354380 kernel: NET: Registered PF_INET protocol family Oct 29 05:32:49.354389 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 29 05:32:49.354398 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 29 05:32:49.354406 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 29 05:32:49.354418 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 29 05:32:49.354426 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 29 05:32:49.354435 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 29 05:32:49.354444 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 29 05:32:49.354453 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 29 05:32:49.354461 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 29 05:32:49.354470 kernel: NET: Registered PF_XDP protocol family Oct 29 05:32:49.354691 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Oct 29 05:32:49.354870 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Oct 29 05:32:49.355061 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 29 05:32:49.355223 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 29 05:32:49.355382 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 29 05:32:49.355541 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Oct 29 05:32:49.355705 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Oct 29 05:32:49.355864 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Oct 29 05:32:49.355875 kernel: PCI: CLS 0 bytes, default 64 Oct 29 05:32:49.355884 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Oct 29 05:32:49.355896 kernel: Initialise system trusted keyrings Oct 29 05:32:49.355908 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 29 05:32:49.355916 kernel: Key type asymmetric registered Oct 29 05:32:49.355925 kernel: Asymmetric key parser 'x509' registered Oct 29 05:32:49.355934 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 250) Oct 29 05:32:49.355958 kernel: io scheduler mq-deadline registered Oct 29 05:32:49.355966 kernel: io scheduler kyber registered Oct 29 05:32:49.355975 kernel: io scheduler bfq registered Oct 29 05:32:49.355997 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 29 05:32:49.356007 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 29 05:32:49.356016 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 29 05:32:49.356025 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 29 05:32:49.356033 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 29 05:32:49.356042 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 29 05:32:49.356051 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 29 05:32:49.356062 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 29 05:32:49.356071 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 29 05:32:49.356255 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 29 05:32:49.356268 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 29 05:32:49.356433 kernel: rtc_cmos 00:04: registered as rtc0 Oct 29 05:32:49.356599 kernel: rtc_cmos 00:04: setting system clock to 2025-10-29T05:32:47 UTC (1761715967) Oct 29 05:32:49.356771 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Oct 29 05:32:49.356782 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 29 05:32:49.356792 kernel: efifb: probing for efifb Oct 29 05:32:49.356801 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Oct 29 05:32:49.356810 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Oct 29 05:32:49.356819 kernel: efifb: scrolling: redraw Oct 29 05:32:49.356827 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 29 05:32:49.356840 kernel: Console: switching to colour frame buffer device 160x50 Oct 29 05:32:49.356848 kernel: fb0: EFI VGA frame buffer device Oct 29 05:32:49.356857 kernel: pstore: Using crash dump compression: deflate Oct 29 05:32:49.356866 kernel: pstore: Registered efi_pstore as persistent store backend Oct 29 05:32:49.356875 kernel: NET: Registered PF_INET6 protocol family Oct 29 05:32:49.356883 kernel: Segment Routing with IPv6 Oct 29 05:32:49.356892 kernel: In-situ OAM (IOAM) with IPv6 Oct 29 05:32:49.356901 kernel: NET: Registered PF_PACKET protocol family Oct 29 05:32:49.356912 kernel: Key type dns_resolver registered Oct 29 05:32:49.356920 kernel: IPI shorthand broadcast: enabled Oct 29 05:32:49.356929 kernel: sched_clock: Marking stable (1524003708, 283934161)->(1865885291, -57947422) Oct 29 05:32:49.356938 kernel: registered taskstats version 1 Oct 29 05:32:49.356990 kernel: Loading compiled-in X.509 certificates Oct 29 05:32:49.356999 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: f89c51f030e63154d181695681c9c05492d5512a' Oct 29 05:32:49.357008 kernel: Demotion targets for Node 0: null Oct 29 05:32:49.357029 kernel: Key type .fscrypt registered Oct 29 05:32:49.357037 kernel: Key type fscrypt-provisioning registered Oct 29 05:32:49.357046 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 29 05:32:49.357055 kernel: ima: Allocated hash algorithm: sha1 Oct 29 05:32:49.357063 kernel: ima: No architecture policies found Oct 29 05:32:49.357072 kernel: clk: Disabling unused clocks Oct 29 05:32:49.357081 kernel: Freeing unused kernel image (initmem) memory: 15964K Oct 29 05:32:49.357096 kernel: Write protecting the kernel read-only data: 45056k Oct 29 05:32:49.357105 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Oct 29 05:32:49.357114 kernel: Run /init as init process Oct 29 05:32:49.357122 kernel: with arguments: Oct 29 05:32:49.357131 kernel: /init Oct 29 05:32:49.357139 kernel: with environment: Oct 29 05:32:49.357148 kernel: HOME=/ Oct 29 05:32:49.357163 kernel: TERM=linux Oct 29 05:32:49.357172 kernel: SCSI subsystem initialized Oct 29 05:32:49.357180 kernel: libata version 3.00 loaded. Oct 29 05:32:49.357365 kernel: ahci 0000:00:1f.2: version 3.0 Oct 29 05:32:49.357378 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 29 05:32:49.357561 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 29 05:32:49.357736 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 29 05:32:49.357926 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 29 05:32:49.358153 kernel: scsi host0: ahci Oct 29 05:32:49.358342 kernel: scsi host1: ahci Oct 29 05:32:49.358528 kernel: scsi host2: ahci Oct 29 05:32:49.358713 kernel: scsi host3: ahci Oct 29 05:32:49.358918 kernel: scsi host4: ahci Oct 29 05:32:49.359175 kernel: scsi host5: ahci Oct 29 05:32:49.359191 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Oct 29 05:32:49.359200 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Oct 29 05:32:49.359209 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Oct 29 05:32:49.359218 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Oct 29 05:32:49.359239 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Oct 29 05:32:49.359248 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Oct 29 05:32:49.359257 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 29 05:32:49.359266 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 29 05:32:49.359275 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 29 05:32:49.359284 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 29 05:32:49.359293 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 29 05:32:49.359309 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 29 05:32:49.359318 kernel: ata3.00: LPM support broken, forcing max_power Oct 29 05:32:49.359327 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 29 05:32:49.359335 kernel: ata3.00: applying bridge limits Oct 29 05:32:49.359344 kernel: ata3.00: LPM support broken, forcing max_power Oct 29 05:32:49.359353 kernel: ata3.00: configured for UDMA/100 Oct 29 05:32:49.359601 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 29 05:32:49.359849 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 29 05:32:49.360921 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 29 05:32:49.360955 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 29 05:32:49.360965 kernel: GPT:16515071 != 27000831 Oct 29 05:32:49.360974 kernel: GPT:Alternate GPT header not at the end of the disk. 
Oct 29 05:32:49.360991 kernel: GPT:16515071 != 27000831 Oct 29 05:32:49.361013 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 29 05:32:49.361021 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 05:32:49.361031 kernel: Invalid ELF header magic: != \u007fELF Oct 29 05:32:49.361237 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 29 05:32:49.361250 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 29 05:32:49.361439 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 29 05:32:49.361451 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 29 05:32:49.361471 kernel: device-mapper: uevent: version 1.0.3 Oct 29 05:32:49.361480 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 29 05:32:49.361489 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 29 05:32:49.361498 kernel: Invalid ELF header magic: != \u007fELF Oct 29 05:32:49.361507 kernel: Invalid ELF header magic: != \u007fELF Oct 29 05:32:49.361515 kernel: raid6: avx2x4 gen() 27511 MB/s Oct 29 05:32:49.361524 kernel: raid6: avx2x2 gen() 28552 MB/s Oct 29 05:32:49.361539 kernel: raid6: avx2x1 gen() 24744 MB/s Oct 29 05:32:49.361548 kernel: raid6: using algorithm avx2x2 gen() 28552 MB/s Oct 29 05:32:49.361557 kernel: raid6: .... xor() 19504 MB/s, rmw enabled Oct 29 05:32:49.361566 kernel: raid6: using avx2x2 recovery algorithm Oct 29 05:32:49.361575 kernel: Invalid ELF header magic: != \u007fELF Oct 29 05:32:49.361583 kernel: Invalid ELF header magic: != \u007fELF Oct 29 05:32:49.361591 kernel: Invalid ELF header magic: != \u007fELF Oct 29 05:32:49.361600 kernel: xor: automatically using best checksumming function avx Oct 29 05:32:49.361616 kernel: Invalid ELF header magic: != \u007fELF Oct 29 05:32:49.361624 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 29 05:32:49.361633 kernel: BTRFS: device fsid fda6afd4-b762-45ee-91be-df972c4036d5 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (176) Oct 29 05:32:49.361643 kernel: BTRFS info (device dm-0): first mount of filesystem fda6afd4-b762-45ee-91be-df972c4036d5 Oct 29 05:32:49.361651 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 29 05:32:49.361661 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 29 05:32:49.361669 kernel: BTRFS info (device dm-0): enabling free space tree Oct 29 05:32:49.361685 kernel: Invalid ELF header magic: != \u007fELF Oct 29 05:32:49.361694 kernel: loop: module loaded Oct 29 05:32:49.361702 kernel: loop0: detected capacity change from 0 to 100136 Oct 29 05:32:49.361711 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 29 05:32:49.361721 systemd[1]: Successfully made /usr/ read-only. Oct 29 05:32:49.361733 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 29 05:32:49.361750 systemd[1]: Detected virtualization kvm. Oct 29 05:32:49.361759 systemd[1]: Detected architecture x86-64. Oct 29 05:32:49.361768 systemd[1]: Running in initrd. Oct 29 05:32:49.361777 systemd[1]: No hostname configured, using default hostname. Oct 29 05:32:49.361787 systemd[1]: Hostname set to . 
Oct 29 05:32:49.361796 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 29 05:32:49.361805 systemd[1]: Queued start job for default target initrd.target. Oct 29 05:32:49.361820 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 29 05:32:49.361829 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 05:32:49.361839 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 05:32:49.361849 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 29 05:32:49.361858 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 29 05:32:49.361868 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 29 05:32:49.361884 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 29 05:32:49.361894 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 05:32:49.361903 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 29 05:32:49.361913 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 29 05:32:49.361922 systemd[1]: Reached target paths.target - Path Units. Oct 29 05:32:49.361931 systemd[1]: Reached target slices.target - Slice Units. Oct 29 05:32:49.361961 systemd[1]: Reached target swap.target - Swaps. Oct 29 05:32:49.361970 systemd[1]: Reached target timers.target - Timer Units. Oct 29 05:32:49.361988 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 29 05:32:49.361998 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 29 05:32:49.362007 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 29 05:32:49.362017 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 29 05:32:49.362026 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 29 05:32:49.362043 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 29 05:32:49.362053 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 05:32:49.362062 systemd[1]: Reached target sockets.target - Socket Units. Oct 29 05:32:49.362072 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 29 05:32:49.362082 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 29 05:32:49.362092 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 29 05:32:49.362103 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 29 05:32:49.362121 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 29 05:32:49.362130 systemd[1]: Starting systemd-fsck-usr.service... Oct 29 05:32:49.362139 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 29 05:32:49.362148 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 29 05:32:49.362158 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 05:32:49.362174 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Oct 29 05:32:49.362183 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 05:32:49.362193 systemd[1]: Finished systemd-fsck-usr.service. Oct 29 05:32:49.362202 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 29 05:32:49.362238 systemd-journald[310]: Collecting audit messages is disabled. Oct 29 05:32:49.362267 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 29 05:32:49.362279 kernel: Bridge firewalling registered Oct 29 05:32:49.362288 systemd-journald[310]: Journal started Oct 29 05:32:49.362319 systemd-journald[310]: Runtime Journal (/run/log/journal/b0f0cfc55fcc4bc2aedc18c2f9c5e22f) is 6M, max 48.1M, 42.1M free. Oct 29 05:32:49.364731 systemd[1]: Started systemd-journald.service - Journal Service. Oct 29 05:32:49.362006 systemd-modules-load[313]: Inserted module 'br_netfilter' Oct 29 05:32:49.365765 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 29 05:32:49.369232 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 29 05:32:49.372080 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 29 05:32:49.386365 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 29 05:32:49.391753 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 05:32:49.397807 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 29 05:32:49.402789 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 29 05:32:49.403510 systemd-tmpfiles[329]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 29 05:32:49.409863 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 29 05:32:49.412211 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 29 05:32:49.416181 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 05:32:49.427864 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 05:32:49.433158 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 29 05:32:49.438685 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 29 05:32:49.470177 systemd-resolved[340]: Positive Trust Anchors: Oct 29 05:32:49.470191 systemd-resolved[340]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 05:32:49.470195 systemd-resolved[340]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 29 05:32:49.470226 systemd-resolved[340]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 29 05:32:49.490861 dracut-cmdline[355]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d610570145801afd1fd509077ab1d27ba16da1750238d30fd1973784421d84ed Oct 29 05:32:49.496707 systemd-resolved[340]: Defaulting to hostname 'linux'. Oct 29 05:32:49.498051 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 29 05:32:49.499728 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 29 05:32:49.591981 kernel: Loading iSCSI transport class v2.0-870. Oct 29 05:32:49.605971 kernel: iscsi: registered transport (tcp) Oct 29 05:32:49.629071 kernel: iscsi: registered transport (qla4xxx) Oct 29 05:32:49.629115 kernel: QLogic iSCSI HBA Driver Oct 29 05:32:49.667811 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 29 05:32:49.692419 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 05:32:49.693929 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 29 05:32:49.763647 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 29 05:32:49.766019 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 29 05:32:49.768090 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 29 05:32:49.807659 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 29 05:32:49.811426 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 05:32:49.846041 systemd-udevd[594]: Using default interface naming scheme 'v257'. Oct 29 05:32:49.861586 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 05:32:49.864122 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 29 05:32:49.895161 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 29 05:32:49.897744 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 29 05:32:49.902715 dracut-pre-trigger[659]: rd.md=0: removing MD RAID activation Oct 29 05:32:49.939126 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 29 05:32:49.942446 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 29 05:32:49.954741 systemd-networkd[703]: lo: Link UP Oct 29 05:32:49.954749 systemd-networkd[703]: lo: Gained carrier Oct 29 05:32:49.956206 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Oct 29 05:32:49.960823 systemd[1]: Reached target network.target - Network. Oct 29 05:32:50.039659 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 05:32:50.042776 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 29 05:32:50.096569 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 29 05:32:50.115506 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 29 05:32:50.136052 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 29 05:32:50.145007 kernel: cryptd: max_cpu_qlen set to 1000 Oct 29 05:32:50.151817 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 29 05:32:50.156724 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 29 05:32:50.167070 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 05:32:50.167250 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 05:32:50.169179 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 05:32:50.172271 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 05:32:50.183981 kernel: AES CTR mode by8 optimization enabled Oct 29 05:32:50.187975 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 29 05:32:50.190559 disk-uuid[771]: Primary Header is updated. Oct 29 05:32:50.190559 disk-uuid[771]: Secondary Entries is updated. Oct 29 05:32:50.190559 disk-uuid[771]: Secondary Header is updated. Oct 29 05:32:50.226090 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 05:32:50.230327 systemd-networkd[703]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 05:32:50.231712 systemd-networkd[703]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 05:32:50.232213 systemd-networkd[703]: eth0: Link UP Oct 29 05:32:50.233087 systemd-networkd[703]: eth0: Gained carrier Oct 29 05:32:50.233097 systemd-networkd[703]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 05:32:50.254032 systemd-networkd[703]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 29 05:32:50.298883 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 29 05:32:50.300194 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 29 05:32:50.304316 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 05:32:50.306235 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 29 05:32:50.308160 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 29 05:32:50.339983 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 29 05:32:50.431535 systemd-resolved[340]: Detected conflict on linux IN A 10.0.0.106 Oct 29 05:32:50.431561 systemd-resolved[340]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Oct 29 05:32:51.243015 disk-uuid[778]: Warning: The kernel is still using the old partition table. 
Oct 29 05:32:51.243015 disk-uuid[778]: The new table will be used at the next reboot or after you Oct 29 05:32:51.243015 disk-uuid[778]: run partprobe(8) or kpartx(8) Oct 29 05:32:51.243015 disk-uuid[778]: The operation has completed successfully. Oct 29 05:32:51.262158 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 29 05:32:51.262412 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 29 05:32:51.265473 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 29 05:32:51.313524 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865) Oct 29 05:32:51.313589 kernel: BTRFS info (device vda6): first mount of filesystem 44c01edc-ed3c-4d38-bf8b-b25afdfe8b0d Oct 29 05:32:51.313606 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 05:32:51.318726 kernel: BTRFS info (device vda6): turning on async discard Oct 29 05:32:51.318749 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 05:32:51.326974 kernel: BTRFS info (device vda6): last unmount of filesystem 44c01edc-ed3c-4d38-bf8b-b25afdfe8b0d Oct 29 05:32:51.328459 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 29 05:32:51.330714 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 29 05:32:51.727492 ignition[884]: Ignition 2.22.0 Oct 29 05:32:51.727509 ignition[884]: Stage: fetch-offline Oct 29 05:32:51.727577 ignition[884]: no configs at "/usr/lib/ignition/base.d" Oct 29 05:32:51.727590 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 05:32:51.727725 ignition[884]: parsed url from cmdline: "" Oct 29 05:32:51.727730 ignition[884]: no config URL provided Oct 29 05:32:51.727737 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Oct 29 05:32:51.727749 ignition[884]: no config at "/usr/lib/ignition/user.ign" Oct 29 05:32:51.727813 ignition[884]: op(1): [started] loading QEMU firmware config module Oct 29 05:32:51.727818 ignition[884]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 29 05:32:51.747588 ignition[884]: op(1): [finished] loading QEMU firmware config module Oct 29 05:32:51.827919 ignition[884]: parsing config with SHA512: bbded9f1d39423f6a16f8762c605388097763d8c23ecb4ac323f5f384b28c1267c595b8f60978912cb80e8ea1ca92f68ea68b78274e35046a3d6ca46625ebc59 Oct 29 05:32:51.835235 unknown[884]: fetched base config from "system" Oct 29 05:32:51.835250 unknown[884]: fetched user config from "qemu" Oct 29 05:32:51.835609 ignition[884]: fetch-offline: fetch-offline passed Oct 29 05:32:51.835678 ignition[884]: Ignition finished successfully Oct 29 05:32:51.839451 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 29 05:32:51.842957 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 29 05:32:51.844113 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 29 05:32:51.897969 ignition[896]: Ignition 2.22.0 Oct 29 05:32:51.897984 ignition[896]: Stage: kargs Oct 29 05:32:51.898186 ignition[896]: no configs at "/usr/lib/ignition/base.d" Oct 29 05:32:51.898199 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 05:32:51.899391 ignition[896]: kargs: kargs passed Oct 29 05:32:51.904609 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Oct 29 05:32:51.899443 ignition[896]: Ignition finished successfully Oct 29 05:32:51.906471 systemd-networkd[703]: eth0: Gained IPv6LL Oct 29 05:32:51.907475 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 29 05:32:51.947525 ignition[904]: Ignition 2.22.0 Oct 29 05:32:51.947539 ignition[904]: Stage: disks Oct 29 05:32:51.947690 ignition[904]: no configs at "/usr/lib/ignition/base.d" Oct 29 05:32:51.947701 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 05:32:51.951719 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 29 05:32:51.948429 ignition[904]: disks: disks passed Oct 29 05:32:51.954914 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 29 05:32:51.948484 ignition[904]: Ignition finished successfully Oct 29 05:32:51.958162 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 29 05:32:51.958829 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 29 05:32:51.959369 systemd[1]: Reached target sysinit.target - System Initialization. Oct 29 05:32:51.959636 systemd[1]: Reached target basic.target - Basic System. Oct 29 05:32:51.961130 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 29 05:32:52.014767 systemd-fsck[914]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 29 05:32:52.023153 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 29 05:32:52.026227 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 29 05:32:52.154975 kernel: EXT4-fs (vda9): mounted filesystem 7735217e-e323-4a7c-9200-3c231a187230 r/w with ordered data mode. Quota mode: none. Oct 29 05:32:52.155742 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 29 05:32:52.157061 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 29 05:32:52.160229 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 29 05:32:52.164076 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 29 05:32:52.167509 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 29 05:32:52.167565 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 29 05:32:52.167597 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 29 05:32:52.181449 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 29 05:32:52.186170 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (923) Oct 29 05:32:52.187185 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 29 05:32:52.195387 kernel: BTRFS info (device vda6): first mount of filesystem 44c01edc-ed3c-4d38-bf8b-b25afdfe8b0d Oct 29 05:32:52.195439 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 05:32:52.195452 kernel: BTRFS info (device vda6): turning on async discard Oct 29 05:32:52.195463 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 05:32:52.197043 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 29 05:32:52.242520 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory Oct 29 05:32:52.247363 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory Oct 29 05:32:52.253684 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory Oct 29 05:32:52.258875 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory Oct 29 05:32:52.359164 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 29 05:32:52.362694 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 29 05:32:52.365145 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 29 05:32:52.385026 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 29 05:32:52.387624 kernel: BTRFS info (device vda6): last unmount of filesystem 44c01edc-ed3c-4d38-bf8b-b25afdfe8b0d Oct 29 05:32:52.403196 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 29 05:32:52.443782 ignition[1037]: INFO : Ignition 2.22.0 Oct 29 05:32:52.443782 ignition[1037]: INFO : Stage: mount Oct 29 05:32:52.446460 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 05:32:52.446460 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 05:32:52.450357 ignition[1037]: INFO : mount: mount passed Oct 29 05:32:52.451603 ignition[1037]: INFO : Ignition finished successfully Oct 29 05:32:52.455783 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 29 05:32:52.459280 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 29 05:32:53.157609 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 29 05:32:53.179063 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1050) Oct 29 05:32:53.179119 kernel: BTRFS info (device vda6): first mount of filesystem 44c01edc-ed3c-4d38-bf8b-b25afdfe8b0d Oct 29 05:32:53.179145 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 05:32:53.184713 kernel: BTRFS info (device vda6): turning on async discard Oct 29 05:32:53.184742 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 05:32:53.186303 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 29 05:32:53.378768 ignition[1067]: INFO : Ignition 2.22.0 Oct 29 05:32:53.378768 ignition[1067]: INFO : Stage: files Oct 29 05:32:53.381326 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 05:32:53.381326 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 05:32:53.381326 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping Oct 29 05:32:53.387619 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 29 05:32:53.389786 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 29 05:32:53.395071 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 29 05:32:53.397450 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 29 05:32:53.399565 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 29 05:32:53.398000 unknown[1067]: wrote ssh authorized keys file for user: core Oct 29 05:32:53.403795 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 29 05:32:53.407050 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 29 05:32:53.447771 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 29 05:32:53.507419 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 29 05:32:53.510793 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 29 05:32:53.510793 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 29 05:32:53.510793 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 29 05:32:53.510793 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 29 05:32:53.510793 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 05:32:53.525434 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 05:32:53.525434 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 05:32:53.525434 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 05:32:53.579453 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 05:32:53.582652 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 05:32:53.582652 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 29 05:32:53.598780 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 29 05:32:53.598780 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 29 05:32:53.606456 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Oct 29 05:32:54.056243 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 29 05:32:54.794482 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 29 05:32:54.794482 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 29 05:32:54.800841 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 05:32:54.896644 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 05:32:54.896644 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 29 05:32:54.896644 ignition[1067]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 29 05:32:54.904219 ignition[1067]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 29 05:32:54.904219 ignition[1067]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 29 05:32:54.904219 ignition[1067]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 29 05:32:54.904219 ignition[1067]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 29 05:32:55.106297 ignition[1067]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 05:32:55.112230 ignition[1067]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 05:32:55.114746 ignition[1067]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 29 05:32:55.114746 ignition[1067]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 29 05:32:55.114746 ignition[1067]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 29 05:32:55.114746 ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 29 05:32:55.114746 ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 29 05:32:55.114746 ignition[1067]: INFO : files: files passed Oct 29 05:32:55.114746 ignition[1067]: INFO : Ignition finished successfully Oct 29 05:32:55.116570 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 29 05:32:55.120555 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 29 05:32:55.123892 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Oct 29 05:32:55.144251 initrd-setup-root-after-ignition[1098]: grep: /sysroot/oem/oem-release: No such file or directory Oct 29 05:32:55.138059 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 29 05:32:55.148006 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 29 05:32:55.148006 initrd-setup-root-after-ignition[1101]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 29 05:32:55.138193 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 29 05:32:55.160611 initrd-setup-root-after-ignition[1105]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 29 05:32:55.147936 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 29 05:32:55.150491 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 29 05:32:55.155096 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 29 05:32:55.219739 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 29 05:32:55.219897 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 29 05:32:55.221036 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 29 05:32:55.221633 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 29 05:32:55.229961 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 29 05:32:55.234283 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 29 05:32:55.270393 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 29 05:32:55.273080 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 29 05:32:55.300137 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 29 05:32:55.300446 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 29 05:32:55.301595 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 05:32:55.302476 systemd[1]: Stopped target timers.target - Timer Units. Oct 29 05:32:55.310442 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 29 05:32:55.310646 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 29 05:32:55.316010 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 29 05:32:55.316889 systemd[1]: Stopped target basic.target - Basic System. Oct 29 05:32:55.317709 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 29 05:32:55.318571 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 29 05:32:55.327930 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 29 05:32:55.328510 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 29 05:32:55.335661 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 29 05:32:55.339094 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 29 05:32:55.342505 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 29 05:32:55.343051 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 29 05:32:55.351847 systemd[1]: Stopped target swap.target - Swaps. 
Oct 29 05:32:55.352737 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 29 05:32:55.352912 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 29 05:32:55.357807 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 29 05:32:55.359341 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 05:32:55.363376 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 29 05:32:55.366485 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 05:32:55.370013 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 29 05:32:55.370159 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 29 05:32:55.371237 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 29 05:32:55.371352 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 29 05:32:55.371870 systemd[1]: Stopped target paths.target - Path Units. Oct 29 05:32:55.379489 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 29 05:32:55.385020 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 05:32:55.385748 systemd[1]: Stopped target slices.target - Slice Units. Oct 29 05:32:55.391601 systemd[1]: Stopped target sockets.target - Socket Units. Oct 29 05:32:55.394516 systemd[1]: iscsid.socket: Deactivated successfully. Oct 29 05:32:55.394621 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 29 05:32:55.398552 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 29 05:32:55.398639 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 29 05:32:55.401111 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 29 05:32:55.401247 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 29 05:32:55.404103 systemd[1]: ignition-files.service: Deactivated successfully. Oct 29 05:32:55.404226 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 29 05:32:55.415441 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 29 05:32:55.416433 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 29 05:32:55.416610 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 05:32:55.433701 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 29 05:32:55.434426 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 29 05:32:55.434570 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 05:32:55.437595 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 29 05:32:55.437747 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 05:32:55.440775 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 29 05:32:55.441059 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 29 05:32:55.452149 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 29 05:32:55.452269 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Oct 29 05:32:55.558972 ignition[1126]: INFO : Ignition 2.22.0 Oct 29 05:32:55.558972 ignition[1126]: INFO : Stage: umount Oct 29 05:32:55.561923 ignition[1126]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 05:32:55.561923 ignition[1126]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 05:32:55.568000 ignition[1126]: INFO : umount: umount passed Oct 29 05:32:55.569294 ignition[1126]: INFO : Ignition finished successfully Oct 29 05:32:55.573778 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 29 05:32:55.573960 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 29 05:32:55.578853 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 29 05:32:55.579421 systemd[1]: Stopped target network.target - Network. Oct 29 05:32:55.580381 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 29 05:32:55.580443 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 29 05:32:55.580966 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 29 05:32:55.581020 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 29 05:32:55.585593 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 29 05:32:55.585655 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 29 05:32:55.588566 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 29 05:32:55.588619 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 29 05:32:55.591740 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 29 05:32:55.592490 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 29 05:32:55.609404 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 29 05:32:55.611191 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 29 05:32:55.620037 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 29 05:32:55.620213 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 29 05:32:55.627356 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 29 05:32:55.628353 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 29 05:32:55.628399 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 29 05:32:55.632462 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 29 05:32:55.639771 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 29 05:32:55.641975 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 29 05:32:55.646563 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 29 05:32:55.646661 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 29 05:32:55.649817 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 29 05:32:55.649887 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 29 05:32:55.653527 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 05:32:55.659917 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 29 05:32:55.660079 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 29 05:32:55.662516 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 29 05:32:55.662579 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 29 05:32:55.683095 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Oct 29 05:32:55.692174 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 05:32:55.693605 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 29 05:32:55.693693 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 29 05:32:55.697394 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 29 05:32:55.697466 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 05:32:55.700534 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 29 05:32:55.700632 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 29 05:32:55.706838 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 29 05:32:55.706906 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 29 05:32:55.708742 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 29 05:32:55.708856 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 29 05:32:55.717786 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 29 05:32:55.720853 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 29 05:32:55.720965 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 05:32:55.724807 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 29 05:32:55.724906 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 05:32:55.725430 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 05:32:55.725528 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 05:32:55.736158 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 29 05:32:55.745156 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 29 05:32:55.754206 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 29 05:32:55.754431 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 29 05:32:55.756215 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 29 05:32:55.764029 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 29 05:32:55.790141 systemd[1]: Switching root. Oct 29 05:32:55.831623 systemd-journald[310]: Journal stopped Oct 29 05:32:57.467205 systemd-journald[310]: Received SIGTERM from PID 1 (systemd). Oct 29 05:32:57.467284 kernel: SELinux: policy capability network_peer_controls=1 Oct 29 05:32:57.467308 kernel: SELinux: policy capability open_perms=1 Oct 29 05:32:57.467325 kernel: SELinux: policy capability extended_socket_class=1 Oct 29 05:32:57.467343 kernel: SELinux: policy capability always_check_network=0 Oct 29 05:32:57.467356 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 29 05:32:57.467368 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 29 05:32:57.467380 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 29 05:32:57.467392 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 29 05:32:57.467411 kernel: SELinux: policy capability userspace_initial_context=0 Oct 29 05:32:57.467428 kernel: audit: type=1403 audit(1761715976.365:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 29 05:32:57.467442 systemd[1]: Successfully loaded SELinux policy in 113.146ms. Oct 29 05:32:57.467475 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.067ms. 
Oct 29 05:32:57.467489 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 29 05:32:57.467503 systemd[1]: Detected virtualization kvm. Oct 29 05:32:57.467516 systemd[1]: Detected architecture x86-64. Oct 29 05:32:57.467535 systemd[1]: Detected first boot. Oct 29 05:32:57.467549 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 29 05:32:57.467561 zram_generator::config[1171]: No configuration found. Oct 29 05:32:57.467582 kernel: Guest personality initialized and is inactive Oct 29 05:32:57.467595 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 29 05:32:57.467607 kernel: Initialized host personality Oct 29 05:32:57.467619 kernel: NET: Registered PF_VSOCK protocol family Oct 29 05:32:57.467638 systemd[1]: Populated /etc with preset unit settings. Oct 29 05:32:57.467652 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 29 05:32:57.467664 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 29 05:32:57.467678 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 29 05:32:57.467691 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 29 05:32:57.467707 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 29 05:32:57.467719 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 29 05:32:57.467739 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 29 05:32:57.467753 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 29 05:32:57.467766 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 29 05:32:57.467782 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 29 05:32:57.467803 systemd[1]: Created slice user.slice - User and Session Slice. Oct 29 05:32:57.467817 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 05:32:57.467830 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 05:32:57.467851 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 29 05:32:57.467864 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 29 05:32:57.467877 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 29 05:32:57.467890 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 29 05:32:57.467906 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 29 05:32:57.467920 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 05:32:57.467953 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 29 05:32:57.467967 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 29 05:32:57.467980 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 29 05:32:57.467993 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Oct 29 05:32:57.468006 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 29 05:32:57.468018 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 05:32:57.468031 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 29 05:32:57.468056 systemd[1]: Reached target slices.target - Slice Units. Oct 29 05:32:57.468069 systemd[1]: Reached target swap.target - Swaps. Oct 29 05:32:57.468084 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 29 05:32:57.468097 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 29 05:32:57.468111 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 29 05:32:57.468124 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 29 05:32:57.468137 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 29 05:32:57.468149 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 05:32:57.468170 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 29 05:32:57.468182 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 29 05:32:57.468195 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 29 05:32:57.468208 systemd[1]: Mounting media.mount - External Media Directory... Oct 29 05:32:57.468221 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 05:32:57.468236 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 29 05:32:57.468257 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 29 05:32:57.468273 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 29 05:32:57.468286 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 29 05:32:57.468302 systemd[1]: Reached target machines.target - Containers. Oct 29 05:32:57.468315 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 29 05:32:57.468328 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 05:32:57.468342 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 29 05:32:57.468362 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 29 05:32:57.468375 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 05:32:57.468388 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 29 05:32:57.468401 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 05:32:57.468414 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 29 05:32:57.468431 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 05:32:57.468445 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 29 05:32:57.468465 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 29 05:32:57.468481 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. 
Oct 29 05:32:57.468493 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 29 05:32:57.468506 systemd[1]: Stopped systemd-fsck-usr.service. Oct 29 05:32:57.468520 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 05:32:57.468532 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 29 05:32:57.468548 kernel: fuse: init (API version 7.41) Oct 29 05:32:57.468567 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 29 05:32:57.468580 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 29 05:32:57.468593 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 29 05:32:57.468606 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 29 05:32:57.468619 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 29 05:32:57.468639 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 05:32:57.468653 kernel: ACPI: bus type drm_connector registered Oct 29 05:32:57.468668 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 29 05:32:57.468681 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 29 05:32:57.468694 systemd[1]: Mounted media.mount - External Media Directory. Oct 29 05:32:57.468714 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 29 05:32:57.468729 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 29 05:32:57.468742 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 29 05:32:57.468755 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 05:32:57.468794 systemd-journald[1239]: Collecting audit messages is disabled. Oct 29 05:32:57.468818 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 29 05:32:57.468840 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 29 05:32:57.468853 systemd-journald[1239]: Journal started Oct 29 05:32:57.468875 systemd-journald[1239]: Runtime Journal (/run/log/journal/b0f0cfc55fcc4bc2aedc18c2f9c5e22f) is 6M, max 48.1M, 42.1M free. Oct 29 05:32:57.118175 systemd[1]: Queued start job for default target multi-user.target. Oct 29 05:32:57.138243 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 29 05:32:57.138779 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 29 05:32:57.474015 systemd[1]: Started systemd-journald.service - Journal Service. Oct 29 05:32:57.476767 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 05:32:57.477042 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 05:32:57.479214 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 05:32:57.479465 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 29 05:32:57.481549 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 05:32:57.481769 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 05:32:57.484071 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Oct 29 05:32:57.484389 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 29 05:32:57.486409 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 05:32:57.486635 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 05:32:57.488714 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 29 05:32:57.490992 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 05:32:57.494199 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 29 05:32:57.496816 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 29 05:32:57.519997 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 29 05:32:57.522293 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 29 05:32:57.524407 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 29 05:32:57.524441 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 29 05:32:57.527149 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 29 05:32:57.529388 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 05:32:57.530838 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 29 05:32:57.533664 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 29 05:32:57.535691 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 05:32:57.537838 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 29 05:32:57.540609 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 29 05:32:57.546094 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 29 05:32:57.548755 systemd-journald[1239]: Time spent on flushing to /var/log/journal/b0f0cfc55fcc4bc2aedc18c2f9c5e22f is 22.961ms for 1054 entries. Oct 29 05:32:57.548755 systemd-journald[1239]: System Journal (/var/log/journal/b0f0cfc55fcc4bc2aedc18c2f9c5e22f) is 8M, max 163.5M, 155.5M free. Oct 29 05:32:58.042933 systemd-journald[1239]: Received client request to flush runtime journal. Oct 29 05:32:58.043033 kernel: loop1: detected capacity change from 0 to 111544 Oct 29 05:32:58.043067 kernel: loop2: detected capacity change from 0 to 128912 Oct 29 05:32:58.043091 kernel: loop3: detected capacity change from 0 to 219144 Oct 29 05:32:58.043118 kernel: loop4: detected capacity change from 0 to 111544 Oct 29 05:32:58.043138 kernel: loop5: detected capacity change from 0 to 128912 Oct 29 05:32:58.043201 kernel: loop6: detected capacity change from 0 to 219144 Oct 29 05:32:57.549444 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 29 05:32:57.573752 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 29 05:32:57.576642 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 05:32:57.582540 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Oct 29 05:32:57.606455 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 29 05:32:57.826955 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 29 05:32:57.829971 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 29 05:32:57.834225 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 29 05:32:57.840047 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 29 05:32:57.849952 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 29 05:32:57.853169 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 29 05:32:58.027393 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 29 05:32:58.030814 systemd-tmpfiles[1302]: ACLs are not supported, ignoring. Oct 29 05:32:58.030830 systemd-tmpfiles[1302]: ACLs are not supported, ignoring. Oct 29 05:32:58.043294 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 05:32:58.046529 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 29 05:32:58.057561 (sd-merge)[1297]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 29 05:32:58.061557 (sd-merge)[1297]: Merged extensions into '/usr'. Oct 29 05:32:58.065894 systemd[1]: Reload requested from client PID 1287 ('systemd-sysext') (unit systemd-sysext.service)... Oct 29 05:32:58.065911 systemd[1]: Reloading... Oct 29 05:32:58.118046 zram_generator::config[1341]: No configuration found. Oct 29 05:32:58.175516 systemd-resolved[1300]: Positive Trust Anchors: Oct 29 05:32:58.175533 systemd-resolved[1300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 05:32:58.175538 systemd-resolved[1300]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 29 05:32:58.175569 systemd-resolved[1300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 29 05:32:58.185936 systemd-resolved[1300]: Defaulting to hostname 'linux'. Oct 29 05:32:58.315379 systemd[1]: Reloading finished in 248 ms. Oct 29 05:32:58.345513 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 29 05:32:58.368314 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 29 05:32:58.370393 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 29 05:32:58.372578 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 29 05:32:58.375055 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 29 05:32:58.381133 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 29 05:32:58.385042 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 29 05:32:58.389053 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Oct 29 05:32:58.392313 systemd[1]: Starting ensure-sysext.service... Oct 29 05:32:58.399354 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 29 05:32:58.403891 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 29 05:32:58.427020 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 29 05:32:58.551744 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 29 05:32:58.551795 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 29 05:32:58.552189 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 29 05:32:58.552482 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 29 05:32:58.553494 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 29 05:32:58.553790 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Oct 29 05:32:58.553867 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Oct 29 05:32:58.559751 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Oct 29 05:32:58.559770 systemd-tmpfiles[1382]: Skipping /boot Oct 29 05:32:58.559908 systemd[1]: Reload requested from client PID 1381 ('systemctl') (unit ensure-sysext.service)... Oct 29 05:32:58.559927 systemd[1]: Reloading... Oct 29 05:32:58.570729 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Oct 29 05:32:58.570743 systemd-tmpfiles[1382]: Skipping /boot Oct 29 05:32:58.630002 zram_generator::config[1417]: No configuration found. Oct 29 05:32:58.865459 systemd[1]: Reloading finished in 305 ms. Oct 29 05:32:58.920561 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 05:32:58.920742 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 05:32:58.922082 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 05:32:58.924871 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 05:32:58.927771 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 05:32:58.929545 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 05:32:58.929816 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 05:32:58.930016 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 05:32:58.932899 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 05:32:58.933097 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 05:32:58.933293 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Oct 29 05:32:58.933392 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 05:32:58.933501 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 05:32:58.936420 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 05:32:58.936633 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 05:32:58.943029 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 29 05:32:58.944766 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 05:32:58.944883 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 05:32:58.945047 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 05:32:58.946250 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 05:32:58.946470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 05:32:58.948803 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 05:32:58.949038 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 05:32:58.951706 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 05:32:58.951931 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 05:32:58.954354 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 05:32:58.954573 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 29 05:32:58.959579 systemd[1]: Finished ensure-sysext.service. Oct 29 05:32:58.964436 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 05:32:58.964497 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 29 05:32:58.966487 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 29 05:32:58.980269 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 05:32:59.021983 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 05:32:59.077074 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 29 05:32:59.080811 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 29 05:32:59.084852 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 29 05:32:59.089141 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 29 05:32:59.093609 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 29 05:32:59.096144 systemd[1]: Reached target time-set.target - System Time Set. 
Oct 29 05:32:59.118882 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 29 05:32:59.129481 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 29 05:32:59.220157 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 29 05:32:59.222477 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 05:32:59.520591 augenrules[1493]: No rules Oct 29 05:32:59.522280 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 05:32:59.522582 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 05:32:59.535122 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 29 05:32:59.539044 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 05:32:59.579230 systemd-udevd[1500]: Using default interface naming scheme 'v257'. Oct 29 05:32:59.607256 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 05:32:59.612745 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 29 05:32:59.817092 systemd-networkd[1508]: lo: Link UP Oct 29 05:32:59.817104 systemd-networkd[1508]: lo: Gained carrier Oct 29 05:32:59.819823 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 29 05:32:59.822231 systemd-networkd[1508]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 05:32:59.822240 systemd-networkd[1508]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 05:32:59.822845 systemd-networkd[1508]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 05:32:59.822912 systemd-networkd[1508]: eth0: Link UP Oct 29 05:32:59.823195 systemd-networkd[1508]: eth0: Gained carrier Oct 29 05:32:59.823213 systemd-networkd[1508]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 05:32:59.828794 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 29 05:32:59.829212 systemd[1]: Reached target network.target - Network. Oct 29 05:32:59.832866 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 29 05:32:59.836413 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 29 05:32:59.841081 systemd-networkd[1508]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 29 05:32:59.841978 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection. Oct 29 05:33:01.546157 systemd-timesyncd[1460]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 29 05:33:01.546220 systemd-timesyncd[1460]: Initial clock synchronization to Wed 2025-10-29 05:33:01.546033 UTC. Oct 29 05:33:01.547474 systemd-resolved[1300]: Clock change detected. Flushing caches. Oct 29 05:33:01.550198 ldconfig[1467]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 29 05:33:01.551351 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Oct 29 05:33:01.563641 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 29 05:33:01.569254 kernel: mousedev: PS/2 mouse device common for all mice Oct 29 05:33:01.575170 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 29 05:33:01.580164 kernel: ACPI: button: Power Button [PWRF] Oct 29 05:33:01.580402 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 29 05:33:01.586936 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 29 05:33:01.654511 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 29 05:33:01.654886 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 29 05:33:01.657972 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 29 05:33:01.659756 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 29 05:33:01.665134 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 29 05:33:01.689697 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 29 05:33:01.740421 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 05:33:01.827443 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 05:33:01.832861 systemd[1]: Reached target sysinit.target - System Initialization. Oct 29 05:33:01.834838 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 29 05:33:01.836918 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 29 05:33:01.838993 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 29 05:33:01.841204 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 29 05:33:01.843205 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 29 05:33:01.845294 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 29 05:33:01.847441 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 29 05:33:01.847498 systemd[1]: Reached target paths.target - Path Units. Oct 29 05:33:01.849145 systemd[1]: Reached target timers.target - Timer Units. Oct 29 05:33:01.851763 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 29 05:33:01.856662 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 29 05:33:01.862153 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 29 05:33:01.864912 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 29 05:33:01.868138 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 29 05:33:01.875941 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
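[editor's note] With sshd.socket listening, sshd is socket-activated on the first connection. A short sketch that dials the port and reads the protocol identification banner, enough to confirm the listener without authenticating (10.0.0.106 is the DHCP lease from this log; adjust for other hosts):

// ssh_banner.go - open a TCP connection to the sshd listener and print the
// SSH identification string (RFC 4253, section 4.2), e.g. "SSH-2.0-OpenSSH_...".
package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.0.0.106:22", 5*time.Second)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	conn.SetReadDeadline(time.Now().Add(5 * time.Second))
	banner, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		log.Fatalf("read banner: %v", err)
	}
	fmt.Print(banner)
}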
Oct 29 05:33:01.877718 kernel: kvm_amd: TSC scaling supported Oct 29 05:33:01.877758 kernel: kvm_amd: Nested Virtualization enabled Oct 29 05:33:01.877783 kernel: kvm_amd: Nested Paging enabled Oct 29 05:33:01.877796 kernel: kvm_amd: LBR virtualization supported Oct 29 05:33:01.881829 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 29 05:33:01.881908 kernel: kvm_amd: Virtual GIF supported Oct 29 05:33:01.882446 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 29 05:33:01.885492 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 29 05:33:01.888555 systemd[1]: Reached target sockets.target - Socket Units. Oct 29 05:33:01.890161 systemd[1]: Reached target basic.target - Basic System. Oct 29 05:33:01.891771 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 29 05:33:01.891818 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 29 05:33:01.893872 systemd[1]: Starting containerd.service - containerd container runtime... Oct 29 05:33:01.897583 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 29 05:33:01.902196 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 29 05:33:01.907299 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 29 05:33:01.912662 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 29 05:33:01.914515 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 29 05:33:01.916209 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 29 05:33:01.920319 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 29 05:33:01.922832 jq[1569]: false Oct 29 05:33:01.923053 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 29 05:33:01.931106 kernel: EDAC MC: Ver: 3.0.0 Oct 29 05:33:01.932252 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 29 05:33:01.936524 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 29 05:33:01.939336 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing passwd entry cache Oct 29 05:33:01.939325 oslogin_cache_refresh[1571]: Refreshing passwd entry cache Oct 29 05:33:02.038158 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting users, quitting Oct 29 05:33:02.038158 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 29 05:33:02.038158 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing group entry cache Oct 29 05:33:02.038158 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting groups, quitting Oct 29 05:33:02.038158 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 29 05:33:01.952600 oslogin_cache_refresh[1571]: Failure getting users, quitting Oct 29 05:33:01.952628 oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Oct 29 05:33:01.952696 oslogin_cache_refresh[1571]: Refreshing group entry cache Oct 29 05:33:01.959320 oslogin_cache_refresh[1571]: Failure getting groups, quitting Oct 29 05:33:01.959336 oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 29 05:33:02.039875 extend-filesystems[1570]: Found /dev/vda6 Oct 29 05:33:02.045009 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 29 05:33:02.052711 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 29 05:33:02.059251 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 29 05:33:02.059922 systemd[1]: Starting update-engine.service - Update Engine... Oct 29 05:33:02.082614 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 29 05:33:02.087897 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 29 05:33:02.090982 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 29 05:33:02.091593 jq[1594]: true Oct 29 05:33:02.091289 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 29 05:33:02.091633 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 29 05:33:02.091899 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 29 05:33:02.094147 systemd[1]: motdgen.service: Deactivated successfully. Oct 29 05:33:02.094399 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 29 05:33:02.097214 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 29 05:33:02.097490 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 29 05:33:02.129757 jq[1600]: true Oct 29 05:33:02.136814 tar[1599]: linux-amd64/LICENSE Oct 29 05:33:02.137176 update_engine[1590]: I20251029 05:33:02.136888 1590 main.cc:92] Flatcar Update Engine starting Oct 29 05:33:02.138202 tar[1599]: linux-amd64/helm Oct 29 05:33:02.142815 systemd-logind[1587]: Watching system buttons on /dev/input/event2 (Power Button) Oct 29 05:33:02.142854 systemd-logind[1587]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 29 05:33:02.151497 extend-filesystems[1570]: Found /dev/vda9 Oct 29 05:33:02.143299 systemd-logind[1587]: New seat seat0. Oct 29 05:33:02.144715 systemd[1]: Started systemd-logind.service - User Login Management. Oct 29 05:33:02.155892 dbus-daemon[1567]: [system] SELinux support is enabled Oct 29 05:33:02.156316 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 29 05:33:02.160363 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 29 05:33:02.160397 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 29 05:33:02.162500 update_engine[1590]: I20251029 05:33:02.162451 1590 update_check_scheduler.cc:74] Next update check in 8m41s Oct 29 05:33:02.162719 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Oct 29 05:33:02.162750 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 29 05:33:02.165419 systemd[1]: Started update-engine.service - Update Engine. Oct 29 05:33:02.168523 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 29 05:33:02.206318 extend-filesystems[1570]: Checking size of /dev/vda9 Oct 29 05:33:02.207785 dbus-daemon[1567]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 29 05:33:02.273190 locksmithd[1617]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 29 05:33:02.278126 extend-filesystems[1570]: Resized partition /dev/vda9 Oct 29 05:33:02.412600 extend-filesystems[1640]: resize2fs 1.47.3 (8-Jul-2025) Oct 29 05:33:02.466119 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 29 05:33:02.487994 bash[1632]: Updated "/home/core/.ssh/authorized_keys" Oct 29 05:33:02.485229 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 29 05:33:02.491901 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 29 05:33:02.504108 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 29 05:33:02.546190 extend-filesystems[1640]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 29 05:33:02.546190 extend-filesystems[1640]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 29 05:33:02.546190 extend-filesystems[1640]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 29 05:33:02.555054 extend-filesystems[1570]: Resized filesystem in /dev/vda9 Oct 29 05:33:02.547424 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 29 05:33:02.556670 sshd_keygen[1584]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 29 05:33:02.547782 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 29 05:33:02.578902 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 29 05:33:02.582718 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 29 05:33:02.618880 systemd[1]: issuegen.service: Deactivated successfully. Oct 29 05:33:02.619181 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 29 05:33:02.626563 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 29 05:33:02.632263 systemd-networkd[1508]: eth0: Gained IPv6LL Oct 29 05:33:02.637740 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 29 05:33:02.643744 systemd[1]: Reached target network-online.target - Network is Online. Oct 29 05:33:02.647557 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 29 05:33:02.652274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 05:33:02.661926 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 29 05:33:02.663797 containerd[1601]: time="2025-10-29T05:33:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 29 05:33:02.665493 containerd[1601]: time="2025-10-29T05:33:02.664411583Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 29 05:33:02.664830 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Oct 29 05:33:02.679188 containerd[1601]: time="2025-10-29T05:33:02.679129203Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.718µs" Oct 29 05:33:02.679188 containerd[1601]: time="2025-10-29T05:33:02.679174268Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 29 05:33:02.679269 containerd[1601]: time="2025-10-29T05:33:02.679198433Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 29 05:33:02.680107 containerd[1601]: time="2025-10-29T05:33:02.679392868Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 29 05:33:02.680107 containerd[1601]: time="2025-10-29T05:33:02.679414278Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 29 05:33:02.680107 containerd[1601]: time="2025-10-29T05:33:02.679444154Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 05:33:02.680199 containerd[1601]: time="2025-10-29T05:33:02.680154897Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 05:33:02.680199 containerd[1601]: time="2025-10-29T05:33:02.680173111Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 05:33:02.682473 containerd[1601]: time="2025-10-29T05:33:02.680498141Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 05:33:02.682473 containerd[1601]: time="2025-10-29T05:33:02.680529920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 05:33:02.682473 containerd[1601]: time="2025-10-29T05:33:02.680546872Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 05:33:02.682473 containerd[1601]: time="2025-10-29T05:33:02.680557302Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 29 05:33:02.682473 containerd[1601]: time="2025-10-29T05:33:02.680672458Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 29 05:33:02.682473 containerd[1601]: time="2025-10-29T05:33:02.680927977Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 29 05:33:02.682473 containerd[1601]: time="2025-10-29T05:33:02.680965377Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 29 05:33:02.682473 containerd[1601]: time="2025-10-29T05:33:02.680975326Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 29 05:33:02.682473 containerd[1601]: time="2025-10-29T05:33:02.681631546Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 29 05:33:02.682473 containerd[1601]: 
time="2025-10-29T05:33:02.681932571Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 29 05:33:02.682473 containerd[1601]: time="2025-10-29T05:33:02.682056744Z" level=info msg="metadata content store policy set" policy=shared Oct 29 05:33:02.681485 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 29 05:33:02.686966 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 29 05:33:02.690111 containerd[1601]: time="2025-10-29T05:33:02.689986164Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 29 05:33:02.692935 containerd[1601]: time="2025-10-29T05:33:02.690219802Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 29 05:33:02.692935 containerd[1601]: time="2025-10-29T05:33:02.690246723Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 29 05:33:02.692935 containerd[1601]: time="2025-10-29T05:33:02.690260178Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 29 05:33:02.692935 containerd[1601]: time="2025-10-29T05:33:02.690274545Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 29 05:33:02.692935 containerd[1601]: time="2025-10-29T05:33:02.690286147Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 29 05:33:02.692935 containerd[1601]: time="2025-10-29T05:33:02.690300103Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 29 05:33:02.692935 containerd[1601]: time="2025-10-29T05:33:02.690318377Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 29 05:33:02.692935 containerd[1601]: time="2025-10-29T05:33:02.690331071Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 29 05:33:02.692935 containerd[1601]: time="2025-10-29T05:33:02.690348714Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 29 05:33:02.692935 containerd[1601]: time="2025-10-29T05:33:02.690358553Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 29 05:33:02.692935 containerd[1601]: time="2025-10-29T05:33:02.690370695Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 29 05:33:02.692935 containerd[1601]: time="2025-10-29T05:33:02.690541646Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 29 05:33:02.692935 containerd[1601]: time="2025-10-29T05:33:02.690566843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 29 05:33:02.692935 containerd[1601]: time="2025-10-29T05:33:02.690580950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 29 05:33:02.690913 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Oct 29 05:33:02.693320 containerd[1601]: time="2025-10-29T05:33:02.690596218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 29 05:33:02.693320 containerd[1601]: time="2025-10-29T05:33:02.690607720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 29 05:33:02.693320 containerd[1601]: time="2025-10-29T05:33:02.690617117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 29 05:33:02.693320 containerd[1601]: time="2025-10-29T05:33:02.690629471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 29 05:33:02.693320 containerd[1601]: time="2025-10-29T05:33:02.690643507Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 29 05:33:02.693320 containerd[1601]: time="2025-10-29T05:33:02.690655018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 29 05:33:02.693320 containerd[1601]: time="2025-10-29T05:33:02.690669355Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 29 05:33:02.693320 containerd[1601]: time="2025-10-29T05:33:02.690681899Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 29 05:33:02.693320 containerd[1601]: time="2025-10-29T05:33:02.690764484Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 29 05:33:02.693320 containerd[1601]: time="2025-10-29T05:33:02.690784371Z" level=info msg="Start snapshots syncer" Oct 29 05:33:02.693320 containerd[1601]: time="2025-10-29T05:33:02.690827412Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 29 05:33:02.692981 systemd[1]: Reached target getty.target - Login Prompts. 
Oct 29 05:33:02.695091 containerd[1601]: time="2025-10-29T05:33:02.693664322Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 29 05:33:02.695091 containerd[1601]: time="2025-10-29T05:33:02.693736377Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 29 05:33:02.695294 containerd[1601]: time="2025-10-29T05:33:02.693886769Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 29 05:33:02.695294 containerd[1601]: time="2025-10-29T05:33:02.694033264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 29 05:33:02.695294 containerd[1601]: time="2025-10-29T05:33:02.694054524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 29 05:33:02.695294 containerd[1601]: time="2025-10-29T05:33:02.694064853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 29 05:33:02.695294 containerd[1601]: time="2025-10-29T05:33:02.694091563Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 29 05:33:02.695294 containerd[1601]: time="2025-10-29T05:33:02.694106251Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 29 05:33:02.695294 containerd[1601]: time="2025-10-29T05:33:02.694117662Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 29 05:33:02.695294 containerd[1601]: time="2025-10-29T05:33:02.694293672Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 29 05:33:02.695294 containerd[1601]: time="2025-10-29T05:33:02.694339829Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 29 05:33:02.695294 containerd[1601]: time="2025-10-29T05:33:02.694353304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 29 05:33:02.695294 containerd[1601]: time="2025-10-29T05:33:02.694370667Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 29 05:33:02.695294 containerd[1601]: time="2025-10-29T05:33:02.694517732Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 05:33:02.695294 containerd[1601]: time="2025-10-29T05:33:02.694537810Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 05:33:02.695294 containerd[1601]: time="2025-10-29T05:33:02.694547699Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 05:33:02.695542 containerd[1601]: time="2025-10-29T05:33:02.694561034Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 05:33:02.695542 containerd[1601]: time="2025-10-29T05:33:02.694572435Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 29 05:33:02.695542 containerd[1601]: time="2025-10-29T05:33:02.694583826Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 29 05:33:02.695542 containerd[1601]: time="2025-10-29T05:33:02.694606289Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 29 05:33:02.695542 containerd[1601]: time="2025-10-29T05:33:02.694631646Z" level=info msg="runtime interface created" Oct 29 05:33:02.695542 containerd[1601]: time="2025-10-29T05:33:02.694639541Z" level=info msg="created NRI interface" Oct 29 05:33:02.695542 containerd[1601]: time="2025-10-29T05:33:02.694651974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 29 05:33:02.695542 containerd[1601]: time="2025-10-29T05:33:02.694664117Z" level=info msg="Connect containerd service" Oct 29 05:33:02.695542 containerd[1601]: time="2025-10-29T05:33:02.694729379Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 29 05:33:02.696180 systemd[1]: Started sshd@0-10.0.0.106:22-10.0.0.1:39582.service - OpenSSH per-connection server daemon (10.0.0.1:39582). Oct 29 05:33:02.762973 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 29 05:33:02.767995 containerd[1601]: time="2025-10-29T05:33:02.767940206Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 05:33:02.904246 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 29 05:33:02.904558 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 29 05:33:02.906867 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
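[editor's note] containerd is now serving on /run/containerd/containerd.sock, and the CNI load error is expected this early, before any network config lands in /etc/cni/net.d. A minimal sketch with the containerd Go client (assumes the github.com/containerd/containerd module) that connects to that socket in the k8s.io namespace registered above and prints the daemon version:

// containerd_version.go - connect to the socket the log shows containerd
// serving on and report its version, using the k8s.io namespace that the
// log shows being registered with NRI.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	v, err := client.Version(ctx)
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)
}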
Oct 29 05:33:03.036181 tar[1599]: linux-amd64/README.md Oct 29 05:33:03.045672 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 39582 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:33:03.047789 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:33:03.183283 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 29 05:33:03.188321 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 29 05:33:03.192176 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 29 05:33:03.204390 systemd-logind[1587]: New session 1 of user core. Oct 29 05:33:03.215185 containerd[1601]: time="2025-10-29T05:33:03.215099399Z" level=info msg="Start subscribing containerd event" Oct 29 05:33:03.215308 containerd[1601]: time="2025-10-29T05:33:03.215209185Z" level=info msg="Start recovering state" Oct 29 05:33:03.215427 containerd[1601]: time="2025-10-29T05:33:03.215399321Z" level=info msg="Start event monitor" Oct 29 05:33:03.215452 containerd[1601]: time="2025-10-29T05:33:03.215432183Z" level=info msg="Start cni network conf syncer for default" Oct 29 05:33:03.215452 containerd[1601]: time="2025-10-29T05:33:03.215448914Z" level=info msg="Start streaming server" Oct 29 05:33:03.215512 containerd[1601]: time="2025-10-29T05:33:03.215465005Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 29 05:33:03.215512 containerd[1601]: time="2025-10-29T05:33:03.215478229Z" level=info msg="runtime interface starting up..." Oct 29 05:33:03.215512 containerd[1601]: time="2025-10-29T05:33:03.215486054Z" level=info msg="starting plugins..." Oct 29 05:33:03.215512 containerd[1601]: time="2025-10-29T05:33:03.215502655Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 29 05:33:03.216504 containerd[1601]: time="2025-10-29T05:33:03.216397814Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 29 05:33:03.216504 containerd[1601]: time="2025-10-29T05:33:03.216488504Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 29 05:33:03.219209 containerd[1601]: time="2025-10-29T05:33:03.219182386Z" level=info msg="containerd successfully booted in 0.555986s" Oct 29 05:33:03.223870 systemd[1]: Started containerd.service - containerd container runtime. Oct 29 05:33:03.226524 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 29 05:33:03.232854 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 29 05:33:03.254731 (systemd)[1711]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:33:03.257890 systemd-logind[1587]: New session c1 of user core. Oct 29 05:33:03.395312 systemd[1711]: Queued start job for default target default.target. Oct 29 05:33:03.410531 systemd[1711]: Created slice app.slice - User Application Slice. Oct 29 05:33:03.410561 systemd[1711]: Reached target paths.target - Paths. Oct 29 05:33:03.410609 systemd[1711]: Reached target timers.target - Timers. Oct 29 05:33:03.412365 systemd[1711]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 29 05:33:03.577580 systemd[1711]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 29 05:33:03.577750 systemd[1711]: Reached target sockets.target - Sockets. Oct 29 05:33:03.577805 systemd[1711]: Reached target basic.target - Basic System. Oct 29 05:33:03.577868 systemd[1711]: Reached target default.target - Main User Target. 
Oct 29 05:33:03.577908 systemd[1711]: Startup finished in 312ms. Oct 29 05:33:03.578196 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 29 05:33:03.581951 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 29 05:33:03.615302 systemd[1]: Started sshd@1-10.0.0.106:22-10.0.0.1:39590.service - OpenSSH per-connection server daemon (10.0.0.1:39590). Oct 29 05:33:03.689403 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 39590 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:33:03.690965 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:33:03.695632 systemd-logind[1587]: New session 2 of user core. Oct 29 05:33:03.707250 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 29 05:33:03.723556 sshd[1725]: Connection closed by 10.0.0.1 port 39590 Oct 29 05:33:03.724248 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Oct 29 05:33:03.732641 systemd[1]: sshd@1-10.0.0.106:22-10.0.0.1:39590.service: Deactivated successfully. Oct 29 05:33:03.734650 systemd[1]: session-2.scope: Deactivated successfully. Oct 29 05:33:03.735417 systemd-logind[1587]: Session 2 logged out. Waiting for processes to exit. Oct 29 05:33:03.738482 systemd[1]: Started sshd@2-10.0.0.106:22-10.0.0.1:39602.service - OpenSSH per-connection server daemon (10.0.0.1:39602). Oct 29 05:33:03.741247 systemd-logind[1587]: Removed session 2. Oct 29 05:33:03.825629 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 39602 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:33:03.827184 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:33:03.831878 systemd-logind[1587]: New session 3 of user core. Oct 29 05:33:03.846233 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 29 05:33:03.869380 sshd[1734]: Connection closed by 10.0.0.1 port 39602 Oct 29 05:33:03.869713 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Oct 29 05:33:03.873978 systemd[1]: sshd@2-10.0.0.106:22-10.0.0.1:39602.service: Deactivated successfully. Oct 29 05:33:03.875972 systemd[1]: session-3.scope: Deactivated successfully. Oct 29 05:33:03.877461 systemd-logind[1587]: Session 3 logged out. Waiting for processes to exit. Oct 29 05:33:03.879066 systemd-logind[1587]: Removed session 3. Oct 29 05:33:04.246845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 05:33:04.249559 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 29 05:33:04.251603 systemd[1]: Startup finished in 2.923s (kernel) + 7.326s (initrd) + 6.292s (userspace) = 16.542s. Oct 29 05:33:04.260397 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 05:33:05.081976 kubelet[1744]: E1029 05:33:05.081864 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 05:33:05.086984 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 05:33:05.087299 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
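[editor's note] The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on this image it is normally written later (for example by kubeadm or provisioning scripts), so the failures and scheduled restarts that follow are expected until then. A small diagnostic sketch (a hypothetical helper, not part of the kubelet) that performs the same existence check and reports it in plain terms:

// kubelet_config_check.go - reproduce the condition behind the error above:
// the kubelet cannot start until /var/lib/kubelet/config.yaml exists.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

const kubeletConfig = "/var/lib/kubelet/config.yaml"

func main() {
	info, err := os.Stat(kubeletConfig)
	switch {
	case errors.Is(err, fs.ErrNotExist):
		fmt.Printf("%s is missing; kubelet will keep exiting until it is written\n", kubeletConfig)
	case err != nil:
		fmt.Printf("cannot stat %s: %v\n", kubeletConfig, err)
	case info.Size() == 0:
		fmt.Printf("%s exists but is empty\n", kubeletConfig)
	default:
		fmt.Printf("%s present (%d bytes); kubelet should load it on the next start\n", kubeletConfig, info.Size())
	}
}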
Oct 29 05:33:05.087893 systemd[1]: kubelet.service: Consumed 2.077s CPU time, 257.5M memory peak. Oct 29 05:33:13.892940 systemd[1]: Started sshd@3-10.0.0.106:22-10.0.0.1:40036.service - OpenSSH per-connection server daemon (10.0.0.1:40036). Oct 29 05:33:13.945839 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 40036 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:33:13.947529 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:33:13.951831 systemd-logind[1587]: New session 4 of user core. Oct 29 05:33:13.966204 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 29 05:33:13.980262 sshd[1761]: Connection closed by 10.0.0.1 port 40036 Oct 29 05:33:13.980492 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Oct 29 05:33:13.989507 systemd[1]: sshd@3-10.0.0.106:22-10.0.0.1:40036.service: Deactivated successfully. Oct 29 05:33:13.991741 systemd[1]: session-4.scope: Deactivated successfully. Oct 29 05:33:13.992588 systemd-logind[1587]: Session 4 logged out. Waiting for processes to exit. Oct 29 05:33:13.996215 systemd[1]: Started sshd@4-10.0.0.106:22-10.0.0.1:40048.service - OpenSSH per-connection server daemon (10.0.0.1:40048). Oct 29 05:33:13.996764 systemd-logind[1587]: Removed session 4. Oct 29 05:33:14.044295 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 40048 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:33:14.045640 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:33:14.049705 systemd-logind[1587]: New session 5 of user core. Oct 29 05:33:14.061262 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 29 05:33:14.070000 sshd[1770]: Connection closed by 10.0.0.1 port 40048 Oct 29 05:33:14.070315 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Oct 29 05:33:14.082573 systemd[1]: sshd@4-10.0.0.106:22-10.0.0.1:40048.service: Deactivated successfully. Oct 29 05:33:14.084411 systemd[1]: session-5.scope: Deactivated successfully. Oct 29 05:33:14.085152 systemd-logind[1587]: Session 5 logged out. Waiting for processes to exit. Oct 29 05:33:14.087736 systemd[1]: Started sshd@5-10.0.0.106:22-10.0.0.1:40058.service - OpenSSH per-connection server daemon (10.0.0.1:40058). Oct 29 05:33:14.088373 systemd-logind[1587]: Removed session 5. Oct 29 05:33:14.135514 sshd[1776]: Accepted publickey for core from 10.0.0.1 port 40058 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:33:14.136781 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:33:14.140810 systemd-logind[1587]: New session 6 of user core. Oct 29 05:33:14.150220 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 29 05:33:14.165201 sshd[1779]: Connection closed by 10.0.0.1 port 40058 Oct 29 05:33:14.165513 sshd-session[1776]: pam_unix(sshd:session): session closed for user core Oct 29 05:33:14.179188 systemd[1]: sshd@5-10.0.0.106:22-10.0.0.1:40058.service: Deactivated successfully. Oct 29 05:33:14.181034 systemd[1]: session-6.scope: Deactivated successfully. Oct 29 05:33:14.181941 systemd-logind[1587]: Session 6 logged out. Waiting for processes to exit. Oct 29 05:33:14.185020 systemd[1]: Started sshd@6-10.0.0.106:22-10.0.0.1:40072.service - OpenSSH per-connection server daemon (10.0.0.1:40072). Oct 29 05:33:14.185880 systemd-logind[1587]: Removed session 6. 
Oct 29 05:33:14.249372 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 40072 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:33:14.250788 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:33:14.255855 systemd-logind[1587]: New session 7 of user core. Oct 29 05:33:14.269284 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 29 05:33:14.294796 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 29 05:33:14.295181 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 05:33:14.324724 sudo[1789]: pam_unix(sudo:session): session closed for user root Oct 29 05:33:14.326932 sshd[1788]: Connection closed by 10.0.0.1 port 40072 Oct 29 05:33:14.327355 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Oct 29 05:33:14.345718 systemd[1]: sshd@6-10.0.0.106:22-10.0.0.1:40072.service: Deactivated successfully. Oct 29 05:33:14.347585 systemd[1]: session-7.scope: Deactivated successfully. Oct 29 05:33:14.348576 systemd-logind[1587]: Session 7 logged out. Waiting for processes to exit. Oct 29 05:33:14.351626 systemd[1]: Started sshd@7-10.0.0.106:22-10.0.0.1:40078.service - OpenSSH per-connection server daemon (10.0.0.1:40078). Oct 29 05:33:14.352402 systemd-logind[1587]: Removed session 7. Oct 29 05:33:14.415850 sshd[1795]: Accepted publickey for core from 10.0.0.1 port 40078 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:33:14.417879 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:33:14.422793 systemd-logind[1587]: New session 8 of user core. Oct 29 05:33:14.432241 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 29 05:33:14.449216 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 29 05:33:14.449630 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 05:33:14.456957 sudo[1800]: pam_unix(sudo:session): session closed for user root Oct 29 05:33:14.467606 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 29 05:33:14.468037 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 05:33:14.480337 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 05:33:14.535682 augenrules[1822]: No rules Oct 29 05:33:14.537796 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 05:33:14.538178 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 05:33:14.539618 sudo[1799]: pam_unix(sudo:session): session closed for user root Oct 29 05:33:14.541741 sshd[1798]: Connection closed by 10.0.0.1 port 40078 Oct 29 05:33:14.542107 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Oct 29 05:33:14.552273 systemd[1]: sshd@7-10.0.0.106:22-10.0.0.1:40078.service: Deactivated successfully. Oct 29 05:33:14.554847 systemd[1]: session-8.scope: Deactivated successfully. Oct 29 05:33:14.555852 systemd-logind[1587]: Session 8 logged out. Waiting for processes to exit. Oct 29 05:33:14.559419 systemd[1]: Started sshd@8-10.0.0.106:22-10.0.0.1:40088.service - OpenSSH per-connection server daemon (10.0.0.1:40088). Oct 29 05:33:14.560116 systemd-logind[1587]: Removed session 8. 
Oct 29 05:33:14.610700 sshd[1831]: Accepted publickey for core from 10.0.0.1 port 40088 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:33:14.612230 sshd-session[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:33:14.617373 systemd-logind[1587]: New session 9 of user core. Oct 29 05:33:14.624237 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 29 05:33:14.641374 sudo[1835]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 29 05:33:14.641836 sudo[1835]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 05:33:15.285404 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 29 05:33:15.287673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 05:33:15.477519 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 29 05:33:15.499413 (dockerd)[1858]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 29 05:33:15.833893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 05:33:15.846439 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 05:33:15.907567 kubelet[1871]: E1029 05:33:15.907489 1871 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 05:33:15.914244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 05:33:15.914451 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 05:33:15.914860 systemd[1]: kubelet.service: Consumed 394ms CPU time, 110.2M memory peak. Oct 29 05:33:16.187868 dockerd[1858]: time="2025-10-29T05:33:16.187672234Z" level=info msg="Starting up" Oct 29 05:33:16.188899 dockerd[1858]: time="2025-10-29T05:33:16.188849531Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 29 05:33:16.209347 dockerd[1858]: time="2025-10-29T05:33:16.209260018Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 29 05:33:17.316456 dockerd[1858]: time="2025-10-29T05:33:17.316378541Z" level=info msg="Loading containers: start." Oct 29 05:33:17.473127 kernel: Initializing XFRM netlink socket Oct 29 05:33:17.774882 systemd-networkd[1508]: docker0: Link UP Oct 29 05:33:17.780824 dockerd[1858]: time="2025-10-29T05:33:17.780776673Z" level=info msg="Loading containers: done." Oct 29 05:33:17.798611 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck25325148-merged.mount: Deactivated successfully. 
Oct 29 05:33:17.800710 dockerd[1858]: time="2025-10-29T05:33:17.800648179Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 29 05:33:17.800870 dockerd[1858]: time="2025-10-29T05:33:17.800754969Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 29 05:33:17.800899 dockerd[1858]: time="2025-10-29T05:33:17.800870135Z" level=info msg="Initializing buildkit" Oct 29 05:33:17.831734 dockerd[1858]: time="2025-10-29T05:33:17.831667101Z" level=info msg="Completed buildkit initialization" Oct 29 05:33:17.837742 dockerd[1858]: time="2025-10-29T05:33:17.837709563Z" level=info msg="Daemon has completed initialization" Oct 29 05:33:17.837817 dockerd[1858]: time="2025-10-29T05:33:17.837780856Z" level=info msg="API listen on /run/docker.sock" Oct 29 05:33:17.837986 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 29 05:33:18.602117 containerd[1601]: time="2025-10-29T05:33:18.602046497Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 29 05:33:19.211717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3567380544.mount: Deactivated successfully. Oct 29 05:33:20.424660 containerd[1601]: time="2025-10-29T05:33:20.424596679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:20.425544 containerd[1601]: time="2025-10-29T05:33:20.425476399Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Oct 29 05:33:20.426908 containerd[1601]: time="2025-10-29T05:33:20.426864592Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:20.429697 containerd[1601]: time="2025-10-29T05:33:20.429669292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:20.430572 containerd[1601]: time="2025-10-29T05:33:20.430539194Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.828421123s" Oct 29 05:33:20.430572 containerd[1601]: time="2025-10-29T05:33:20.430575081Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 29 05:33:20.431554 containerd[1601]: time="2025-10-29T05:33:20.431478565Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 29 05:33:21.784138 containerd[1601]: time="2025-10-29T05:33:21.784051853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:21.784901 containerd[1601]: time="2025-10-29T05:33:21.784845842Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Oct 29 05:33:21.785965 containerd[1601]: time="2025-10-29T05:33:21.785929474Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:21.788337 containerd[1601]: time="2025-10-29T05:33:21.788301713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:21.789238 containerd[1601]: time="2025-10-29T05:33:21.789204696Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.357692098s" Oct 29 05:33:21.789303 containerd[1601]: time="2025-10-29T05:33:21.789242367Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 29 05:33:21.789838 containerd[1601]: time="2025-10-29T05:33:21.789809140Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 29 05:33:22.961880 containerd[1601]: time="2025-10-29T05:33:22.961811002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:22.962730 containerd[1601]: time="2025-10-29T05:33:22.962669592Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Oct 29 05:33:22.966083 containerd[1601]: time="2025-10-29T05:33:22.963925938Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:22.967449 containerd[1601]: time="2025-10-29T05:33:22.967393271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:22.968348 containerd[1601]: time="2025-10-29T05:33:22.968308146Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.178472216s" Oct 29 05:33:22.968348 containerd[1601]: time="2025-10-29T05:33:22.968348041Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 29 05:33:22.969242 containerd[1601]: time="2025-10-29T05:33:22.969204207Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 29 05:33:24.344566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount198547752.mount: Deactivated successfully. 
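[editor's note] The Docker engine finished initializing earlier ("API listen on /run/docker.sock") while the image pulls continue through containerd's CRI plugin. A sketch, assuming the github.com/docker/docker/client module, that pings that socket with the official Go client to confirm the engine is answering:

// docker_ping.go - ping the Docker Engine API exposed on /run/docker.sock
// (the default when DOCKER_HOST is unset).
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatalf("client: %v", err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatalf("ping: %v", err)
	}
	fmt.Printf("docker API %s, OS type %s\n", ping.APIVersion, ping.OSType)
}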
Oct 29 05:33:25.025375 containerd[1601]: time="2025-10-29T05:33:25.025273531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:25.026004 containerd[1601]: time="2025-10-29T05:33:25.025936203Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Oct 29 05:33:25.027118 containerd[1601]: time="2025-10-29T05:33:25.027063928Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:25.029405 containerd[1601]: time="2025-10-29T05:33:25.029355095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:25.029989 containerd[1601]: time="2025-10-29T05:33:25.029935533Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.060701501s" Oct 29 05:33:25.030032 containerd[1601]: time="2025-10-29T05:33:25.029988443Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 29 05:33:25.030756 containerd[1601]: time="2025-10-29T05:33:25.030566797Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 29 05:33:25.818790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2623972732.mount: Deactivated successfully. Oct 29 05:33:26.035149 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 29 05:33:26.037215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 05:33:26.588299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 05:33:26.606385 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 05:33:26.814057 kubelet[2185]: E1029 05:33:26.813984 2185 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 05:33:26.818749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 05:33:26.818957 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 05:33:26.819383 systemd[1]: kubelet.service: Consumed 406ms CPU time, 108.9M memory peak. 
Oct 29 05:33:27.427643 containerd[1601]: time="2025-10-29T05:33:27.427574763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:27.428453 containerd[1601]: time="2025-10-29T05:33:27.428393358Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Oct 29 05:33:27.429624 containerd[1601]: time="2025-10-29T05:33:27.429586155Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:27.432205 containerd[1601]: time="2025-10-29T05:33:27.432171954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:27.433489 containerd[1601]: time="2025-10-29T05:33:27.433431667Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.40283315s" Oct 29 05:33:27.433538 containerd[1601]: time="2025-10-29T05:33:27.433488483Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 29 05:33:27.434144 containerd[1601]: time="2025-10-29T05:33:27.434109768Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 29 05:33:27.918910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3382041587.mount: Deactivated successfully. 
Oct 29 05:33:27.927421 containerd[1601]: time="2025-10-29T05:33:27.927369236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:27.928258 containerd[1601]: time="2025-10-29T05:33:27.928186799Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Oct 29 05:33:27.929567 containerd[1601]: time="2025-10-29T05:33:27.929527884Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:27.931806 containerd[1601]: time="2025-10-29T05:33:27.931768206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:27.932500 containerd[1601]: time="2025-10-29T05:33:27.932436799Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 498.28893ms" Oct 29 05:33:27.932500 containerd[1601]: time="2025-10-29T05:33:27.932492464Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 29 05:33:27.933095 containerd[1601]: time="2025-10-29T05:33:27.933017899Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 29 05:33:31.605922 containerd[1601]: time="2025-10-29T05:33:31.605837577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:31.606839 containerd[1601]: time="2025-10-29T05:33:31.606773442Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Oct 29 05:33:31.607957 containerd[1601]: time="2025-10-29T05:33:31.607901177Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:31.610567 containerd[1601]: time="2025-10-29T05:33:31.610529787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:31.611669 containerd[1601]: time="2025-10-29T05:33:31.611626193Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.678577576s" Oct 29 05:33:31.611669 containerd[1601]: time="2025-10-29T05:33:31.611665627Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 29 05:33:34.648837 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 05:33:34.649012 systemd[1]: kubelet.service: Consumed 406ms CPU time, 108.9M memory peak. 
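[Editor's note] The "Pulled image ... size ... in ..." lines report a byte count and wall-clock time, so average pull throughput can be read straight off them: etcd:3.6.4-0 above is 74,311,308 bytes in about 3.68s (roughly 20 MB/s), while pause:3.10.1 is 320,448 bytes in about 498ms. The helper below is only an arithmetic illustration using those logged numbers.

    package main

    import (
        "fmt"
        "time"
    )

    // Illustrative arithmetic only: average throughput of the pulls reported
    // above, using the byte counts and durations from the containerd log lines.
    func throughputMBps(bytes int64, d time.Duration) float64 {
        return float64(bytes) / d.Seconds() / 1e6
    }

    func main() {
        etcdDur, _ := time.ParseDuration("3.678577576s")
        pauseDur, _ := time.ParseDuration("498.28893ms")
        fmt.Printf("etcd:  %.1f MB/s\n", throughputMBps(74311308, etcdDur)) // ~20.2 MB/s
        fmt.Printf("pause: %.2f MB/s\n", throughputMBps(320448, pauseDur))  // ~0.64 MB/s
    }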
Oct 29 05:33:34.651392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 05:33:34.682512 systemd[1]: Reload requested from client PID 2310 ('systemctl') (unit session-9.scope)... Oct 29 05:33:34.682536 systemd[1]: Reloading... Oct 29 05:33:34.780133 zram_generator::config[2363]: No configuration found. Oct 29 05:33:35.141946 systemd[1]: Reloading finished in 458 ms. Oct 29 05:33:35.217861 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 29 05:33:35.217965 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 29 05:33:35.218312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 05:33:35.218359 systemd[1]: kubelet.service: Consumed 163ms CPU time, 98.2M memory peak. Oct 29 05:33:35.220187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 05:33:35.452351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 05:33:35.466473 (kubelet)[2402]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 05:33:35.513521 kubelet[2402]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 05:33:35.513521 kubelet[2402]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 05:33:35.514942 kubelet[2402]: I1029 05:33:35.514805 2402 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 05:33:37.647706 kubelet[2402]: I1029 05:33:37.647652 2402 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 29 05:33:37.647706 kubelet[2402]: I1029 05:33:37.647685 2402 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 05:33:37.682602 kubelet[2402]: I1029 05:33:37.682569 2402 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 29 05:33:37.682602 kubelet[2402]: I1029 05:33:37.682591 2402 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 29 05:33:37.682878 kubelet[2402]: I1029 05:33:37.682854 2402 server.go:956] "Client rotation is on, will bootstrap in background" Oct 29 05:33:37.716815 kubelet[2402]: E1029 05:33:37.716751 2402 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 29 05:33:37.716970 kubelet[2402]: I1029 05:33:37.716948 2402 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 05:33:37.720004 kubelet[2402]: I1029 05:33:37.719975 2402 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 05:33:37.726395 kubelet[2402]: I1029 05:33:37.726351 2402 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 29 05:33:37.727239 kubelet[2402]: I1029 05:33:37.727193 2402 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 05:33:37.727431 kubelet[2402]: I1029 05:33:37.727223 2402 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 05:33:37.727568 kubelet[2402]: I1029 05:33:37.727446 2402 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 05:33:37.727568 kubelet[2402]: I1029 05:33:37.727459 2402 container_manager_linux.go:306] "Creating device plugin manager" Oct 29 05:33:37.727648 kubelet[2402]: I1029 05:33:37.727597 2402 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 29 05:33:37.771163 kubelet[2402]: I1029 05:33:37.771122 2402 state_mem.go:36] "Initialized new in-memory state store" Oct 29 05:33:37.771353 kubelet[2402]: I1029 05:33:37.771325 2402 kubelet.go:475] "Attempting to sync node with API server" Oct 29 05:33:37.771353 kubelet[2402]: I1029 05:33:37.771342 2402 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 05:33:37.771425 kubelet[2402]: I1029 05:33:37.771375 2402 kubelet.go:387] "Adding apiserver pod source" Oct 29 05:33:37.771425 kubelet[2402]: I1029 05:33:37.771401 2402 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 05:33:37.772088 kubelet[2402]: E1029 05:33:37.771984 2402 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 29 05:33:37.772088 kubelet[2402]: E1029 05:33:37.771999 2402 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 29 05:33:37.774388 kubelet[2402]: I1029 05:33:37.774369 2402 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 05:33:37.774907 kubelet[2402]: I1029 05:33:37.774877 2402 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 29 05:33:37.774907 kubelet[2402]: I1029 05:33:37.774905 2402 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 29 05:33:37.774985 kubelet[2402]: W1029 05:33:37.774972 2402 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 29 05:33:37.778428 kubelet[2402]: I1029 05:33:37.778409 2402 server.go:1262] "Started kubelet" Oct 29 05:33:37.778623 kubelet[2402]: I1029 05:33:37.778468 2402 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 05:33:37.778912 kubelet[2402]: I1029 05:33:37.778881 2402 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 05:33:37.778956 kubelet[2402]: I1029 05:33:37.778931 2402 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 29 05:33:37.779935 kubelet[2402]: I1029 05:33:37.779517 2402 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 05:33:37.780869 kubelet[2402]: I1029 05:33:37.780244 2402 server.go:310] "Adding debug handlers to kubelet server" Oct 29 05:33:37.780869 kubelet[2402]: I1029 05:33:37.780342 2402 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 05:33:37.781214 kubelet[2402]: I1029 05:33:37.780962 2402 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 05:33:37.783730 kubelet[2402]: E1029 05:33:37.783351 2402 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 05:33:37.783730 kubelet[2402]: I1029 05:33:37.783423 2402 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 29 05:33:37.783730 kubelet[2402]: I1029 05:33:37.783582 2402 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 29 05:33:37.783730 kubelet[2402]: I1029 05:33:37.783631 2402 reconciler.go:29] "Reconciler: start to sync state" Oct 29 05:33:37.783932 kubelet[2402]: E1029 05:33:37.783905 2402 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 29 05:33:37.784058 kubelet[2402]: E1029 05:33:37.784012 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="200ms" Oct 29 05:33:37.784170 kubelet[2402]: E1029 05:33:37.782743 2402 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.106:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872df625acbd006 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-29 05:33:37.778380806 +0000 UTC m=+2.307889000,LastTimestamp:2025-10-29 05:33:37.778380806 +0000 UTC m=+2.307889000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 29 05:33:37.784942 kubelet[2402]: E1029 05:33:37.784473 2402 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 05:33:37.784942 kubelet[2402]: I1029 05:33:37.784512 2402 factory.go:223] Registration of the systemd container factory successfully Oct 29 05:33:37.784942 kubelet[2402]: I1029 05:33:37.784598 2402 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 05:33:37.785441 kubelet[2402]: I1029 05:33:37.785424 2402 factory.go:223] Registration of the containerd container factory successfully Oct 29 05:33:37.788514 kubelet[2402]: I1029 05:33:37.788455 2402 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 29 05:33:37.803442 kubelet[2402]: I1029 05:33:37.803411 2402 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 05:33:37.803442 kubelet[2402]: I1029 05:33:37.803428 2402 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 05:33:37.803442 kubelet[2402]: I1029 05:33:37.803443 2402 state_mem.go:36] "Initialized new in-memory state store" Oct 29 05:33:37.806838 kubelet[2402]: I1029 05:33:37.806799 2402 policy_none.go:49] "None policy: Start" Oct 29 05:33:37.806838 kubelet[2402]: I1029 05:33:37.806826 2402 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 29 05:33:37.806838 kubelet[2402]: I1029 05:33:37.806837 2402 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 29 05:33:37.808927 kubelet[2402]: I1029 05:33:37.808896 2402 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Oct 29 05:33:37.808980 kubelet[2402]: I1029 05:33:37.808933 2402 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 29 05:33:37.808980 kubelet[2402]: I1029 05:33:37.808959 2402 kubelet.go:2427] "Starting kubelet main sync loop" Oct 29 05:33:37.809054 kubelet[2402]: E1029 05:33:37.809000 2402 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 05:33:37.810197 kubelet[2402]: I1029 05:33:37.809664 2402 policy_none.go:47] "Start" Oct 29 05:33:37.810897 kubelet[2402]: E1029 05:33:37.810362 2402 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 29 05:33:37.815417 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 29 05:33:37.836198 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 29 05:33:37.856183 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 29 05:33:37.857684 kubelet[2402]: E1029 05:33:37.857631 2402 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 05:33:37.857940 kubelet[2402]: I1029 05:33:37.857922 2402 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 05:33:37.857997 kubelet[2402]: I1029 05:33:37.857944 2402 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 05:33:37.858316 kubelet[2402]: I1029 05:33:37.858247 2402 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 05:33:37.859477 kubelet[2402]: E1029 05:33:37.859441 2402 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 29 05:33:37.859568 kubelet[2402]: E1029 05:33:37.859493 2402 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 29 05:33:37.922599 systemd[1]: Created slice kubepods-burstable-pod737ab6952a2f8343db887e31b95ff356.slice - libcontainer container kubepods-burstable-pod737ab6952a2f8343db887e31b95ff356.slice. Oct 29 05:33:37.941097 kubelet[2402]: E1029 05:33:37.941033 2402 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 05:33:37.943997 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Oct 29 05:33:37.956489 kubelet[2402]: E1029 05:33:37.956450 2402 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 05:33:37.958678 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. 
Oct 29 05:33:37.959555 kubelet[2402]: I1029 05:33:37.959521 2402 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 05:33:37.960024 kubelet[2402]: E1029 05:33:37.959992 2402 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Oct 29 05:33:37.960955 kubelet[2402]: E1029 05:33:37.960925 2402 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 05:33:37.984494 kubelet[2402]: E1029 05:33:37.984457 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="400ms" Oct 29 05:33:38.084831 kubelet[2402]: I1029 05:33:38.084786 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 05:33:38.084831 kubelet[2402]: I1029 05:33:38.084834 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/737ab6952a2f8343db887e31b95ff356-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"737ab6952a2f8343db887e31b95ff356\") " pod="kube-system/kube-apiserver-localhost" Oct 29 05:33:38.084924 kubelet[2402]: I1029 05:33:38.084861 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/737ab6952a2f8343db887e31b95ff356-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"737ab6952a2f8343db887e31b95ff356\") " pod="kube-system/kube-apiserver-localhost" Oct 29 05:33:38.084950 kubelet[2402]: I1029 05:33:38.084915 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 05:33:38.084973 kubelet[2402]: I1029 05:33:38.084953 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 05:33:38.085008 kubelet[2402]: I1029 05:33:38.084977 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 29 05:33:38.085008 kubelet[2402]: I1029 05:33:38.084996 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/737ab6952a2f8343db887e31b95ff356-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"737ab6952a2f8343db887e31b95ff356\") " pod="kube-system/kube-apiserver-localhost" Oct 29 05:33:38.085138 kubelet[2402]: I1029 05:33:38.085115 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 05:33:38.085199 kubelet[2402]: I1029 05:33:38.085144 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 05:33:38.162264 kubelet[2402]: I1029 05:33:38.162208 2402 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 05:33:38.162664 kubelet[2402]: E1029 05:33:38.162605 2402 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Oct 29 05:33:38.245511 kubelet[2402]: E1029 05:33:38.245412 2402 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:38.246640 containerd[1601]: time="2025-10-29T05:33:38.246595587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:737ab6952a2f8343db887e31b95ff356,Namespace:kube-system,Attempt:0,}" Oct 29 05:33:38.260119 kubelet[2402]: E1029 05:33:38.260093 2402 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:38.260616 containerd[1601]: time="2025-10-29T05:33:38.260578769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 29 05:33:38.264210 kubelet[2402]: E1029 05:33:38.264170 2402 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:38.264571 containerd[1601]: time="2025-10-29T05:33:38.264482225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 29 05:33:38.321265 kubelet[2402]: E1029 05:33:38.321060 2402 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.106:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872df625acbd006 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-29 05:33:37.778380806 +0000 UTC m=+2.307889000,LastTimestamp:2025-10-29 05:33:37.778380806 +0000 UTC 
m=+2.307889000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 29 05:33:38.385987 kubelet[2402]: E1029 05:33:38.385932 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="800ms" Oct 29 05:33:38.564945 kubelet[2402]: I1029 05:33:38.564910 2402 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 05:33:38.565431 kubelet[2402]: E1029 05:33:38.565383 2402 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Oct 29 05:33:38.788633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2508916875.mount: Deactivated successfully. Oct 29 05:33:38.795166 containerd[1601]: time="2025-10-29T05:33:38.795126312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 05:33:38.799595 containerd[1601]: time="2025-10-29T05:33:38.799535765Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 29 05:33:38.800616 containerd[1601]: time="2025-10-29T05:33:38.800581071Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 05:33:38.801728 containerd[1601]: time="2025-10-29T05:33:38.801683017Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 05:33:38.802870 containerd[1601]: time="2025-10-29T05:33:38.802835067Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 05:33:38.803790 containerd[1601]: time="2025-10-29T05:33:38.803741439Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 29 05:33:38.804776 containerd[1601]: time="2025-10-29T05:33:38.804735177Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 29 05:33:38.807093 containerd[1601]: time="2025-10-29T05:33:38.806357506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 05:33:38.807987 containerd[1601]: time="2025-10-29T05:33:38.807945570Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 558.263335ms" Oct 29 05:33:38.808696 containerd[1601]: time="2025-10-29T05:33:38.808662309Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with 
image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 541.463826ms" Oct 29 05:33:38.810654 containerd[1601]: time="2025-10-29T05:33:38.810608275Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 546.787885ms" Oct 29 05:33:38.850161 containerd[1601]: time="2025-10-29T05:33:38.849803336Z" level=info msg="connecting to shim 9b0aa75b66a90dec015b3a200ff8edbd8de47522aea9a36af45f10eb83a45550" address="unix:///run/containerd/s/18fe48189c0e9f2f9cabef04fc42e1816102e08005678cf5278f9eef5f7ca044" namespace=k8s.io protocol=ttrpc version=3 Oct 29 05:33:38.855990 containerd[1601]: time="2025-10-29T05:33:38.855926783Z" level=info msg="connecting to shim 8dff9612b2f22a650a5aa6aa1ba7c7bd89ea5d1bb7affb406c73f2ec9aea3196" address="unix:///run/containerd/s/c9824f800ff99057012fca0973bd866c19cdada45ec994a13ca7ca741cd1b753" namespace=k8s.io protocol=ttrpc version=3 Oct 29 05:33:38.874118 containerd[1601]: time="2025-10-29T05:33:38.874038650Z" level=info msg="connecting to shim 4864165de3bee982985a0904e136533d5069347a505adb934dd7ce10b5be2d33" address="unix:///run/containerd/s/1be5d97f589bcc174e4032e07f738ba17e68820e4f3af90c1fc3e77c4ab39073" namespace=k8s.io protocol=ttrpc version=3 Oct 29 05:33:38.893274 systemd[1]: Started cri-containerd-9b0aa75b66a90dec015b3a200ff8edbd8de47522aea9a36af45f10eb83a45550.scope - libcontainer container 9b0aa75b66a90dec015b3a200ff8edbd8de47522aea9a36af45f10eb83a45550. Oct 29 05:33:38.897574 systemd[1]: Started cri-containerd-8dff9612b2f22a650a5aa6aa1ba7c7bd89ea5d1bb7affb406c73f2ec9aea3196.scope - libcontainer container 8dff9612b2f22a650a5aa6aa1ba7c7bd89ea5d1bb7affb406c73f2ec9aea3196. Oct 29 05:33:38.911539 systemd[1]: Started cri-containerd-4864165de3bee982985a0904e136533d5069347a505adb934dd7ce10b5be2d33.scope - libcontainer container 4864165de3bee982985a0904e136533d5069347a505adb934dd7ce10b5be2d33. 
Oct 29 05:33:38.964560 containerd[1601]: time="2025-10-29T05:33:38.964505551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:737ab6952a2f8343db887e31b95ff356,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dff9612b2f22a650a5aa6aa1ba7c7bd89ea5d1bb7affb406c73f2ec9aea3196\"" Oct 29 05:33:38.965504 kubelet[2402]: E1029 05:33:38.965459 2402 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:38.966647 containerd[1601]: time="2025-10-29T05:33:38.966505872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b0aa75b66a90dec015b3a200ff8edbd8de47522aea9a36af45f10eb83a45550\"" Oct 29 05:33:38.967889 kubelet[2402]: E1029 05:33:38.967851 2402 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:38.970251 containerd[1601]: time="2025-10-29T05:33:38.970206541Z" level=info msg="CreateContainer within sandbox \"8dff9612b2f22a650a5aa6aa1ba7c7bd89ea5d1bb7affb406c73f2ec9aea3196\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 29 05:33:38.973750 containerd[1601]: time="2025-10-29T05:33:38.973702839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4864165de3bee982985a0904e136533d5069347a505adb934dd7ce10b5be2d33\"" Oct 29 05:33:38.973896 containerd[1601]: time="2025-10-29T05:33:38.973865610Z" level=info msg="CreateContainer within sandbox \"9b0aa75b66a90dec015b3a200ff8edbd8de47522aea9a36af45f10eb83a45550\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 29 05:33:38.974905 kubelet[2402]: E1029 05:33:38.974878 2402 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:38.983839 containerd[1601]: time="2025-10-29T05:33:38.983799476Z" level=info msg="Container 16e15622143ca6a491dedac00b8543b331f20c1061e55143d89b1797d2742fa7: CDI devices from CRI Config.CDIDevices: []" Oct 29 05:33:38.991139 containerd[1601]: time="2025-10-29T05:33:38.991094641Z" level=info msg="CreateContainer within sandbox \"4864165de3bee982985a0904e136533d5069347a505adb934dd7ce10b5be2d33\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 29 05:33:38.995644 containerd[1601]: time="2025-10-29T05:33:38.995616729Z" level=info msg="Container 78d08ed96e437a88b31c47af0715e947e17ea9fec30c4121c66349dc51325ec4: CDI devices from CRI Config.CDIDevices: []" Oct 29 05:33:38.996061 containerd[1601]: time="2025-10-29T05:33:38.996034878Z" level=info msg="CreateContainer within sandbox \"9b0aa75b66a90dec015b3a200ff8edbd8de47522aea9a36af45f10eb83a45550\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"16e15622143ca6a491dedac00b8543b331f20c1061e55143d89b1797d2742fa7\"" Oct 29 05:33:38.996649 containerd[1601]: time="2025-10-29T05:33:38.996620917Z" level=info msg="StartContainer for \"16e15622143ca6a491dedac00b8543b331f20c1061e55143d89b1797d2742fa7\"" Oct 29 05:33:38.997847 containerd[1601]: time="2025-10-29T05:33:38.997803868Z" level=info msg="connecting to 
shim 16e15622143ca6a491dedac00b8543b331f20c1061e55143d89b1797d2742fa7" address="unix:///run/containerd/s/18fe48189c0e9f2f9cabef04fc42e1816102e08005678cf5278f9eef5f7ca044" protocol=ttrpc version=3 Oct 29 05:33:39.004319 containerd[1601]: time="2025-10-29T05:33:39.004281634Z" level=info msg="Container 83851334fa31a73f059b38138d4ced130b89eb954dfc24dd5e672e94d5056697: CDI devices from CRI Config.CDIDevices: []" Oct 29 05:33:39.007370 containerd[1601]: time="2025-10-29T05:33:39.007345500Z" level=info msg="CreateContainer within sandbox \"8dff9612b2f22a650a5aa6aa1ba7c7bd89ea5d1bb7affb406c73f2ec9aea3196\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"78d08ed96e437a88b31c47af0715e947e17ea9fec30c4121c66349dc51325ec4\"" Oct 29 05:33:39.007780 containerd[1601]: time="2025-10-29T05:33:39.007753568Z" level=info msg="StartContainer for \"78d08ed96e437a88b31c47af0715e947e17ea9fec30c4121c66349dc51325ec4\"" Oct 29 05:33:39.008966 containerd[1601]: time="2025-10-29T05:33:39.008938800Z" level=info msg="connecting to shim 78d08ed96e437a88b31c47af0715e947e17ea9fec30c4121c66349dc51325ec4" address="unix:///run/containerd/s/c9824f800ff99057012fca0973bd866c19cdada45ec994a13ca7ca741cd1b753" protocol=ttrpc version=3 Oct 29 05:33:39.013481 containerd[1601]: time="2025-10-29T05:33:39.013452953Z" level=info msg="CreateContainer within sandbox \"4864165de3bee982985a0904e136533d5069347a505adb934dd7ce10b5be2d33\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"83851334fa31a73f059b38138d4ced130b89eb954dfc24dd5e672e94d5056697\"" Oct 29 05:33:39.014743 containerd[1601]: time="2025-10-29T05:33:39.013791008Z" level=info msg="StartContainer for \"83851334fa31a73f059b38138d4ced130b89eb954dfc24dd5e672e94d5056697\"" Oct 29 05:33:39.014901 containerd[1601]: time="2025-10-29T05:33:39.014858735Z" level=info msg="connecting to shim 83851334fa31a73f059b38138d4ced130b89eb954dfc24dd5e672e94d5056697" address="unix:///run/containerd/s/1be5d97f589bcc174e4032e07f738ba17e68820e4f3af90c1fc3e77c4ab39073" protocol=ttrpc version=3 Oct 29 05:33:39.021252 systemd[1]: Started cri-containerd-16e15622143ca6a491dedac00b8543b331f20c1061e55143d89b1797d2742fa7.scope - libcontainer container 16e15622143ca6a491dedac00b8543b331f20c1061e55143d89b1797d2742fa7. Oct 29 05:33:39.031261 systemd[1]: Started cri-containerd-78d08ed96e437a88b31c47af0715e947e17ea9fec30c4121c66349dc51325ec4.scope - libcontainer container 78d08ed96e437a88b31c47af0715e947e17ea9fec30c4121c66349dc51325ec4. Oct 29 05:33:39.034766 systemd[1]: Started cri-containerd-83851334fa31a73f059b38138d4ced130b89eb954dfc24dd5e672e94d5056697.scope - libcontainer container 83851334fa31a73f059b38138d4ced130b89eb954dfc24dd5e672e94d5056697. 
Oct 29 05:33:39.098510 containerd[1601]: time="2025-10-29T05:33:39.098462674Z" level=info msg="StartContainer for \"16e15622143ca6a491dedac00b8543b331f20c1061e55143d89b1797d2742fa7\" returns successfully" Oct 29 05:33:39.114030 containerd[1601]: time="2025-10-29T05:33:39.113263875Z" level=info msg="StartContainer for \"78d08ed96e437a88b31c47af0715e947e17ea9fec30c4121c66349dc51325ec4\" returns successfully" Oct 29 05:33:39.122691 containerd[1601]: time="2025-10-29T05:33:39.122654092Z" level=info msg="StartContainer for \"83851334fa31a73f059b38138d4ced130b89eb954dfc24dd5e672e94d5056697\" returns successfully" Oct 29 05:33:39.367131 kubelet[2402]: I1029 05:33:39.366970 2402 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 05:33:39.822450 kubelet[2402]: E1029 05:33:39.822409 2402 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 05:33:39.822614 kubelet[2402]: E1029 05:33:39.822553 2402 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:39.824842 kubelet[2402]: E1029 05:33:39.824822 2402 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 05:33:39.824925 kubelet[2402]: E1029 05:33:39.824908 2402 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:39.827527 kubelet[2402]: E1029 05:33:39.827507 2402 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 05:33:39.827617 kubelet[2402]: E1029 05:33:39.827601 2402 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:40.650097 kubelet[2402]: E1029 05:33:40.650031 2402 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 29 05:33:40.774617 kubelet[2402]: I1029 05:33:40.774551 2402 apiserver.go:52] "Watching apiserver" Oct 29 05:33:40.784690 kubelet[2402]: I1029 05:33:40.784636 2402 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 29 05:33:40.833105 kubelet[2402]: I1029 05:33:40.832953 2402 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 05:33:40.835671 kubelet[2402]: I1029 05:33:40.834485 2402 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 05:33:40.835671 kubelet[2402]: I1029 05:33:40.834916 2402 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 05:33:40.855602 kubelet[2402]: E1029 05:33:40.855203 2402 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 29 05:33:40.855602 kubelet[2402]: E1029 05:33:40.855254 2402 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-localhost" Oct 29 05:33:40.855602 kubelet[2402]: E1029 05:33:40.855443 2402 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:40.855602 kubelet[2402]: E1029 05:33:40.855502 2402 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:40.884617 kubelet[2402]: I1029 05:33:40.884585 2402 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 05:33:40.886683 kubelet[2402]: E1029 05:33:40.886655 2402 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 29 05:33:40.886683 kubelet[2402]: I1029 05:33:40.886679 2402 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 05:33:40.888468 kubelet[2402]: E1029 05:33:40.888419 2402 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 29 05:33:40.888468 kubelet[2402]: I1029 05:33:40.888441 2402 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 05:33:40.890471 kubelet[2402]: E1029 05:33:40.890439 2402 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 29 05:33:41.833776 kubelet[2402]: I1029 05:33:41.833744 2402 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 05:33:41.839157 kubelet[2402]: E1029 05:33:41.839132 2402 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:42.836571 kubelet[2402]: E1029 05:33:42.836534 2402 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:42.905244 systemd[1]: Reload requested from client PID 2688 ('systemctl') (unit session-9.scope)... Oct 29 05:33:42.905262 systemd[1]: Reloading... Oct 29 05:33:42.989142 zram_generator::config[2732]: No configuration found. Oct 29 05:33:43.228684 systemd[1]: Reloading finished in 322 ms. Oct 29 05:33:43.255211 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 05:33:43.277474 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 05:33:43.277809 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 05:33:43.277881 systemd[1]: kubelet.service: Consumed 1.711s CPU time, 127.3M memory peak. Oct 29 05:33:43.280131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 05:33:43.551978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 29 05:33:43.570587 (kubelet)[2777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 05:33:43.619998 kubelet[2777]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 05:33:43.619998 kubelet[2777]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 05:33:43.620502 kubelet[2777]: I1029 05:33:43.620046 2777 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 05:33:43.626350 kubelet[2777]: I1029 05:33:43.626302 2777 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 29 05:33:43.626350 kubelet[2777]: I1029 05:33:43.626333 2777 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 05:33:43.626442 kubelet[2777]: I1029 05:33:43.626366 2777 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 29 05:33:43.626442 kubelet[2777]: I1029 05:33:43.626378 2777 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 29 05:33:43.626615 kubelet[2777]: I1029 05:33:43.626583 2777 server.go:956] "Client rotation is on, will bootstrap in background" Oct 29 05:33:43.629225 kubelet[2777]: I1029 05:33:43.629203 2777 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 29 05:33:43.631085 kubelet[2777]: I1029 05:33:43.631042 2777 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 05:33:43.634427 kubelet[2777]: I1029 05:33:43.634395 2777 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 05:33:43.639284 kubelet[2777]: I1029 05:33:43.639259 2777 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 29 05:33:43.639497 kubelet[2777]: I1029 05:33:43.639465 2777 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 05:33:43.639655 kubelet[2777]: I1029 05:33:43.639494 2777 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 05:33:43.639655 kubelet[2777]: I1029 05:33:43.639645 2777 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 05:33:43.639655 kubelet[2777]: I1029 05:33:43.639656 2777 container_manager_linux.go:306] "Creating device plugin manager" Oct 29 05:33:43.639798 kubelet[2777]: I1029 05:33:43.639681 2777 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 29 05:33:43.640408 kubelet[2777]: I1029 05:33:43.640374 2777 state_mem.go:36] "Initialized new in-memory state store" Oct 29 05:33:43.640575 kubelet[2777]: I1029 05:33:43.640545 2777 kubelet.go:475] "Attempting to sync node with API server" Oct 29 05:33:43.640575 kubelet[2777]: I1029 05:33:43.640570 2777 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 05:33:43.640634 kubelet[2777]: I1029 05:33:43.640595 2777 kubelet.go:387] "Adding apiserver pod source" Oct 29 05:33:43.640634 kubelet[2777]: I1029 05:33:43.640624 2777 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 05:33:43.641987 kubelet[2777]: I1029 05:33:43.641905 2777 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 05:33:43.642403 kubelet[2777]: I1029 05:33:43.642374 2777 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 29 05:33:43.642449 kubelet[2777]: I1029 05:33:43.642405 2777 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 29 05:33:43.648777 
kubelet[2777]: I1029 05:33:43.647654 2777 server.go:1262] "Started kubelet" Oct 29 05:33:43.649190 kubelet[2777]: I1029 05:33:43.649158 2777 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 05:33:43.649744 kubelet[2777]: I1029 05:33:43.649717 2777 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 05:33:43.651827 kubelet[2777]: I1029 05:33:43.650026 2777 server.go:310] "Adding debug handlers to kubelet server" Oct 29 05:33:43.653866 kubelet[2777]: I1029 05:33:43.653656 2777 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 05:33:43.653866 kubelet[2777]: I1029 05:33:43.653700 2777 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 29 05:33:43.653971 kubelet[2777]: I1029 05:33:43.653950 2777 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 05:33:43.654334 kubelet[2777]: E1029 05:33:43.654308 2777 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 05:33:43.655384 kubelet[2777]: I1029 05:33:43.655347 2777 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 05:33:43.658472 kubelet[2777]: I1029 05:33:43.658010 2777 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 29 05:33:43.658596 kubelet[2777]: I1029 05:33:43.658567 2777 factory.go:223] Registration of the systemd container factory successfully Oct 29 05:33:43.658705 kubelet[2777]: I1029 05:33:43.658671 2777 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 05:33:43.658840 kubelet[2777]: I1029 05:33:43.658821 2777 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 29 05:33:43.659180 kubelet[2777]: I1029 05:33:43.659164 2777 reconciler.go:29] "Reconciler: start to sync state" Oct 29 05:33:43.660202 kubelet[2777]: I1029 05:33:43.660172 2777 factory.go:223] Registration of the containerd container factory successfully Oct 29 05:33:43.664130 kubelet[2777]: I1029 05:33:43.664090 2777 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 29 05:33:43.672512 kubelet[2777]: I1029 05:33:43.672394 2777 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Oct 29 05:33:43.672512 kubelet[2777]: I1029 05:33:43.672419 2777 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 29 05:33:43.672512 kubelet[2777]: I1029 05:33:43.672440 2777 kubelet.go:2427] "Starting kubelet main sync loop" Oct 29 05:33:43.672512 kubelet[2777]: E1029 05:33:43.672489 2777 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 05:33:43.717497 kubelet[2777]: I1029 05:33:43.717463 2777 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 05:33:43.717671 kubelet[2777]: I1029 05:33:43.717655 2777 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 05:33:43.717763 kubelet[2777]: I1029 05:33:43.717732 2777 state_mem.go:36] "Initialized new in-memory state store" Oct 29 05:33:43.717917 kubelet[2777]: I1029 05:33:43.717882 2777 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 29 05:33:43.717917 kubelet[2777]: I1029 05:33:43.717893 2777 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 29 05:33:43.717917 kubelet[2777]: I1029 05:33:43.717912 2777 policy_none.go:49] "None policy: Start" Oct 29 05:33:43.717997 kubelet[2777]: I1029 05:33:43.717922 2777 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 29 05:33:43.717997 kubelet[2777]: I1029 05:33:43.717934 2777 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 29 05:33:43.718146 kubelet[2777]: I1029 05:33:43.718040 2777 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 29 05:33:43.718146 kubelet[2777]: I1029 05:33:43.718050 2777 policy_none.go:47] "Start" Oct 29 05:33:43.722741 kubelet[2777]: E1029 05:33:43.722707 2777 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 05:33:43.722929 kubelet[2777]: I1029 05:33:43.722904 2777 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 05:33:43.722929 kubelet[2777]: I1029 05:33:43.722921 2777 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 05:33:43.723245 kubelet[2777]: I1029 05:33:43.723171 2777 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 05:33:43.726093 kubelet[2777]: E1029 05:33:43.724954 2777 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 29 05:33:43.773225 kubelet[2777]: I1029 05:33:43.773171 2777 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 05:33:43.773431 kubelet[2777]: I1029 05:33:43.773408 2777 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 05:33:43.774068 kubelet[2777]: I1029 05:33:43.773514 2777 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 05:33:43.779379 kubelet[2777]: E1029 05:33:43.779322 2777 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 29 05:33:43.830958 kubelet[2777]: I1029 05:33:43.829724 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 05:33:43.836584 kubelet[2777]: I1029 05:33:43.836556 2777 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 29 05:33:43.836651 kubelet[2777]: I1029 05:33:43.836637 2777 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 05:33:43.860360 kubelet[2777]: I1029 05:33:43.860317 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/737ab6952a2f8343db887e31b95ff356-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"737ab6952a2f8343db887e31b95ff356\") " pod="kube-system/kube-apiserver-localhost" Oct 29 05:33:43.860439 kubelet[2777]: I1029 05:33:43.860403 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/737ab6952a2f8343db887e31b95ff356-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"737ab6952a2f8343db887e31b95ff356\") " pod="kube-system/kube-apiserver-localhost" Oct 29 05:33:43.860464 kubelet[2777]: I1029 05:33:43.860437 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 05:33:43.860487 kubelet[2777]: I1029 05:33:43.860467 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 05:33:43.860526 kubelet[2777]: I1029 05:33:43.860485 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 05:33:43.860526 kubelet[2777]: I1029 05:33:43.860507 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " 
pod="kube-system/kube-scheduler-localhost" Oct 29 05:33:43.860582 kubelet[2777]: I1029 05:33:43.860536 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/737ab6952a2f8343db887e31b95ff356-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"737ab6952a2f8343db887e31b95ff356\") " pod="kube-system/kube-apiserver-localhost" Oct 29 05:33:43.860582 kubelet[2777]: I1029 05:33:43.860572 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 05:33:43.860630 kubelet[2777]: I1029 05:33:43.860588 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 05:33:44.083665 kubelet[2777]: E1029 05:33:44.080249 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:44.085160 kubelet[2777]: E1029 05:33:44.080260 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:44.085420 kubelet[2777]: E1029 05:33:44.085378 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:44.641726 kubelet[2777]: I1029 05:33:44.641667 2777 apiserver.go:52] "Watching apiserver" Oct 29 05:33:44.659883 kubelet[2777]: I1029 05:33:44.659824 2777 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 29 05:33:44.686094 kubelet[2777]: I1029 05:33:44.685687 2777 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 05:33:44.686094 kubelet[2777]: I1029 05:33:44.685782 2777 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 05:33:44.687761 kubelet[2777]: E1029 05:33:44.687740 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:44.699307 kubelet[2777]: E1029 05:33:44.699256 2777 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 29 05:33:44.699717 kubelet[2777]: E1029 05:33:44.699669 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:44.700571 kubelet[2777]: E1029 05:33:44.700536 2777 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 29 05:33:44.701089 kubelet[2777]: E1029 05:33:44.700822 2777 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:44.718103 kubelet[2777]: I1029 05:33:44.716548 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.716472777 podStartE2EDuration="3.716472777s" podCreationTimestamp="2025-10-29 05:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 05:33:44.716425366 +0000 UTC m=+1.140546235" watchObservedRunningTime="2025-10-29 05:33:44.716472777 +0000 UTC m=+1.140593636" Oct 29 05:33:44.738018 kubelet[2777]: I1029 05:33:44.737946 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.737802442 podStartE2EDuration="1.737802442s" podCreationTimestamp="2025-10-29 05:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 05:33:44.728099907 +0000 UTC m=+1.152220776" watchObservedRunningTime="2025-10-29 05:33:44.737802442 +0000 UTC m=+1.161923301" Oct 29 05:33:44.752087 kubelet[2777]: I1029 05:33:44.752031 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.752017867 podStartE2EDuration="1.752017867s" podCreationTimestamp="2025-10-29 05:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 05:33:44.73826493 +0000 UTC m=+1.162385799" watchObservedRunningTime="2025-10-29 05:33:44.752017867 +0000 UTC m=+1.176138726" Oct 29 05:33:45.687600 kubelet[2777]: E1029 05:33:45.687541 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:45.688148 kubelet[2777]: E1029 05:33:45.687694 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:45.688148 kubelet[2777]: E1029 05:33:45.687742 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:46.688793 kubelet[2777]: E1029 05:33:46.688715 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:47.711483 update_engine[1590]: I20251029 05:33:47.710449 1590 update_attempter.cc:509] Updating boot flags... Oct 29 05:33:47.855945 kubelet[2777]: I1029 05:33:47.855895 2777 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 29 05:33:47.856379 containerd[1601]: time="2025-10-29T05:33:47.856269243Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 29 05:33:47.856637 kubelet[2777]: I1029 05:33:47.856604 2777 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 29 05:33:48.870618 systemd[1]: Created slice kubepods-besteffort-pod265d009a_7390_4e81_9d04_925482b0ce46.slice - libcontainer container kubepods-besteffort-pod265d009a_7390_4e81_9d04_925482b0ce46.slice. Oct 29 05:33:48.893135 kubelet[2777]: I1029 05:33:48.893089 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/265d009a-7390-4e81-9d04-925482b0ce46-xtables-lock\") pod \"kube-proxy-b9m9t\" (UID: \"265d009a-7390-4e81-9d04-925482b0ce46\") " pod="kube-system/kube-proxy-b9m9t" Oct 29 05:33:48.893135 kubelet[2777]: I1029 05:33:48.893133 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/265d009a-7390-4e81-9d04-925482b0ce46-lib-modules\") pod \"kube-proxy-b9m9t\" (UID: \"265d009a-7390-4e81-9d04-925482b0ce46\") " pod="kube-system/kube-proxy-b9m9t" Oct 29 05:33:48.893516 kubelet[2777]: I1029 05:33:48.893151 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/265d009a-7390-4e81-9d04-925482b0ce46-kube-proxy\") pod \"kube-proxy-b9m9t\" (UID: \"265d009a-7390-4e81-9d04-925482b0ce46\") " pod="kube-system/kube-proxy-b9m9t" Oct 29 05:33:48.893516 kubelet[2777]: I1029 05:33:48.893165 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn5md\" (UniqueName: \"kubernetes.io/projected/265d009a-7390-4e81-9d04-925482b0ce46-kube-api-access-wn5md\") pod \"kube-proxy-b9m9t\" (UID: \"265d009a-7390-4e81-9d04-925482b0ce46\") " pod="kube-system/kube-proxy-b9m9t" Oct 29 05:33:49.026456 systemd[1]: Created slice kubepods-besteffort-pod69c52f3e_7162_4059_95b5_b95e8f11607d.slice - libcontainer container kubepods-besteffort-pod69c52f3e_7162_4059_95b5_b95e8f11607d.slice. 
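Annotation: the "Created slice" entries here show each BestEffort pod landing in a systemd slice named kubepods-besteffort-pod<UID>.slice, with the dashes in the pod UID swapped for underscores (UID 265d009a-7390-4e81-9d04-925482b0ce46 becomes kubepods-besteffort-pod265d009a_7390_4e81_9d04_925482b0ce46.slice). A hypothetical helper that reproduces just that naming, useful when matching cgroups back to pods; it is derived from the log text, not from kubelet code:

    def pod_slice_name(pod_uid, qos_class="besteffort"):
        # Dashes in the pod UID become underscores inside the slice name,
        # as seen in the "Created slice" entries above.
        return "kubepods-{}-pod{}.slice".format(qos_class, pod_uid.replace("-", "_"))

    print(pod_slice_name("265d009a-7390-4e81-9d04-925482b0ce46"))
    # kubepods-besteffort-pod265d009a_7390_4e81_9d04_925482b0ce46.slice
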
Oct 29 05:33:49.094941 kubelet[2777]: I1029 05:33:49.094884 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/69c52f3e-7162-4059-95b5-b95e8f11607d-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-tff6q\" (UID: \"69c52f3e-7162-4059-95b5-b95e8f11607d\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-tff6q" Oct 29 05:33:49.094941 kubelet[2777]: I1029 05:33:49.094938 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw8kk\" (UniqueName: \"kubernetes.io/projected/69c52f3e-7162-4059-95b5-b95e8f11607d-kube-api-access-pw8kk\") pod \"tigera-operator-65cdcdfd6d-tff6q\" (UID: \"69c52f3e-7162-4059-95b5-b95e8f11607d\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-tff6q" Oct 29 05:33:49.183258 kubelet[2777]: E1029 05:33:49.183161 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:49.185286 containerd[1601]: time="2025-10-29T05:33:49.185237345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b9m9t,Uid:265d009a-7390-4e81-9d04-925482b0ce46,Namespace:kube-system,Attempt:0,}" Oct 29 05:33:49.212819 containerd[1601]: time="2025-10-29T05:33:49.212757556Z" level=info msg="connecting to shim 1943183f37271e0b1064751de597dc3a2ddd85f86de7577da4a403fbce34f054" address="unix:///run/containerd/s/f1ed1c3d9430f97156cfead8bb444e9e6ebc8b7cc18b3f30490de3ffb063d17a" namespace=k8s.io protocol=ttrpc version=3 Oct 29 05:33:49.242223 systemd[1]: Started cri-containerd-1943183f37271e0b1064751de597dc3a2ddd85f86de7577da4a403fbce34f054.scope - libcontainer container 1943183f37271e0b1064751de597dc3a2ddd85f86de7577da4a403fbce34f054. 
Oct 29 05:33:49.269830 containerd[1601]: time="2025-10-29T05:33:49.269790079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b9m9t,Uid:265d009a-7390-4e81-9d04-925482b0ce46,Namespace:kube-system,Attempt:0,} returns sandbox id \"1943183f37271e0b1064751de597dc3a2ddd85f86de7577da4a403fbce34f054\"" Oct 29 05:33:49.270716 kubelet[2777]: E1029 05:33:49.270674 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:49.283511 containerd[1601]: time="2025-10-29T05:33:49.283471872Z" level=info msg="CreateContainer within sandbox \"1943183f37271e0b1064751de597dc3a2ddd85f86de7577da4a403fbce34f054\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 29 05:33:49.294254 containerd[1601]: time="2025-10-29T05:33:49.294203351Z" level=info msg="Container 89cd53002deda0b885905cd77b17ffa56d162fc026453101e1351c8faea7a5d3: CDI devices from CRI Config.CDIDevices: []" Oct 29 05:33:49.303052 containerd[1601]: time="2025-10-29T05:33:49.303012432Z" level=info msg="CreateContainer within sandbox \"1943183f37271e0b1064751de597dc3a2ddd85f86de7577da4a403fbce34f054\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"89cd53002deda0b885905cd77b17ffa56d162fc026453101e1351c8faea7a5d3\"" Oct 29 05:33:49.303710 containerd[1601]: time="2025-10-29T05:33:49.303678964Z" level=info msg="StartContainer for \"89cd53002deda0b885905cd77b17ffa56d162fc026453101e1351c8faea7a5d3\"" Oct 29 05:33:49.304954 containerd[1601]: time="2025-10-29T05:33:49.304919441Z" level=info msg="connecting to shim 89cd53002deda0b885905cd77b17ffa56d162fc026453101e1351c8faea7a5d3" address="unix:///run/containerd/s/f1ed1c3d9430f97156cfead8bb444e9e6ebc8b7cc18b3f30490de3ffb063d17a" protocol=ttrpc version=3 Oct 29 05:33:49.327223 systemd[1]: Started cri-containerd-89cd53002deda0b885905cd77b17ffa56d162fc026453101e1351c8faea7a5d3.scope - libcontainer container 89cd53002deda0b885905cd77b17ffa56d162fc026453101e1351c8faea7a5d3. Oct 29 05:33:49.334548 containerd[1601]: time="2025-10-29T05:33:49.334486597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-tff6q,Uid:69c52f3e-7162-4059-95b5-b95e8f11607d,Namespace:tigera-operator,Attempt:0,}" Oct 29 05:33:49.361132 containerd[1601]: time="2025-10-29T05:33:49.360988541Z" level=info msg="connecting to shim eb893dde7abba502eeec8e6d596217524176c3448a3ce529717f4366f57a1c4c" address="unix:///run/containerd/s/2abe6e87817e0d813e9160ddb318dcd484c65c702bdb81c4fdb9ef7a074c1eeb" namespace=k8s.io protocol=ttrpc version=3 Oct 29 05:33:49.382514 containerd[1601]: time="2025-10-29T05:33:49.382447583Z" level=info msg="StartContainer for \"89cd53002deda0b885905cd77b17ffa56d162fc026453101e1351c8faea7a5d3\" returns successfully" Oct 29 05:33:49.395271 systemd[1]: Started cri-containerd-eb893dde7abba502eeec8e6d596217524176c3448a3ce529717f4366f57a1c4c.scope - libcontainer container eb893dde7abba502eeec8e6d596217524176c3448a3ce529717f4366f57a1c4c. 
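Annotation: the containerd entries above trace the CRI sequence for kube-proxy-b9m9t: RunPodSandbox returns a 64-hex-character sandbox id, CreateContainer is issued within that sandbox and returns a container id, and StartContainer runs it over the same shim socket. A toy Python extractor for those ids, tailored to the backslash-escaped quoting seen in this journal text (a log-format assumption, not a CRI API):

    import re

    # Matches 'returns sandbox id \"<64 hex>\"' / 'returns container id \"<64 hex>\"'.
    ID_RE = re.compile(r'returns (sandbox|container) id \\"([0-9a-f]{64})\\"')

    sample = (r'msg="RunPodSandbox for ... returns sandbox id '
              r'\"1943183f37271e0b1064751de597dc3a2ddd85f86de7577da4a403fbce34f054\""')

    print(ID_RE.findall(sample))
    # [('sandbox', '1943183f37271e0b1064751de597dc3a2ddd85f86de7577da4a403fbce34f054')]
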
Oct 29 05:33:49.448282 containerd[1601]: time="2025-10-29T05:33:49.448154974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-tff6q,Uid:69c52f3e-7162-4059-95b5-b95e8f11607d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"eb893dde7abba502eeec8e6d596217524176c3448a3ce529717f4366f57a1c4c\"" Oct 29 05:33:49.450260 containerd[1601]: time="2025-10-29T05:33:49.450217076Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 29 05:33:49.700660 kubelet[2777]: E1029 05:33:49.700523 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:49.937969 kubelet[2777]: E1029 05:33:49.937930 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:49.950565 kubelet[2777]: I1029 05:33:49.950514 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b9m9t" podStartSLOduration=1.950483244 podStartE2EDuration="1.950483244s" podCreationTimestamp="2025-10-29 05:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 05:33:49.710708804 +0000 UTC m=+6.134829673" watchObservedRunningTime="2025-10-29 05:33:49.950483244 +0000 UTC m=+6.374604103" Oct 29 05:33:50.012418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4072582568.mount: Deactivated successfully. Oct 29 05:33:50.702055 kubelet[2777]: E1029 05:33:50.702013 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:51.617968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2389314232.mount: Deactivated successfully. 
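Annotation: the pod_startup_latency_tracker entries report durations that line up with simple timestamp arithmetic: for kube-proxy-b9m9t, watchObservedRunningTime (05:33:49.950483244) minus podCreationTimestamp (05:33:48) gives the logged podStartE2EDuration of 1.950483244s. A quick consistency check in Python, using timestamps copied from the log (plain arithmetic, not the tracker's implementation):

    from datetime import datetime, timezone

    # Timestamps from the kube-proxy-b9m9t entry above, nanoseconds truncated to microseconds.
    created  = datetime(2025, 10, 29, 5, 33, 48, 0,      tzinfo=timezone.utc)  # podCreationTimestamp
    observed = datetime(2025, 10, 29, 5, 33, 49, 950483, tzinfo=timezone.utc)  # watchObservedRunningTime

    print((observed - created).total_seconds())
    # 1.950483, consistent with podStartE2EDuration="1.950483244s"
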
Oct 29 05:33:51.703915 kubelet[2777]: E1029 05:33:51.703868 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:52.872088 containerd[1601]: time="2025-10-29T05:33:52.872005359Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:52.872965 containerd[1601]: time="2025-10-29T05:33:52.872929685Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 29 05:33:52.874259 containerd[1601]: time="2025-10-29T05:33:52.874226726Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:52.876731 containerd[1601]: time="2025-10-29T05:33:52.876683558Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:33:52.877614 containerd[1601]: time="2025-10-29T05:33:52.877571576Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.427306629s" Oct 29 05:33:52.877614 containerd[1601]: time="2025-10-29T05:33:52.877610379Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 29 05:33:52.881874 containerd[1601]: time="2025-10-29T05:33:52.881832407Z" level=info msg="CreateContainer within sandbox \"eb893dde7abba502eeec8e6d596217524176c3448a3ce529717f4366f57a1c4c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 29 05:33:52.890378 containerd[1601]: time="2025-10-29T05:33:52.890335965Z" level=info msg="Container 42215a264f9a310d24a16b4356ac66905413df831beee2e42fc79c3afd6020f2: CDI devices from CRI Config.CDIDevices: []" Oct 29 05:33:52.901114 containerd[1601]: time="2025-10-29T05:33:52.901055739Z" level=info msg="CreateContainer within sandbox \"eb893dde7abba502eeec8e6d596217524176c3448a3ce529717f4366f57a1c4c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"42215a264f9a310d24a16b4356ac66905413df831beee2e42fc79c3afd6020f2\"" Oct 29 05:33:52.901679 containerd[1601]: time="2025-10-29T05:33:52.901630675Z" level=info msg="StartContainer for \"42215a264f9a310d24a16b4356ac66905413df831beee2e42fc79c3afd6020f2\"" Oct 29 05:33:52.902477 containerd[1601]: time="2025-10-29T05:33:52.902452308Z" level=info msg="connecting to shim 42215a264f9a310d24a16b4356ac66905413df831beee2e42fc79c3afd6020f2" address="unix:///run/containerd/s/2abe6e87817e0d813e9160ddb318dcd484c65c702bdb81c4fdb9ef7a074c1eeb" protocol=ttrpc version=3 Oct 29 05:33:52.962254 systemd[1]: Started cri-containerd-42215a264f9a310d24a16b4356ac66905413df831beee2e42fc79c3afd6020f2.scope - libcontainer container 42215a264f9a310d24a16b4356ac66905413df831beee2e42fc79c3afd6020f2. 
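Annotation: the "Pulled image" entry below records the same image twice, once by repo tag (quay.io/tigera/operator:v1.38.7) and once by repo digest (the @sha256 form). A toy splitter that handles exactly these two reference shapes, offered as an illustration only and not a general OCI reference parser (registries with ports, for example, would break the tag branch):

    def split_ref(ref):
        # "repo@sha256:digest" -> digest form; "repo:tag" -> tag form.
        if "@" in ref:
            repo, digest = ref.split("@", 1)
            return repo, None, digest
        repo, _, tag = ref.rpartition(":")
        return repo, tag, None

    print(split_ref("quay.io/tigera/operator:v1.38.7"))
    print(split_ref("quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e"))
    # ('quay.io/tigera/operator', 'v1.38.7', None)
    # ('quay.io/tigera/operator', None, 'sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e')
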
Oct 29 05:33:52.993441 containerd[1601]: time="2025-10-29T05:33:52.993378290Z" level=info msg="StartContainer for \"42215a264f9a310d24a16b4356ac66905413df831beee2e42fc79c3afd6020f2\" returns successfully" Oct 29 05:33:53.050010 kubelet[2777]: E1029 05:33:53.049964 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:53.709665 kubelet[2777]: E1029 05:33:53.709624 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:53.718892 kubelet[2777]: I1029 05:33:53.718818 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-tff6q" podStartSLOduration=2.290235995 podStartE2EDuration="5.718796526s" podCreationTimestamp="2025-10-29 05:33:48 +0000 UTC" firstStartedPulling="2025-10-29 05:33:49.449779157 +0000 UTC m=+5.873900016" lastFinishedPulling="2025-10-29 05:33:52.878339688 +0000 UTC m=+9.302460547" observedRunningTime="2025-10-29 05:33:53.71869834 +0000 UTC m=+10.142819209" watchObservedRunningTime="2025-10-29 05:33:53.718796526 +0000 UTC m=+10.142917385" Oct 29 05:33:54.411790 kubelet[2777]: E1029 05:33:54.411747 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:54.711938 kubelet[2777]: E1029 05:33:54.711791 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:33:58.177139 sudo[1835]: pam_unix(sudo:session): session closed for user root Oct 29 05:33:58.181230 sshd[1834]: Connection closed by 10.0.0.1 port 40088 Oct 29 05:33:58.179738 sshd-session[1831]: pam_unix(sshd:session): session closed for user core Oct 29 05:33:58.187015 systemd[1]: sshd@8-10.0.0.106:22-10.0.0.1:40088.service: Deactivated successfully. Oct 29 05:33:58.193161 systemd[1]: session-9.scope: Deactivated successfully. Oct 29 05:33:58.194055 systemd[1]: session-9.scope: Consumed 5.609s CPU time, 227.2M memory peak. Oct 29 05:33:58.199991 systemd-logind[1587]: Session 9 logged out. Waiting for processes to exit. Oct 29 05:33:58.201342 systemd-logind[1587]: Removed session 9. Oct 29 05:34:02.431574 systemd[1]: Created slice kubepods-besteffort-pod0a088882_daa6_40a1_87c9_bcf3275a6f67.slice - libcontainer container kubepods-besteffort-pod0a088882_daa6_40a1_87c9_bcf3275a6f67.slice. 
Oct 29 05:34:02.482505 kubelet[2777]: I1029 05:34:02.482436 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0a088882-daa6-40a1-87c9-bcf3275a6f67-typha-certs\") pod \"calico-typha-f668bffc5-8wr2k\" (UID: \"0a088882-daa6-40a1-87c9-bcf3275a6f67\") " pod="calico-system/calico-typha-f668bffc5-8wr2k" Oct 29 05:34:02.482505 kubelet[2777]: I1029 05:34:02.482486 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvzwf\" (UniqueName: \"kubernetes.io/projected/0a088882-daa6-40a1-87c9-bcf3275a6f67-kube-api-access-pvzwf\") pod \"calico-typha-f668bffc5-8wr2k\" (UID: \"0a088882-daa6-40a1-87c9-bcf3275a6f67\") " pod="calico-system/calico-typha-f668bffc5-8wr2k" Oct 29 05:34:02.482505 kubelet[2777]: I1029 05:34:02.482508 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a088882-daa6-40a1-87c9-bcf3275a6f67-tigera-ca-bundle\") pod \"calico-typha-f668bffc5-8wr2k\" (UID: \"0a088882-daa6-40a1-87c9-bcf3275a6f67\") " pod="calico-system/calico-typha-f668bffc5-8wr2k" Oct 29 05:34:02.526244 systemd[1]: Created slice kubepods-besteffort-pod721e2e49_a817_4b2e_9766_cd09052a36d8.slice - libcontainer container kubepods-besteffort-pod721e2e49_a817_4b2e_9766_cd09052a36d8.slice. Oct 29 05:34:02.583271 kubelet[2777]: I1029 05:34:02.583164 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/721e2e49-a817-4b2e-9766-cd09052a36d8-lib-modules\") pod \"calico-node-jf7l5\" (UID: \"721e2e49-a817-4b2e-9766-cd09052a36d8\") " pod="calico-system/calico-node-jf7l5" Oct 29 05:34:02.583271 kubelet[2777]: I1029 05:34:02.583247 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/721e2e49-a817-4b2e-9766-cd09052a36d8-policysync\") pod \"calico-node-jf7l5\" (UID: \"721e2e49-a817-4b2e-9766-cd09052a36d8\") " pod="calico-system/calico-node-jf7l5" Oct 29 05:34:02.583271 kubelet[2777]: I1029 05:34:02.583276 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/721e2e49-a817-4b2e-9766-cd09052a36d8-xtables-lock\") pod \"calico-node-jf7l5\" (UID: \"721e2e49-a817-4b2e-9766-cd09052a36d8\") " pod="calico-system/calico-node-jf7l5" Oct 29 05:34:02.583271 kubelet[2777]: I1029 05:34:02.583296 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/721e2e49-a817-4b2e-9766-cd09052a36d8-cni-log-dir\") pod \"calico-node-jf7l5\" (UID: \"721e2e49-a817-4b2e-9766-cd09052a36d8\") " pod="calico-system/calico-node-jf7l5" Oct 29 05:34:02.583611 kubelet[2777]: I1029 05:34:02.583311 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/721e2e49-a817-4b2e-9766-cd09052a36d8-tigera-ca-bundle\") pod \"calico-node-jf7l5\" (UID: \"721e2e49-a817-4b2e-9766-cd09052a36d8\") " pod="calico-system/calico-node-jf7l5" Oct 29 05:34:02.583611 kubelet[2777]: I1029 05:34:02.583328 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/721e2e49-a817-4b2e-9766-cd09052a36d8-var-lib-calico\") pod \"calico-node-jf7l5\" (UID: \"721e2e49-a817-4b2e-9766-cd09052a36d8\") " pod="calico-system/calico-node-jf7l5" Oct 29 05:34:02.583611 kubelet[2777]: I1029 05:34:02.583361 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/721e2e49-a817-4b2e-9766-cd09052a36d8-var-run-calico\") pod \"calico-node-jf7l5\" (UID: \"721e2e49-a817-4b2e-9766-cd09052a36d8\") " pod="calico-system/calico-node-jf7l5" Oct 29 05:34:02.583611 kubelet[2777]: I1029 05:34:02.583375 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvhg2\" (UniqueName: \"kubernetes.io/projected/721e2e49-a817-4b2e-9766-cd09052a36d8-kube-api-access-pvhg2\") pod \"calico-node-jf7l5\" (UID: \"721e2e49-a817-4b2e-9766-cd09052a36d8\") " pod="calico-system/calico-node-jf7l5" Oct 29 05:34:02.583611 kubelet[2777]: I1029 05:34:02.583391 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/721e2e49-a817-4b2e-9766-cd09052a36d8-cni-net-dir\") pod \"calico-node-jf7l5\" (UID: \"721e2e49-a817-4b2e-9766-cd09052a36d8\") " pod="calico-system/calico-node-jf7l5" Oct 29 05:34:02.583851 kubelet[2777]: I1029 05:34:02.583408 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/721e2e49-a817-4b2e-9766-cd09052a36d8-flexvol-driver-host\") pod \"calico-node-jf7l5\" (UID: \"721e2e49-a817-4b2e-9766-cd09052a36d8\") " pod="calico-system/calico-node-jf7l5" Oct 29 05:34:02.583851 kubelet[2777]: I1029 05:34:02.583459 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/721e2e49-a817-4b2e-9766-cd09052a36d8-node-certs\") pod \"calico-node-jf7l5\" (UID: \"721e2e49-a817-4b2e-9766-cd09052a36d8\") " pod="calico-system/calico-node-jf7l5" Oct 29 05:34:02.583851 kubelet[2777]: I1029 05:34:02.583492 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/721e2e49-a817-4b2e-9766-cd09052a36d8-cni-bin-dir\") pod \"calico-node-jf7l5\" (UID: \"721e2e49-a817-4b2e-9766-cd09052a36d8\") " pod="calico-system/calico-node-jf7l5" Oct 29 05:34:02.692822 kubelet[2777]: E1029 05:34:02.692658 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.692822 kubelet[2777]: W1029 05:34:02.692682 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.692822 kubelet[2777]: E1029 05:34:02.692720 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:02.708113 kubelet[2777]: E1029 05:34:02.707770 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.708403 kubelet[2777]: W1029 05:34:02.708345 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.708403 kubelet[2777]: E1029 05:34:02.708372 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.710648 kubelet[2777]: E1029 05:34:02.710401 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qkktn" podUID="11b4791e-97d9-4b28-b964-d007606a7e18" Oct 29 05:34:02.743490 kubelet[2777]: E1029 05:34:02.743438 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:02.746049 containerd[1601]: time="2025-10-29T05:34:02.745982612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f668bffc5-8wr2k,Uid:0a088882-daa6-40a1-87c9-bcf3275a6f67,Namespace:calico-system,Attempt:0,}" Oct 29 05:34:02.760105 kubelet[2777]: E1029 05:34:02.759933 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.760105 kubelet[2777]: W1029 05:34:02.759959 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.760105 kubelet[2777]: E1029 05:34:02.759982 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.763360 kubelet[2777]: E1029 05:34:02.763324 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.763360 kubelet[2777]: W1029 05:34:02.763347 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.763513 kubelet[2777]: E1029 05:34:02.763372 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:02.764940 kubelet[2777]: E1029 05:34:02.764912 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.764940 kubelet[2777]: W1029 05:34:02.764928 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.764940 kubelet[2777]: E1029 05:34:02.764940 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.765288 kubelet[2777]: E1029 05:34:02.765262 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.765288 kubelet[2777]: W1029 05:34:02.765274 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.765288 kubelet[2777]: E1029 05:34:02.765284 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.765552 kubelet[2777]: E1029 05:34:02.765497 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.765552 kubelet[2777]: W1029 05:34:02.765505 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.765552 kubelet[2777]: E1029 05:34:02.765519 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.765706 kubelet[2777]: E1029 05:34:02.765690 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.765706 kubelet[2777]: W1029 05:34:02.765702 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.765764 kubelet[2777]: E1029 05:34:02.765711 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.765889 kubelet[2777]: E1029 05:34:02.765873 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.765889 kubelet[2777]: W1029 05:34:02.765884 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.766097 kubelet[2777]: E1029 05:34:02.765892 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:02.766097 kubelet[2777]: E1029 05:34:02.766054 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.766097 kubelet[2777]: W1029 05:34:02.766061 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.766097 kubelet[2777]: E1029 05:34:02.766090 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.766692 kubelet[2777]: E1029 05:34:02.766266 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.766692 kubelet[2777]: W1029 05:34:02.766274 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.766692 kubelet[2777]: E1029 05:34:02.766282 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.766692 kubelet[2777]: E1029 05:34:02.766447 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.766692 kubelet[2777]: W1029 05:34:02.766455 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.766692 kubelet[2777]: E1029 05:34:02.766463 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.766692 kubelet[2777]: E1029 05:34:02.766619 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.766692 kubelet[2777]: W1029 05:34:02.766626 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.766692 kubelet[2777]: E1029 05:34:02.766634 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.766961 kubelet[2777]: E1029 05:34:02.766801 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.766961 kubelet[2777]: W1029 05:34:02.766810 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.766961 kubelet[2777]: E1029 05:34:02.766823 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:02.767094 kubelet[2777]: E1029 05:34:02.766986 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.767094 kubelet[2777]: W1029 05:34:02.766994 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.767094 kubelet[2777]: E1029 05:34:02.767003 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.767210 kubelet[2777]: E1029 05:34:02.767186 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.767210 kubelet[2777]: W1029 05:34:02.767201 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.767210 kubelet[2777]: E1029 05:34:02.767210 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.767384 kubelet[2777]: E1029 05:34:02.767365 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.767384 kubelet[2777]: W1029 05:34:02.767376 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.767384 kubelet[2777]: E1029 05:34:02.767384 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.767564 kubelet[2777]: E1029 05:34:02.767544 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.767564 kubelet[2777]: W1029 05:34:02.767555 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.767564 kubelet[2777]: E1029 05:34:02.767563 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.767763 kubelet[2777]: E1029 05:34:02.767747 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.767763 kubelet[2777]: W1029 05:34:02.767758 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.767812 kubelet[2777]: E1029 05:34:02.767766 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:02.767951 kubelet[2777]: E1029 05:34:02.767936 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.767951 kubelet[2777]: W1029 05:34:02.767947 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.767993 kubelet[2777]: E1029 05:34:02.767955 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.768153 kubelet[2777]: E1029 05:34:02.768136 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.768153 kubelet[2777]: W1029 05:34:02.768147 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.768213 kubelet[2777]: E1029 05:34:02.768156 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.768342 kubelet[2777]: E1029 05:34:02.768327 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.768342 kubelet[2777]: W1029 05:34:02.768338 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.768390 kubelet[2777]: E1029 05:34:02.768346 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.786214 kubelet[2777]: E1029 05:34:02.786172 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.786214 kubelet[2777]: W1029 05:34:02.786200 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.786214 kubelet[2777]: E1029 05:34:02.786227 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:02.786414 kubelet[2777]: I1029 05:34:02.786279 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11b4791e-97d9-4b28-b964-d007606a7e18-kubelet-dir\") pod \"csi-node-driver-qkktn\" (UID: \"11b4791e-97d9-4b28-b964-d007606a7e18\") " pod="calico-system/csi-node-driver-qkktn" Oct 29 05:34:02.787113 kubelet[2777]: E1029 05:34:02.786703 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.787113 kubelet[2777]: W1029 05:34:02.786724 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.787113 kubelet[2777]: E1029 05:34:02.786736 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.787113 kubelet[2777]: I1029 05:34:02.786760 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/11b4791e-97d9-4b28-b964-d007606a7e18-registration-dir\") pod \"csi-node-driver-qkktn\" (UID: \"11b4791e-97d9-4b28-b964-d007606a7e18\") " pod="calico-system/csi-node-driver-qkktn" Oct 29 05:34:02.787237 kubelet[2777]: E1029 05:34:02.787115 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.787237 kubelet[2777]: W1029 05:34:02.787166 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.787237 kubelet[2777]: E1029 05:34:02.787187 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.787445 kubelet[2777]: E1029 05:34:02.787391 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.787445 kubelet[2777]: W1029 05:34:02.787439 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.787516 kubelet[2777]: E1029 05:34:02.787449 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.787708 kubelet[2777]: E1029 05:34:02.787632 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.787708 kubelet[2777]: W1029 05:34:02.787645 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.787708 kubelet[2777]: E1029 05:34:02.787653 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:02.787708 kubelet[2777]: I1029 05:34:02.787687 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/11b4791e-97d9-4b28-b964-d007606a7e18-socket-dir\") pod \"csi-node-driver-qkktn\" (UID: \"11b4791e-97d9-4b28-b964-d007606a7e18\") " pod="calico-system/csi-node-driver-qkktn" Oct 29 05:34:02.787984 kubelet[2777]: E1029 05:34:02.787919 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.787984 kubelet[2777]: W1029 05:34:02.787933 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.787984 kubelet[2777]: E1029 05:34:02.787948 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.787984 kubelet[2777]: I1029 05:34:02.787969 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq6t7\" (UniqueName: \"kubernetes.io/projected/11b4791e-97d9-4b28-b964-d007606a7e18-kube-api-access-wq6t7\") pod \"csi-node-driver-qkktn\" (UID: \"11b4791e-97d9-4b28-b964-d007606a7e18\") " pod="calico-system/csi-node-driver-qkktn" Oct 29 05:34:02.788490 kubelet[2777]: E1029 05:34:02.788265 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.788490 kubelet[2777]: W1029 05:34:02.788287 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.788490 kubelet[2777]: E1029 05:34:02.788296 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.788490 kubelet[2777]: I1029 05:34:02.788377 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/11b4791e-97d9-4b28-b964-d007606a7e18-varrun\") pod \"csi-node-driver-qkktn\" (UID: \"11b4791e-97d9-4b28-b964-d007606a7e18\") " pod="calico-system/csi-node-driver-qkktn" Oct 29 05:34:02.788775 kubelet[2777]: E1029 05:34:02.788613 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.788775 kubelet[2777]: W1029 05:34:02.788627 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.788775 kubelet[2777]: E1029 05:34:02.788636 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:02.789041 kubelet[2777]: E1029 05:34:02.788862 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.789041 kubelet[2777]: W1029 05:34:02.788879 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.789041 kubelet[2777]: E1029 05:34:02.788887 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.789160 kubelet[2777]: E1029 05:34:02.789120 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.789160 kubelet[2777]: W1029 05:34:02.789129 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.789160 kubelet[2777]: E1029 05:34:02.789138 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.789642 kubelet[2777]: E1029 05:34:02.789338 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.789642 kubelet[2777]: W1029 05:34:02.789351 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.789642 kubelet[2777]: E1029 05:34:02.789360 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.789744 kubelet[2777]: E1029 05:34:02.789718 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.789795 kubelet[2777]: W1029 05:34:02.789747 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.789835 kubelet[2777]: E1029 05:34:02.789780 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.790384 kubelet[2777]: E1029 05:34:02.790356 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.790384 kubelet[2777]: W1029 05:34:02.790373 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.790384 kubelet[2777]: E1029 05:34:02.790383 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:02.790581 kubelet[2777]: E1029 05:34:02.790557 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.790799 kubelet[2777]: W1029 05:34:02.790583 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.790799 kubelet[2777]: E1029 05:34:02.790592 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.790799 kubelet[2777]: E1029 05:34:02.790800 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.790872 kubelet[2777]: W1029 05:34:02.790808 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.790872 kubelet[2777]: E1029 05:34:02.790817 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.791991 containerd[1601]: time="2025-10-29T05:34:02.790931309Z" level=info msg="connecting to shim d71d8231e95812b0e1388eaf6842c100eed8ff5ce19d2a0cff228d120a5bb25c" address="unix:///run/containerd/s/414d60c938ab80a0cd8236d9b25431ae8d8607db5b377ef1301202bd432f1795" namespace=k8s.io protocol=ttrpc version=3 Oct 29 05:34:02.826263 systemd[1]: Started cri-containerd-d71d8231e95812b0e1388eaf6842c100eed8ff5ce19d2a0cff228d120a5bb25c.scope - libcontainer container d71d8231e95812b0e1388eaf6842c100eed8ff5ce19d2a0cff228d120a5bb25c. 
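Annotation: the long run of driver-call.go and plugins.go pairs here is the kubelet probing the FlexVolume directory nodeagent~uds. The driver executable /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not present, so the "init" call produces no output, and unmarshalling an empty string as JSON then fails with "unexpected end of JSON input". A rough Python analogue of that failure mode (illustrative only, not kubelet's probe code):

    import json
    import subprocess

    driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    try:
        out = subprocess.run([driver, "init"], capture_output=True, text=True).stdout
    except (FileNotFoundError, PermissionError):
        out = ""   # mirrors: executable file not found in $PATH, output: ""

    try:
        json.loads(out)
    except json.JSONDecodeError as err:
        # Go logs "unexpected end of JSON input"; Python raises the equivalent decode error.
        print("failed to unmarshal driver output:", err)
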
Oct 29 05:34:02.832833 kubelet[2777]: E1029 05:34:02.832793 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:02.833559 containerd[1601]: time="2025-10-29T05:34:02.833515936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jf7l5,Uid:721e2e49-a817-4b2e-9766-cd09052a36d8,Namespace:calico-system,Attempt:0,}" Oct 29 05:34:02.858241 containerd[1601]: time="2025-10-29T05:34:02.858197084Z" level=info msg="connecting to shim 5cf56bdce3482ff641eedae91d2f026042b8deaef151d76305f64df550a18fa2" address="unix:///run/containerd/s/04b7d5093e90d73640bf15cf8d084380ab36916c485f3236b72a9088292577c6" namespace=k8s.io protocol=ttrpc version=3 Oct 29 05:34:02.884581 containerd[1601]: time="2025-10-29T05:34:02.884528060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f668bffc5-8wr2k,Uid:0a088882-daa6-40a1-87c9-bcf3275a6f67,Namespace:calico-system,Attempt:0,} returns sandbox id \"d71d8231e95812b0e1388eaf6842c100eed8ff5ce19d2a0cff228d120a5bb25c\"" Oct 29 05:34:02.889088 kubelet[2777]: E1029 05:34:02.889006 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:02.889713 kubelet[2777]: E1029 05:34:02.889652 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.889713 kubelet[2777]: W1029 05:34:02.889670 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.889794 kubelet[2777]: E1029 05:34:02.889715 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.890193 kubelet[2777]: E1029 05:34:02.890168 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.890193 kubelet[2777]: W1029 05:34:02.890184 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.890193 kubelet[2777]: E1029 05:34:02.890194 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.890589 kubelet[2777]: E1029 05:34:02.890502 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.890589 kubelet[2777]: W1029 05:34:02.890512 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.890589 kubelet[2777]: E1029 05:34:02.890522 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:02.891178 containerd[1601]: time="2025-10-29T05:34:02.891150288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 29 05:34:02.891445 kubelet[2777]: E1029 05:34:02.891425 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.891504 kubelet[2777]: W1029 05:34:02.891466 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.891504 kubelet[2777]: E1029 05:34:02.891479 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.891805 kubelet[2777]: E1029 05:34:02.891787 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.891805 kubelet[2777]: W1029 05:34:02.891799 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.891805 kubelet[2777]: E1029 05:34:02.891810 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.892112 kubelet[2777]: E1029 05:34:02.892093 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.892158 kubelet[2777]: W1029 05:34:02.892122 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.892158 kubelet[2777]: E1029 05:34:02.892133 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.892461 kubelet[2777]: E1029 05:34:02.892424 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.892461 kubelet[2777]: W1029 05:34:02.892437 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.892461 kubelet[2777]: E1029 05:34:02.892447 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.892749 kubelet[2777]: E1029 05:34:02.892731 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.892749 kubelet[2777]: W1029 05:34:02.892744 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.892814 kubelet[2777]: E1029 05:34:02.892754 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:02.893157 kubelet[2777]: E1029 05:34:02.893129 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.893157 kubelet[2777]: W1029 05:34:02.893143 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.893157 kubelet[2777]: E1029 05:34:02.893154 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.893562 kubelet[2777]: E1029 05:34:02.893529 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.893562 kubelet[2777]: W1029 05:34:02.893543 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.893562 kubelet[2777]: E1029 05:34:02.893553 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.894538 kubelet[2777]: E1029 05:34:02.894520 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.894538 kubelet[2777]: W1029 05:34:02.894533 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.894620 kubelet[2777]: E1029 05:34:02.894544 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.896462 kubelet[2777]: E1029 05:34:02.896440 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.896524 kubelet[2777]: W1029 05:34:02.896465 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.896524 kubelet[2777]: E1029 05:34:02.896483 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.899876 kubelet[2777]: E1029 05:34:02.899843 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.899876 kubelet[2777]: W1029 05:34:02.899858 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.899876 kubelet[2777]: E1029 05:34:02.899870 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:02.900261 kubelet[2777]: E1029 05:34:02.900235 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.900261 kubelet[2777]: W1029 05:34:02.900249 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.900261 kubelet[2777]: E1029 05:34:02.900260 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.900372 systemd[1]: Started cri-containerd-5cf56bdce3482ff641eedae91d2f026042b8deaef151d76305f64df550a18fa2.scope - libcontainer container 5cf56bdce3482ff641eedae91d2f026042b8deaef151d76305f64df550a18fa2. Oct 29 05:34:02.900476 kubelet[2777]: E1029 05:34:02.900459 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.900476 kubelet[2777]: W1029 05:34:02.900471 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.900529 kubelet[2777]: E1029 05:34:02.900482 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.900683 kubelet[2777]: E1029 05:34:02.900667 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.900683 kubelet[2777]: W1029 05:34:02.900679 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.900732 kubelet[2777]: E1029 05:34:02.900688 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.900874 kubelet[2777]: E1029 05:34:02.900858 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.900874 kubelet[2777]: W1029 05:34:02.900871 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.900921 kubelet[2777]: E1029 05:34:02.900883 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:02.904204 kubelet[2777]: E1029 05:34:02.904162 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.904204 kubelet[2777]: W1029 05:34:02.904182 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.904204 kubelet[2777]: E1029 05:34:02.904199 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.904469 kubelet[2777]: E1029 05:34:02.904416 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.904469 kubelet[2777]: W1029 05:34:02.904434 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.904469 kubelet[2777]: E1029 05:34:02.904442 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.906209 kubelet[2777]: E1029 05:34:02.905123 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.906209 kubelet[2777]: W1029 05:34:02.905137 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.906209 kubelet[2777]: E1029 05:34:02.905158 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.906209 kubelet[2777]: E1029 05:34:02.905399 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.906209 kubelet[2777]: W1029 05:34:02.905407 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.906209 kubelet[2777]: E1029 05:34:02.905421 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.906209 kubelet[2777]: E1029 05:34:02.905599 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.906209 kubelet[2777]: W1029 05:34:02.905611 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.906209 kubelet[2777]: E1029 05:34:02.905620 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:02.906209 kubelet[2777]: E1029 05:34:02.905910 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.906693 kubelet[2777]: W1029 05:34:02.905923 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.906693 kubelet[2777]: E1029 05:34:02.906370 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.907436 kubelet[2777]: E1029 05:34:02.907299 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.907436 kubelet[2777]: W1029 05:34:02.907325 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.907436 kubelet[2777]: E1029 05:34:02.907351 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.908143 kubelet[2777]: E1029 05:34:02.908105 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.909410 kubelet[2777]: W1029 05:34:02.909223 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.909410 kubelet[2777]: E1029 05:34:02.909355 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.914692 kubelet[2777]: E1029 05:34:02.914645 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:02.914692 kubelet[2777]: W1029 05:34:02.914659 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:02.914692 kubelet[2777]: E1029 05:34:02.914669 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:02.933054 containerd[1601]: time="2025-10-29T05:34:02.932983359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jf7l5,Uid:721e2e49-a817-4b2e-9766-cd09052a36d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"5cf56bdce3482ff641eedae91d2f026042b8deaef151d76305f64df550a18fa2\"" Oct 29 05:34:02.933965 kubelet[2777]: E1029 05:34:02.933940 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:04.415322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4215079875.mount: Deactivated successfully. 
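The recurring dns.go:154 warning records a separate condition: the node's resolver configuration lists more nameservers than kubelet will propagate into pod sandboxes (the limit is three), so the extra entries are dropped and only "1.1.1.1 1.0.0.1 8.8.8.8" is applied. A minimal sketch of that trimming, assuming the host resolver file is /etc/resolv.conf (an assumption for illustration; the node may point kubelet at systemd-resolved's runtime file instead):

```go
// Sketch: show which nameservers a kubelet-style limit of three would keep.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // kubelet's per-pod nameserver limit

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	kept := servers
	if len(kept) > maxNameservers {
		kept = kept[:maxNameservers]
	}
	fmt.Printf("found %d nameservers, applied line would be: %s\n",
		len(servers), strings.Join(kept, " "))
}
```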
Oct 29 05:34:04.676933 kubelet[2777]: E1029 05:34:04.676411 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qkktn" podUID="11b4791e-97d9-4b28-b964-d007606a7e18" Oct 29 05:34:04.861010 containerd[1601]: time="2025-10-29T05:34:04.860953202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:34:04.862028 containerd[1601]: time="2025-10-29T05:34:04.861966779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 29 05:34:04.863284 containerd[1601]: time="2025-10-29T05:34:04.863228643Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:34:04.865397 containerd[1601]: time="2025-10-29T05:34:04.865338194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:34:04.865845 containerd[1601]: time="2025-10-29T05:34:04.865819730Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.974640658s" Oct 29 05:34:04.865883 containerd[1601]: time="2025-10-29T05:34:04.865847402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 29 05:34:04.867026 containerd[1601]: time="2025-10-29T05:34:04.867006473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 29 05:34:04.881060 containerd[1601]: time="2025-10-29T05:34:04.881002812Z" level=info msg="CreateContainer within sandbox \"d71d8231e95812b0e1388eaf6842c100eed8ff5ce19d2a0cff228d120a5bb25c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 29 05:34:04.888834 containerd[1601]: time="2025-10-29T05:34:04.888808732Z" level=info msg="Container bff12f57adac6ae062fbd907f329c2747c7de54488a22c983ed3fe4440e71c01: CDI devices from CRI Config.CDIDevices: []" Oct 29 05:34:04.944749 containerd[1601]: time="2025-10-29T05:34:04.944585266Z" level=info msg="CreateContainer within sandbox \"d71d8231e95812b0e1388eaf6842c100eed8ff5ce19d2a0cff228d120a5bb25c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bff12f57adac6ae062fbd907f329c2747c7de54488a22c983ed3fe4440e71c01\"" Oct 29 05:34:04.945578 containerd[1601]: time="2025-10-29T05:34:04.945543780Z" level=info msg="StartContainer for \"bff12f57adac6ae062fbd907f329c2747c7de54488a22c983ed3fe4440e71c01\"" Oct 29 05:34:04.947356 containerd[1601]: time="2025-10-29T05:34:04.947311346Z" level=info msg="connecting to shim bff12f57adac6ae062fbd907f329c2747c7de54488a22c983ed3fe4440e71c01" address="unix:///run/containerd/s/414d60c938ab80a0cd8236d9b25431ae8d8607db5b377ef1301202bd432f1795" protocol=ttrpc version=3 Oct 29 05:34:04.975219 systemd[1]: Started 
cri-containerd-bff12f57adac6ae062fbd907f329c2747c7de54488a22c983ed3fe4440e71c01.scope - libcontainer container bff12f57adac6ae062fbd907f329c2747c7de54488a22c983ed3fe4440e71c01. Oct 29 05:34:05.029556 containerd[1601]: time="2025-10-29T05:34:05.029505914Z" level=info msg="StartContainer for \"bff12f57adac6ae062fbd907f329c2747c7de54488a22c983ed3fe4440e71c01\" returns successfully" Oct 29 05:34:05.742904 kubelet[2777]: E1029 05:34:05.742861 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:05.789219 kubelet[2777]: E1029 05:34:05.789156 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.789219 kubelet[2777]: W1029 05:34:05.789180 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.789219 kubelet[2777]: E1029 05:34:05.789202 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.789499 kubelet[2777]: E1029 05:34:05.789389 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.789499 kubelet[2777]: W1029 05:34:05.789397 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.789499 kubelet[2777]: E1029 05:34:05.789406 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.789634 kubelet[2777]: E1029 05:34:05.789608 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.789634 kubelet[2777]: W1029 05:34:05.789616 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.789634 kubelet[2777]: E1029 05:34:05.789624 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.789827 kubelet[2777]: E1029 05:34:05.789804 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.789827 kubelet[2777]: W1029 05:34:05.789815 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.789827 kubelet[2777]: E1029 05:34:05.789823 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:05.790037 kubelet[2777]: E1029 05:34:05.790014 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.790037 kubelet[2777]: W1029 05:34:05.790025 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.790037 kubelet[2777]: E1029 05:34:05.790035 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.790242 kubelet[2777]: E1029 05:34:05.790221 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.790242 kubelet[2777]: W1029 05:34:05.790232 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.790242 kubelet[2777]: E1029 05:34:05.790242 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.790423 kubelet[2777]: E1029 05:34:05.790402 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.790423 kubelet[2777]: W1029 05:34:05.790412 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.790423 kubelet[2777]: E1029 05:34:05.790421 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.790593 kubelet[2777]: E1029 05:34:05.790572 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.790593 kubelet[2777]: W1029 05:34:05.790583 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.790593 kubelet[2777]: E1029 05:34:05.790590 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.790775 kubelet[2777]: E1029 05:34:05.790755 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.790775 kubelet[2777]: W1029 05:34:05.790765 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.790775 kubelet[2777]: E1029 05:34:05.790773 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:05.790943 kubelet[2777]: E1029 05:34:05.790923 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.790943 kubelet[2777]: W1029 05:34:05.790934 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.790943 kubelet[2777]: E1029 05:34:05.790942 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.791133 kubelet[2777]: E1029 05:34:05.791126 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.791133 kubelet[2777]: W1029 05:34:05.791135 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.791213 kubelet[2777]: E1029 05:34:05.791147 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.791327 kubelet[2777]: E1029 05:34:05.791301 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.791327 kubelet[2777]: W1029 05:34:05.791312 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.791327 kubelet[2777]: E1029 05:34:05.791320 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.791516 kubelet[2777]: E1029 05:34:05.791491 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.791516 kubelet[2777]: W1029 05:34:05.791502 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.791603 kubelet[2777]: E1029 05:34:05.791509 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.791715 kubelet[2777]: E1029 05:34:05.791691 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.791715 kubelet[2777]: W1029 05:34:05.791701 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.791715 kubelet[2777]: E1029 05:34:05.791709 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:05.791888 kubelet[2777]: E1029 05:34:05.791864 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.791888 kubelet[2777]: W1029 05:34:05.791874 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.791888 kubelet[2777]: E1029 05:34:05.791881 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.821194 kubelet[2777]: E1029 05:34:05.821144 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.821194 kubelet[2777]: W1029 05:34:05.821159 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.821194 kubelet[2777]: E1029 05:34:05.821169 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.821443 kubelet[2777]: E1029 05:34:05.821435 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.821443 kubelet[2777]: W1029 05:34:05.821444 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.821524 kubelet[2777]: E1029 05:34:05.821453 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.821673 kubelet[2777]: E1029 05:34:05.821640 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.821673 kubelet[2777]: W1029 05:34:05.821652 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.821673 kubelet[2777]: E1029 05:34:05.821660 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.822099 kubelet[2777]: E1029 05:34:05.822037 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.822099 kubelet[2777]: W1029 05:34:05.822066 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.822286 kubelet[2777]: E1029 05:34:05.822127 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:05.822360 kubelet[2777]: E1029 05:34:05.822344 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.822405 kubelet[2777]: W1029 05:34:05.822378 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.822405 kubelet[2777]: E1029 05:34:05.822390 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.822657 kubelet[2777]: E1029 05:34:05.822638 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.822657 kubelet[2777]: W1029 05:34:05.822649 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.822657 kubelet[2777]: E1029 05:34:05.822659 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.822891 kubelet[2777]: E1029 05:34:05.822875 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.822891 kubelet[2777]: W1029 05:34:05.822888 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.822957 kubelet[2777]: E1029 05:34:05.822897 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.823136 kubelet[2777]: E1029 05:34:05.823119 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.823136 kubelet[2777]: W1029 05:34:05.823132 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.823216 kubelet[2777]: E1029 05:34:05.823141 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.823336 kubelet[2777]: E1029 05:34:05.823311 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.823336 kubelet[2777]: W1029 05:34:05.823321 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.823336 kubelet[2777]: E1029 05:34:05.823329 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:05.823580 kubelet[2777]: E1029 05:34:05.823553 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.823580 kubelet[2777]: W1029 05:34:05.823567 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.823639 kubelet[2777]: E1029 05:34:05.823580 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.823815 kubelet[2777]: E1029 05:34:05.823800 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.823815 kubelet[2777]: W1029 05:34:05.823812 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.824031 kubelet[2777]: E1029 05:34:05.823823 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.824223 kubelet[2777]: E1029 05:34:05.824204 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.824223 kubelet[2777]: W1029 05:34:05.824219 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.824303 kubelet[2777]: E1029 05:34:05.824231 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.824517 kubelet[2777]: E1029 05:34:05.824498 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.824517 kubelet[2777]: W1029 05:34:05.824512 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.824592 kubelet[2777]: E1029 05:34:05.824523 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.824732 kubelet[2777]: E1029 05:34:05.824710 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.824732 kubelet[2777]: W1029 05:34:05.824723 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.824732 kubelet[2777]: E1029 05:34:05.824735 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:05.824928 kubelet[2777]: E1029 05:34:05.824899 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.824928 kubelet[2777]: W1029 05:34:05.824921 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.825013 kubelet[2777]: E1029 05:34:05.824931 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.825151 kubelet[2777]: E1029 05:34:05.825135 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.825151 kubelet[2777]: W1029 05:34:05.825146 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.825252 kubelet[2777]: E1029 05:34:05.825155 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.825501 kubelet[2777]: E1029 05:34:05.825464 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.825501 kubelet[2777]: W1029 05:34:05.825476 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.825501 kubelet[2777]: E1029 05:34:05.825487 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 05:34:05.825733 kubelet[2777]: E1029 05:34:05.825717 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 05:34:05.825733 kubelet[2777]: W1029 05:34:05.825728 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 05:34:05.825785 kubelet[2777]: E1029 05:34:05.825736 2777 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 05:34:06.101706 containerd[1601]: time="2025-10-29T05:34:06.101628820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:34:06.102380 containerd[1601]: time="2025-10-29T05:34:06.102337412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 29 05:34:06.103718 containerd[1601]: time="2025-10-29T05:34:06.103679787Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:34:06.105911 containerd[1601]: time="2025-10-29T05:34:06.105865088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:34:06.106663 containerd[1601]: time="2025-10-29T05:34:06.106618375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.239582196s" Oct 29 05:34:06.106710 containerd[1601]: time="2025-10-29T05:34:06.106664923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 29 05:34:06.110999 containerd[1601]: time="2025-10-29T05:34:06.110938331Z" level=info msg="CreateContainer within sandbox \"5cf56bdce3482ff641eedae91d2f026042b8deaef151d76305f64df550a18fa2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 29 05:34:06.120895 containerd[1601]: time="2025-10-29T05:34:06.120828265Z" level=info msg="Container 3621f9777bca23a34e856c8e1c99aa15887c8b3a363199e13763b3e0359af135: CDI devices from CRI Config.CDIDevices: []" Oct 29 05:34:06.131978 containerd[1601]: time="2025-10-29T05:34:06.131901174Z" level=info msg="CreateContainer within sandbox \"5cf56bdce3482ff641eedae91d2f026042b8deaef151d76305f64df550a18fa2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3621f9777bca23a34e856c8e1c99aa15887c8b3a363199e13763b3e0359af135\"" Oct 29 05:34:06.132615 containerd[1601]: time="2025-10-29T05:34:06.132573408Z" level=info msg="StartContainer for \"3621f9777bca23a34e856c8e1c99aa15887c8b3a363199e13763b3e0359af135\"" Oct 29 05:34:06.134577 containerd[1601]: time="2025-10-29T05:34:06.134542873Z" level=info msg="connecting to shim 3621f9777bca23a34e856c8e1c99aa15887c8b3a363199e13763b3e0359af135" address="unix:///run/containerd/s/04b7d5093e90d73640bf15cf8d084380ab36916c485f3236b72a9088292577c6" protocol=ttrpc version=3 Oct 29 05:34:06.165372 systemd[1]: Started cri-containerd-3621f9777bca23a34e856c8e1c99aa15887c8b3a363199e13763b3e0359af135.scope - libcontainer container 3621f9777bca23a34e856c8e1c99aa15887c8b3a363199e13763b3e0359af135. 
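The flexvol-driver container started above from the pod2daemon-flexvol image is the Calico init container that is expected to copy the FlexVolume uds executable into the kubelet plugin directory referenced by the earlier driver-call errors. A hedged sketch that waits for that binary to appear and then issues the same init call kubelet makes (the path is taken from the log; everything else is illustrative):

```go
// Sketch: wait for the FlexVolume driver binary named in the kubelet errors,
// then invoke it with "init" the way kubelet does.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

func main() {
	for i := 0; i < 30; i++ {
		if _, err := os.Stat(driver); err == nil {
			out, err := exec.Command(driver, "init").CombinedOutput()
			fmt.Printf("init output: %s (err=%v)\n", out, err)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "driver never appeared; the flexvol-driver container may not have run")
	os.Exit(1)
}
```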
Oct 29 05:34:06.218366 containerd[1601]: time="2025-10-29T05:34:06.218327017Z" level=info msg="StartContainer for \"3621f9777bca23a34e856c8e1c99aa15887c8b3a363199e13763b3e0359af135\" returns successfully" Oct 29 05:34:06.282461 systemd[1]: cri-containerd-3621f9777bca23a34e856c8e1c99aa15887c8b3a363199e13763b3e0359af135.scope: Deactivated successfully. Oct 29 05:34:06.284024 systemd[1]: cri-containerd-3621f9777bca23a34e856c8e1c99aa15887c8b3a363199e13763b3e0359af135.scope: Consumed 41ms CPU time, 6.2M memory peak, 4.6M written to disk. Oct 29 05:34:06.286177 containerd[1601]: time="2025-10-29T05:34:06.286118338Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3621f9777bca23a34e856c8e1c99aa15887c8b3a363199e13763b3e0359af135\" id:\"3621f9777bca23a34e856c8e1c99aa15887c8b3a363199e13763b3e0359af135\" pid:3477 exited_at:{seconds:1761716046 nanos:284477301}" Oct 29 05:34:06.286307 containerd[1601]: time="2025-10-29T05:34:06.286244225Z" level=info msg="received exit event container_id:\"3621f9777bca23a34e856c8e1c99aa15887c8b3a363199e13763b3e0359af135\" id:\"3621f9777bca23a34e856c8e1c99aa15887c8b3a363199e13763b3e0359af135\" pid:3477 exited_at:{seconds:1761716046 nanos:284477301}" Oct 29 05:34:06.317950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3621f9777bca23a34e856c8e1c99aa15887c8b3a363199e13763b3e0359af135-rootfs.mount: Deactivated successfully. Oct 29 05:34:06.673360 kubelet[2777]: E1029 05:34:06.673240 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qkktn" podUID="11b4791e-97d9-4b28-b964-d007606a7e18" Oct 29 05:34:06.745944 kubelet[2777]: I1029 05:34:06.745898 2777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 05:34:06.746444 kubelet[2777]: E1029 05:34:06.746221 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:06.746444 kubelet[2777]: E1029 05:34:06.746258 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:06.747550 containerd[1601]: time="2025-10-29T05:34:06.747509921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 29 05:34:06.762784 kubelet[2777]: I1029 05:34:06.762704 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f668bffc5-8wr2k" podStartSLOduration=2.786771402 podStartE2EDuration="4.762683073s" podCreationTimestamp="2025-10-29 05:34:02 +0000 UTC" firstStartedPulling="2025-10-29 05:34:02.890876522 +0000 UTC m=+19.314997381" lastFinishedPulling="2025-10-29 05:34:04.866788193 +0000 UTC m=+21.290909052" observedRunningTime="2025-10-29 05:34:05.763146718 +0000 UTC m=+22.187267577" watchObservedRunningTime="2025-10-29 05:34:06.762683073 +0000 UTC m=+23.186804052" Oct 29 05:34:08.673837 kubelet[2777]: E1029 05:34:08.673443 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qkktn" podUID="11b4791e-97d9-4b28-b964-d007606a7e18" Oct 29 
05:34:09.265432 containerd[1601]: time="2025-10-29T05:34:09.265380933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:34:09.266361 containerd[1601]: time="2025-10-29T05:34:09.266326341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 29 05:34:09.267573 containerd[1601]: time="2025-10-29T05:34:09.267525645Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:34:09.269621 containerd[1601]: time="2025-10-29T05:34:09.269572553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:34:09.270117 containerd[1601]: time="2025-10-29T05:34:09.270090177Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.52251895s" Oct 29 05:34:09.270157 containerd[1601]: time="2025-10-29T05:34:09.270123680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 29 05:34:09.273763 containerd[1601]: time="2025-10-29T05:34:09.273730872Z" level=info msg="CreateContainer within sandbox \"5cf56bdce3482ff641eedae91d2f026042b8deaef151d76305f64df550a18fa2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 29 05:34:09.282692 containerd[1601]: time="2025-10-29T05:34:09.282652906Z" level=info msg="Container d2d0d1e671d3e5958df5315de214f5d3954de8d2c6b0a2b4987f52fe13863406: CDI devices from CRI Config.CDIDevices: []" Oct 29 05:34:09.293028 containerd[1601]: time="2025-10-29T05:34:09.292980833Z" level=info msg="CreateContainer within sandbox \"5cf56bdce3482ff641eedae91d2f026042b8deaef151d76305f64df550a18fa2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d2d0d1e671d3e5958df5315de214f5d3954de8d2c6b0a2b4987f52fe13863406\"" Oct 29 05:34:09.293507 containerd[1601]: time="2025-10-29T05:34:09.293487656Z" level=info msg="StartContainer for \"d2d0d1e671d3e5958df5315de214f5d3954de8d2c6b0a2b4987f52fe13863406\"" Oct 29 05:34:09.294915 containerd[1601]: time="2025-10-29T05:34:09.294880254Z" level=info msg="connecting to shim d2d0d1e671d3e5958df5315de214f5d3954de8d2c6b0a2b4987f52fe13863406" address="unix:///run/containerd/s/04b7d5093e90d73640bf15cf8d084380ab36916c485f3236b72a9088292577c6" protocol=ttrpc version=3 Oct 29 05:34:09.315205 systemd[1]: Started cri-containerd-d2d0d1e671d3e5958df5315de214f5d3954de8d2c6b0a2b4987f52fe13863406.scope - libcontainer container d2d0d1e671d3e5958df5315de214f5d3954de8d2c6b0a2b4987f52fe13863406. 
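The pod_startup_latency_tracker entry a few lines up can be cross-checked from the timestamps it logs: the end-to-end figure equals watchObservedRunningTime minus podCreationTimestamp, and the SLO figure appears to be that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling); the numbers in this log reproduce exactly. A small check using the logged values:

```go
// Sketch: reproduce the two durations reported by pod_startup_latency_tracker
// for calico-typha-f668bffc5-8wr2k from the timestamps in the same entry.
// Assumes (as the arithmetic suggests) that the SLO duration excludes the
// image-pull window from the end-to-end duration.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matches time.Time's default String() form used in the log.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-10-29 05:34:02 +0000 UTC")
	firstPull := mustParse("2025-10-29 05:34:02.890876522 +0000 UTC")
	lastPull := mustParse("2025-10-29 05:34:04.866788193 +0000 UTC")
	watchObserved := mustParse("2025-10-29 05:34:06.762683073 +0000 UTC")

	e2e := watchObserved.Sub(created)      // logged as podStartE2EDuration=4.762683073s
	slo := e2e - lastPull.Sub(firstPull)   // logged as podStartSLOduration=2.786771402
	fmt.Printf("E2E=%v SLO=%v\n", e2e, slo)
}
```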
Oct 29 05:34:09.362429 containerd[1601]: time="2025-10-29T05:34:09.362356664Z" level=info msg="StartContainer for \"d2d0d1e671d3e5958df5315de214f5d3954de8d2c6b0a2b4987f52fe13863406\" returns successfully" Oct 29 05:34:09.755044 kubelet[2777]: E1029 05:34:09.754997 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:10.473469 systemd[1]: cri-containerd-d2d0d1e671d3e5958df5315de214f5d3954de8d2c6b0a2b4987f52fe13863406.scope: Deactivated successfully. Oct 29 05:34:10.474398 systemd[1]: cri-containerd-d2d0d1e671d3e5958df5315de214f5d3954de8d2c6b0a2b4987f52fe13863406.scope: Consumed 698ms CPU time, 181.7M memory peak, 3.9M read from disk, 171.3M written to disk. Oct 29 05:34:10.475744 containerd[1601]: time="2025-10-29T05:34:10.475687183Z" level=info msg="received exit event container_id:\"d2d0d1e671d3e5958df5315de214f5d3954de8d2c6b0a2b4987f52fe13863406\" id:\"d2d0d1e671d3e5958df5315de214f5d3954de8d2c6b0a2b4987f52fe13863406\" pid:3537 exited_at:{seconds:1761716050 nanos:474216007}" Oct 29 05:34:10.476113 containerd[1601]: time="2025-10-29T05:34:10.475789896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2d0d1e671d3e5958df5315de214f5d3954de8d2c6b0a2b4987f52fe13863406\" id:\"d2d0d1e671d3e5958df5315de214f5d3954de8d2c6b0a2b4987f52fe13863406\" pid:3537 exited_at:{seconds:1761716050 nanos:474216007}" Oct 29 05:34:10.481569 containerd[1601]: time="2025-10-29T05:34:10.481518465Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 05:34:10.505424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2d0d1e671d3e5958df5315de214f5d3954de8d2c6b0a2b4987f52fe13863406-rootfs.mount: Deactivated successfully. Oct 29 05:34:10.574102 kubelet[2777]: I1029 05:34:10.574034 2777 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 29 05:34:10.756706 kubelet[2777]: E1029 05:34:10.756554 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:11.074512 systemd[1]: Created slice kubepods-besteffort-pod5700d2a9_15b1_43c2_8972_37e1ebd6aa09.slice - libcontainer container kubepods-besteffort-pod5700d2a9_15b1_43c2_8972_37e1ebd6aa09.slice. Oct 29 05:34:11.086100 systemd[1]: Created slice kubepods-besteffort-pod77f8cc32_a2bc_482d_9427_8912a3aa5e90.slice - libcontainer container kubepods-besteffort-pod77f8cc32_a2bc_482d_9427_8912a3aa5e90.slice. Oct 29 05:34:11.095421 systemd[1]: Created slice kubepods-besteffort-podca0b83d1_3c73_4368_b48e_26b292faf856.slice - libcontainer container kubepods-besteffort-podca0b83d1_3c73_4368_b48e_26b292faf856.slice. Oct 29 05:34:11.102792 systemd[1]: Created slice kubepods-burstable-poda9af4fbc_4377_4e63_8c55_f50471f996bb.slice - libcontainer container kubepods-burstable-poda9af4fbc_4377_4e63_8c55_f50471f996bb.slice. Oct 29 05:34:11.111123 systemd[1]: Created slice kubepods-burstable-pod3b87e74c_280a_4a81_9ad5_b4bf48d47f03.slice - libcontainer container kubepods-burstable-pod3b87e74c_280a_4a81_9ad5_b4bf48d47f03.slice. 
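The "failed to reload cni configuration" error fires because the install-cni container has begun writing into /etc/cni/net.d (the fs change event is for calico-kubeconfig) while no loadable network config (*.conf, *.conflist, *.json) exists there yet; the "Fast updating node status as it just became ready" line presumably follows once Calico finishes writing its conflist. A quick sketch, assuming the default configuration directory named in the error message, of what a config-load pass would find:

```go
// Sketch: list candidate CNI network configs in the directory named by the
// containerd error above and report when none are loadable yet.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var found int
	for _, e := range entries {
		name := filepath.Join(dir, e.Name())
		switch strings.ToLower(filepath.Ext(name)) {
		case ".conf", ".conflist", ".json":
			found++
			fmt.Println("loadable config:", name)
		default:
			fmt.Println("ignored (not a network config):", name)
		}
	}
	if found == 0 {
		fmt.Println("no network config found: the CNI plugin is not initialized yet")
	}
}
```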
Oct 29 05:34:11.119498 systemd[1]: Created slice kubepods-besteffort-pod11b4791e_97d9_4b28_b964_d007606a7e18.slice - libcontainer container kubepods-besteffort-pod11b4791e_97d9_4b28_b964_d007606a7e18.slice. Oct 29 05:34:11.125219 containerd[1601]: time="2025-10-29T05:34:11.125159120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qkktn,Uid:11b4791e-97d9-4b28-b964-d007606a7e18,Namespace:calico-system,Attempt:0,}" Oct 29 05:34:11.127649 systemd[1]: Created slice kubepods-besteffort-podde492ebe_e388_430f_a865_ba2ce27c1431.slice - libcontainer container kubepods-besteffort-podde492ebe_e388_430f_a865_ba2ce27c1431.slice. Oct 29 05:34:11.142158 systemd[1]: Created slice kubepods-besteffort-poddda5bf98_d31b_4c3d_8024_54d20d0506a7.slice - libcontainer container kubepods-besteffort-poddda5bf98_d31b_4c3d_8024_54d20d0506a7.slice. Oct 29 05:34:11.157780 kubelet[2777]: I1029 05:34:11.157737 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5700d2a9-15b1-43c2-8972-37e1ebd6aa09-calico-apiserver-certs\") pod \"calico-apiserver-799b5c4b47-vw8qq\" (UID: \"5700d2a9-15b1-43c2-8972-37e1ebd6aa09\") " pod="calico-apiserver/calico-apiserver-799b5c4b47-vw8qq" Oct 29 05:34:11.157780 kubelet[2777]: I1029 05:34:11.157776 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de492ebe-e388-430f-a865-ba2ce27c1431-config\") pod \"goldmane-7c778bb748-pkk9m\" (UID: \"de492ebe-e388-430f-a865-ba2ce27c1431\") " pod="calico-system/goldmane-7c778bb748-pkk9m" Oct 29 05:34:11.157780 kubelet[2777]: I1029 05:34:11.157792 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd4mn\" (UniqueName: \"kubernetes.io/projected/de492ebe-e388-430f-a865-ba2ce27c1431-kube-api-access-hd4mn\") pod \"goldmane-7c778bb748-pkk9m\" (UID: \"de492ebe-e388-430f-a865-ba2ce27c1431\") " pod="calico-system/goldmane-7c778bb748-pkk9m" Oct 29 05:34:11.158053 kubelet[2777]: I1029 05:34:11.157808 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czf6j\" (UniqueName: \"kubernetes.io/projected/3b87e74c-280a-4a81-9ad5-b4bf48d47f03-kube-api-access-czf6j\") pod \"coredns-66bc5c9577-xcs92\" (UID: \"3b87e74c-280a-4a81-9ad5-b4bf48d47f03\") " pod="kube-system/coredns-66bc5c9577-xcs92" Oct 29 05:34:11.158053 kubelet[2777]: I1029 05:34:11.157823 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dda5bf98-d31b-4c3d-8024-54d20d0506a7-calico-apiserver-certs\") pod \"calico-apiserver-799b5c4b47-5d9gp\" (UID: \"dda5bf98-d31b-4c3d-8024-54d20d0506a7\") " pod="calico-apiserver/calico-apiserver-799b5c4b47-5d9gp" Oct 29 05:34:11.158053 kubelet[2777]: I1029 05:34:11.157838 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hpgq\" (UniqueName: \"kubernetes.io/projected/dda5bf98-d31b-4c3d-8024-54d20d0506a7-kube-api-access-5hpgq\") pod \"calico-apiserver-799b5c4b47-5d9gp\" (UID: \"dda5bf98-d31b-4c3d-8024-54d20d0506a7\") " pod="calico-apiserver/calico-apiserver-799b5c4b47-5d9gp" Oct 29 05:34:11.158053 kubelet[2777]: I1029 05:34:11.157861 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9af4fbc-4377-4e63-8c55-f50471f996bb-config-volume\") pod \"coredns-66bc5c9577-srmc7\" (UID: \"a9af4fbc-4377-4e63-8c55-f50471f996bb\") " pod="kube-system/coredns-66bc5c9577-srmc7" Oct 29 05:34:11.158053 kubelet[2777]: I1029 05:34:11.157882 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b87e74c-280a-4a81-9ad5-b4bf48d47f03-config-volume\") pod \"coredns-66bc5c9577-xcs92\" (UID: \"3b87e74c-280a-4a81-9ad5-b4bf48d47f03\") " pod="kube-system/coredns-66bc5c9577-xcs92" Oct 29 05:34:11.158551 kubelet[2777]: I1029 05:34:11.157896 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/77f8cc32-a2bc-482d-9427-8912a3aa5e90-whisker-backend-key-pair\") pod \"whisker-64fdfffbb9-5q25p\" (UID: \"77f8cc32-a2bc-482d-9427-8912a3aa5e90\") " pod="calico-system/whisker-64fdfffbb9-5q25p" Oct 29 05:34:11.158551 kubelet[2777]: I1029 05:34:11.157909 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77f8cc32-a2bc-482d-9427-8912a3aa5e90-whisker-ca-bundle\") pod \"whisker-64fdfffbb9-5q25p\" (UID: \"77f8cc32-a2bc-482d-9427-8912a3aa5e90\") " pod="calico-system/whisker-64fdfffbb9-5q25p" Oct 29 05:34:11.158551 kubelet[2777]: I1029 05:34:11.157922 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhglb\" (UniqueName: \"kubernetes.io/projected/77f8cc32-a2bc-482d-9427-8912a3aa5e90-kube-api-access-dhglb\") pod \"whisker-64fdfffbb9-5q25p\" (UID: \"77f8cc32-a2bc-482d-9427-8912a3aa5e90\") " pod="calico-system/whisker-64fdfffbb9-5q25p" Oct 29 05:34:11.158551 kubelet[2777]: I1029 05:34:11.157943 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/de492ebe-e388-430f-a865-ba2ce27c1431-goldmane-key-pair\") pod \"goldmane-7c778bb748-pkk9m\" (UID: \"de492ebe-e388-430f-a865-ba2ce27c1431\") " pod="calico-system/goldmane-7c778bb748-pkk9m" Oct 29 05:34:11.158551 kubelet[2777]: I1029 05:34:11.157986 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca0b83d1-3c73-4368-b48e-26b292faf856-tigera-ca-bundle\") pod \"calico-kube-controllers-68fbd9f956-5l7nj\" (UID: \"ca0b83d1-3c73-4368-b48e-26b292faf856\") " pod="calico-system/calico-kube-controllers-68fbd9f956-5l7nj" Oct 29 05:34:11.158712 kubelet[2777]: I1029 05:34:11.158005 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de492ebe-e388-430f-a865-ba2ce27c1431-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-pkk9m\" (UID: \"de492ebe-e388-430f-a865-ba2ce27c1431\") " pod="calico-system/goldmane-7c778bb748-pkk9m" Oct 29 05:34:11.158712 kubelet[2777]: I1029 05:34:11.158028 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s9lp\" (UniqueName: \"kubernetes.io/projected/a9af4fbc-4377-4e63-8c55-f50471f996bb-kube-api-access-2s9lp\") pod \"coredns-66bc5c9577-srmc7\" (UID: \"a9af4fbc-4377-4e63-8c55-f50471f996bb\") " pod="kube-system/coredns-66bc5c9577-srmc7" Oct 29 
05:34:11.158712 kubelet[2777]: I1029 05:34:11.158048 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs4fr\" (UniqueName: \"kubernetes.io/projected/5700d2a9-15b1-43c2-8972-37e1ebd6aa09-kube-api-access-zs4fr\") pod \"calico-apiserver-799b5c4b47-vw8qq\" (UID: \"5700d2a9-15b1-43c2-8972-37e1ebd6aa09\") " pod="calico-apiserver/calico-apiserver-799b5c4b47-vw8qq" Oct 29 05:34:11.158712 kubelet[2777]: I1029 05:34:11.158637 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzfn4\" (UniqueName: \"kubernetes.io/projected/ca0b83d1-3c73-4368-b48e-26b292faf856-kube-api-access-gzfn4\") pod \"calico-kube-controllers-68fbd9f956-5l7nj\" (UID: \"ca0b83d1-3c73-4368-b48e-26b292faf856\") " pod="calico-system/calico-kube-controllers-68fbd9f956-5l7nj" Oct 29 05:34:11.209966 containerd[1601]: time="2025-10-29T05:34:11.209894931Z" level=error msg="Failed to destroy network for sandbox \"25952eabb7b0edc0cb17f6bfc866baef52775c3afa2123325d07dadff0540cad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.212145 systemd[1]: run-netns-cni\x2dce079f76\x2d799d\x2dabe3\x2df6fc\x2da4376c987ad0.mount: Deactivated successfully. Oct 29 05:34:11.214801 containerd[1601]: time="2025-10-29T05:34:11.214734527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qkktn,Uid:11b4791e-97d9-4b28-b964-d007606a7e18,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"25952eabb7b0edc0cb17f6bfc866baef52775c3afa2123325d07dadff0540cad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.215159 kubelet[2777]: E1029 05:34:11.215063 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25952eabb7b0edc0cb17f6bfc866baef52775c3afa2123325d07dadff0540cad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.215236 kubelet[2777]: E1029 05:34:11.215196 2777 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25952eabb7b0edc0cb17f6bfc866baef52775c3afa2123325d07dadff0540cad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qkktn" Oct 29 05:34:11.215236 kubelet[2777]: E1029 05:34:11.215219 2777 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25952eabb7b0edc0cb17f6bfc866baef52775c3afa2123325d07dadff0540cad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qkktn" Oct 29 05:34:11.215319 kubelet[2777]: E1029 05:34:11.215287 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-qkktn_calico-system(11b4791e-97d9-4b28-b964-d007606a7e18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qkktn_calico-system(11b4791e-97d9-4b28-b964-d007606a7e18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25952eabb7b0edc0cb17f6bfc866baef52775c3afa2123325d07dadff0540cad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qkktn" podUID="11b4791e-97d9-4b28-b964-d007606a7e18" Oct 29 05:34:11.386742 containerd[1601]: time="2025-10-29T05:34:11.386582574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799b5c4b47-vw8qq,Uid:5700d2a9-15b1-43c2-8972-37e1ebd6aa09,Namespace:calico-apiserver,Attempt:0,}" Oct 29 05:34:11.392921 containerd[1601]: time="2025-10-29T05:34:11.392296223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64fdfffbb9-5q25p,Uid:77f8cc32-a2bc-482d-9427-8912a3aa5e90,Namespace:calico-system,Attempt:0,}" Oct 29 05:34:11.402380 containerd[1601]: time="2025-10-29T05:34:11.402295738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68fbd9f956-5l7nj,Uid:ca0b83d1-3c73-4368-b48e-26b292faf856,Namespace:calico-system,Attempt:0,}" Oct 29 05:34:11.412146 kubelet[2777]: E1029 05:34:11.411487 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:11.412566 containerd[1601]: time="2025-10-29T05:34:11.412522949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-srmc7,Uid:a9af4fbc-4377-4e63-8c55-f50471f996bb,Namespace:kube-system,Attempt:0,}" Oct 29 05:34:11.416102 kubelet[2777]: E1029 05:34:11.415983 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:11.416870 containerd[1601]: time="2025-10-29T05:34:11.416801732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xcs92,Uid:3b87e74c-280a-4a81-9ad5-b4bf48d47f03,Namespace:kube-system,Attempt:0,}" Oct 29 05:34:11.441118 containerd[1601]: time="2025-10-29T05:34:11.440761985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pkk9m,Uid:de492ebe-e388-430f-a865-ba2ce27c1431,Namespace:calico-system,Attempt:0,}" Oct 29 05:34:11.453440 containerd[1601]: time="2025-10-29T05:34:11.453384579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799b5c4b47-5d9gp,Uid:dda5bf98-d31b-4c3d-8024-54d20d0506a7,Namespace:calico-apiserver,Attempt:0,}" Oct 29 05:34:11.502290 containerd[1601]: time="2025-10-29T05:34:11.502225265Z" level=error msg="Failed to destroy network for sandbox \"88812715116486c7f251c579fbcfcce4867cf0b7a5991b6d611bf9a8c4fe3413\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.513241 containerd[1601]: time="2025-10-29T05:34:11.502323450Z" level=error msg="Failed to destroy network for sandbox \"ffba96eb14156671e17ae5f587908686f4a30d782ca0536872c8a76cca7b599e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.516231 containerd[1601]: time="2025-10-29T05:34:11.515785842Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799b5c4b47-vw8qq,Uid:5700d2a9-15b1-43c2-8972-37e1ebd6aa09,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"88812715116486c7f251c579fbcfcce4867cf0b7a5991b6d611bf9a8c4fe3413\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.516447 kubelet[2777]: E1029 05:34:11.516356 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88812715116486c7f251c579fbcfcce4867cf0b7a5991b6d611bf9a8c4fe3413\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.519581 systemd[1]: run-netns-cni\x2debe07155\x2dd6eb\x2d22c5\x2d121e\x2dbeb82bcb8edd.mount: Deactivated successfully. Oct 29 05:34:11.519687 systemd[1]: run-netns-cni\x2d9cb5e413\x2d436f\x2dd82d\x2d1554\x2db7e63f1c3687.mount: Deactivated successfully. Oct 29 05:34:11.523903 containerd[1601]: time="2025-10-29T05:34:11.523838638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64fdfffbb9-5q25p,Uid:77f8cc32-a2bc-482d-9427-8912a3aa5e90,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffba96eb14156671e17ae5f587908686f4a30d782ca0536872c8a76cca7b599e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.525175 kubelet[2777]: E1029 05:34:11.525131 2777 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88812715116486c7f251c579fbcfcce4867cf0b7a5991b6d611bf9a8c4fe3413\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-799b5c4b47-vw8qq" Oct 29 05:34:11.525230 kubelet[2777]: E1029 05:34:11.525182 2777 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88812715116486c7f251c579fbcfcce4867cf0b7a5991b6d611bf9a8c4fe3413\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-799b5c4b47-vw8qq" Oct 29 05:34:11.525313 kubelet[2777]: E1029 05:34:11.525272 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-799b5c4b47-vw8qq_calico-apiserver(5700d2a9-15b1-43c2-8972-37e1ebd6aa09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-799b5c4b47-vw8qq_calico-apiserver(5700d2a9-15b1-43c2-8972-37e1ebd6aa09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88812715116486c7f251c579fbcfcce4867cf0b7a5991b6d611bf9a8c4fe3413\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-799b5c4b47-vw8qq" podUID="5700d2a9-15b1-43c2-8972-37e1ebd6aa09" Oct 29 05:34:11.527491 kubelet[2777]: E1029 05:34:11.526919 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffba96eb14156671e17ae5f587908686f4a30d782ca0536872c8a76cca7b599e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.527491 kubelet[2777]: E1029 05:34:11.526991 2777 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffba96eb14156671e17ae5f587908686f4a30d782ca0536872c8a76cca7b599e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64fdfffbb9-5q25p" Oct 29 05:34:11.527491 kubelet[2777]: E1029 05:34:11.527011 2777 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffba96eb14156671e17ae5f587908686f4a30d782ca0536872c8a76cca7b599e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64fdfffbb9-5q25p" Oct 29 05:34:11.527599 kubelet[2777]: E1029 05:34:11.527131 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-64fdfffbb9-5q25p_calico-system(77f8cc32-a2bc-482d-9427-8912a3aa5e90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-64fdfffbb9-5q25p_calico-system(77f8cc32-a2bc-482d-9427-8912a3aa5e90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffba96eb14156671e17ae5f587908686f4a30d782ca0536872c8a76cca7b599e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64fdfffbb9-5q25p" podUID="77f8cc32-a2bc-482d-9427-8912a3aa5e90" Oct 29 05:34:11.572560 containerd[1601]: time="2025-10-29T05:34:11.572501270Z" level=error msg="Failed to destroy network for sandbox \"d074c24470db2265fbf18e50b8bbb4452429236f1b6b83396bd55279688c9a77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.576050 systemd[1]: run-netns-cni\x2d953a5458\x2d1759\x2dfc61\x2dd870\x2d980d702fda13.mount: Deactivated successfully. 
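Every failed sandbox in this stretch reports the same underlying condition: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico-node writes only once it is up, and the file does not exist yet. A minimal standalone sketch of that gate, written from scratch here rather than taken from the plugin itself:

package main

import (
    "errors"
    "fmt"
    "io/fs"
    "os"
    "strings"
)

// The Calico CNI plugin refuses to add or delete pod networks until calico-node
// has written the node's name to this file; this sketch reproduces that check.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
    data, err := os.ReadFile(nodenameFile)
    if errors.Is(err, fs.ErrNotExist) {
        fmt.Printf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile)
        return
    }
    if err != nil {
        fmt.Println("unexpected error:", err)
        return
    }
    fmt.Println("Calico node name:", strings.TrimSpace(string(data)))
}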
Oct 29 05:34:11.578579 containerd[1601]: time="2025-10-29T05:34:11.578523999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-srmc7,Uid:a9af4fbc-4377-4e63-8c55-f50471f996bb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d074c24470db2265fbf18e50b8bbb4452429236f1b6b83396bd55279688c9a77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.578804 kubelet[2777]: E1029 05:34:11.578763 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d074c24470db2265fbf18e50b8bbb4452429236f1b6b83396bd55279688c9a77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.578947 kubelet[2777]: E1029 05:34:11.578826 2777 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d074c24470db2265fbf18e50b8bbb4452429236f1b6b83396bd55279688c9a77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-srmc7" Oct 29 05:34:11.578947 kubelet[2777]: E1029 05:34:11.578849 2777 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d074c24470db2265fbf18e50b8bbb4452429236f1b6b83396bd55279688c9a77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-srmc7" Oct 29 05:34:11.578947 kubelet[2777]: E1029 05:34:11.578921 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-srmc7_kube-system(a9af4fbc-4377-4e63-8c55-f50471f996bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-srmc7_kube-system(a9af4fbc-4377-4e63-8c55-f50471f996bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d074c24470db2265fbf18e50b8bbb4452429236f1b6b83396bd55279688c9a77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-srmc7" podUID="a9af4fbc-4377-4e63-8c55-f50471f996bb" Oct 29 05:34:11.584468 containerd[1601]: time="2025-10-29T05:34:11.584331204Z" level=error msg="Failed to destroy network for sandbox \"ecae36e0684d897ab8a4d84bcd7b25884124efe5fa5e7ac84af86d9b608cb956\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.584468 containerd[1601]: time="2025-10-29T05:34:11.584336263Z" level=error msg="Failed to destroy network for sandbox \"be100dbc0b7f985a593eef322e61f24eea3e2ceb98243d8c53c81d87e76fa425\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Oct 29 05:34:11.586502 containerd[1601]: time="2025-10-29T05:34:11.586441039Z" level=error msg="Failed to destroy network for sandbox \"67493e38dddf9b289c080235e0a4c2c5bce1cb5f3b9d86248d40831296dd975e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.586966 systemd[1]: run-netns-cni\x2de9211dd8\x2d23d9\x2d21d8\x2d08d4\x2d1c018da8edc8.mount: Deactivated successfully. Oct 29 05:34:11.587208 systemd[1]: run-netns-cni\x2db10fe2fa\x2ddb79\x2d777b\x2de705\x2d424a00274586.mount: Deactivated successfully. Oct 29 05:34:11.587455 containerd[1601]: time="2025-10-29T05:34:11.587414609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pkk9m,Uid:de492ebe-e388-430f-a865-ba2ce27c1431,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be100dbc0b7f985a593eef322e61f24eea3e2ceb98243d8c53c81d87e76fa425\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.587912 kubelet[2777]: E1029 05:34:11.587849 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be100dbc0b7f985a593eef322e61f24eea3e2ceb98243d8c53c81d87e76fa425\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.587991 kubelet[2777]: E1029 05:34:11.587925 2777 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be100dbc0b7f985a593eef322e61f24eea3e2ceb98243d8c53c81d87e76fa425\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pkk9m" Oct 29 05:34:11.587991 kubelet[2777]: E1029 05:34:11.587947 2777 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be100dbc0b7f985a593eef322e61f24eea3e2ceb98243d8c53c81d87e76fa425\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pkk9m" Oct 29 05:34:11.588097 kubelet[2777]: E1029 05:34:11.588016 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-pkk9m_calico-system(de492ebe-e388-430f-a865-ba2ce27c1431)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-pkk9m_calico-system(de492ebe-e388-430f-a865-ba2ce27c1431)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be100dbc0b7f985a593eef322e61f24eea3e2ceb98243d8c53c81d87e76fa425\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-pkk9m" podUID="de492ebe-e388-430f-a865-ba2ce27c1431" Oct 29 05:34:11.588623 containerd[1601]: 
time="2025-10-29T05:34:11.588500389Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68fbd9f956-5l7nj,Uid:ca0b83d1-3c73-4368-b48e-26b292faf856,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecae36e0684d897ab8a4d84bcd7b25884124efe5fa5e7ac84af86d9b608cb956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.589322 kubelet[2777]: E1029 05:34:11.589216 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecae36e0684d897ab8a4d84bcd7b25884124efe5fa5e7ac84af86d9b608cb956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.589322 kubelet[2777]: E1029 05:34:11.589299 2777 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecae36e0684d897ab8a4d84bcd7b25884124efe5fa5e7ac84af86d9b608cb956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68fbd9f956-5l7nj" Oct 29 05:34:11.589322 kubelet[2777]: E1029 05:34:11.589316 2777 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecae36e0684d897ab8a4d84bcd7b25884124efe5fa5e7ac84af86d9b608cb956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68fbd9f956-5l7nj" Oct 29 05:34:11.590104 kubelet[2777]: E1029 05:34:11.589375 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68fbd9f956-5l7nj_calico-system(ca0b83d1-3c73-4368-b48e-26b292faf856)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68fbd9f956-5l7nj_calico-system(ca0b83d1-3c73-4368-b48e-26b292faf856)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecae36e0684d897ab8a4d84bcd7b25884124efe5fa5e7ac84af86d9b608cb956\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68fbd9f956-5l7nj" podUID="ca0b83d1-3c73-4368-b48e-26b292faf856" Oct 29 05:34:11.593098 containerd[1601]: time="2025-10-29T05:34:11.592147935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xcs92,Uid:3b87e74c-280a-4a81-9ad5-b4bf48d47f03,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67493e38dddf9b289c080235e0a4c2c5bce1cb5f3b9d86248d40831296dd975e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.593098 containerd[1601]: time="2025-10-29T05:34:11.592756980Z" level=error msg="Failed to 
destroy network for sandbox \"ceb36058575910060093476db9987a20554b468f66271ec66c6cc1fb1b268f8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.592650 systemd[1]: run-netns-cni\x2d37f188cb\x2dc37f\x2d37ba\x2dccd3\x2da4f73dbc7f11.mount: Deactivated successfully. Oct 29 05:34:11.593323 kubelet[2777]: E1029 05:34:11.592358 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67493e38dddf9b289c080235e0a4c2c5bce1cb5f3b9d86248d40831296dd975e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.593323 kubelet[2777]: E1029 05:34:11.592444 2777 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67493e38dddf9b289c080235e0a4c2c5bce1cb5f3b9d86248d40831296dd975e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xcs92" Oct 29 05:34:11.593323 kubelet[2777]: E1029 05:34:11.592470 2777 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67493e38dddf9b289c080235e0a4c2c5bce1cb5f3b9d86248d40831296dd975e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xcs92" Oct 29 05:34:11.593429 kubelet[2777]: E1029 05:34:11.592526 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xcs92_kube-system(3b87e74c-280a-4a81-9ad5-b4bf48d47f03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xcs92_kube-system(3b87e74c-280a-4a81-9ad5-b4bf48d47f03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67493e38dddf9b289c080235e0a4c2c5bce1cb5f3b9d86248d40831296dd975e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xcs92" podUID="3b87e74c-280a-4a81-9ad5-b4bf48d47f03" Oct 29 05:34:11.594142 containerd[1601]: time="2025-10-29T05:34:11.594093542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799b5c4b47-5d9gp,Uid:dda5bf98-d31b-4c3d-8024-54d20d0506a7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceb36058575910060093476db9987a20554b468f66271ec66c6cc1fb1b268f8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.594428 kubelet[2777]: E1029 05:34:11.594401 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceb36058575910060093476db9987a20554b468f66271ec66c6cc1fb1b268f8d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 05:34:11.594530 kubelet[2777]: E1029 05:34:11.594431 2777 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceb36058575910060093476db9987a20554b468f66271ec66c6cc1fb1b268f8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-799b5c4b47-5d9gp" Oct 29 05:34:11.594530 kubelet[2777]: E1029 05:34:11.594445 2777 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceb36058575910060093476db9987a20554b468f66271ec66c6cc1fb1b268f8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-799b5c4b47-5d9gp" Oct 29 05:34:11.594604 kubelet[2777]: E1029 05:34:11.594483 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-799b5c4b47-5d9gp_calico-apiserver(dda5bf98-d31b-4c3d-8024-54d20d0506a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-799b5c4b47-5d9gp_calico-apiserver(dda5bf98-d31b-4c3d-8024-54d20d0506a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ceb36058575910060093476db9987a20554b468f66271ec66c6cc1fb1b268f8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-799b5c4b47-5d9gp" podUID="dda5bf98-d31b-4c3d-8024-54d20d0506a7" Oct 29 05:34:11.772265 kubelet[2777]: E1029 05:34:11.772109 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:11.772762 containerd[1601]: time="2025-10-29T05:34:11.772641913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 29 05:34:12.505234 systemd[1]: run-netns-cni\x2d5c88569b\x2d41eb\x2d8912\x2d4234\x2df9322c221ee9.mount: Deactivated successfully. Oct 29 05:34:16.910763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3284685015.mount: Deactivated successfully. 
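The recurring kubelet warning "Nameserver limits exceeded" is emitted when the host resolv.conf lists more nameservers than kubelet will pass through to a pod (three, matching the traditional glibc limit), so the surplus entries are dropped and only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied here. A rough sketch of that trimming logic, with an invented helper name (applyNameserverLimit) and a hypothetical four-entry resolv.conf as input:

package main

import (
    "fmt"
    "strings"
)

// maxNameservers mirrors the classic glibc/kubelet limit of three resolvers.
const maxNameservers = 3

// applyNameserverLimit keeps at most maxNameservers entries and reports whether
// any were dropped, which is the condition under which kubelet logs its warning.
func applyNameserverLimit(resolvConf string) (kept []string, exceeded bool) {
    for _, line := range strings.Split(resolvConf, "\n") {
        fields := strings.Fields(line)
        if len(fields) >= 2 && fields[0] == "nameserver" {
            kept = append(kept, fields[1])
        }
    }
    if len(kept) > maxNameservers {
        return kept[:maxNameservers], true
    }
    return kept, false
}

func main() {
    // Hypothetical host resolv.conf with one nameserver too many.
    conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
    kept, exceeded := applyNameserverLimit(conf)
    if exceeded {
        fmt.Println("Nameserver limits were exceeded, applied:", strings.Join(kept, " "))
    }
}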
Oct 29 05:34:17.921413 containerd[1601]: time="2025-10-29T05:34:17.921343724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:34:17.950946 containerd[1601]: time="2025-10-29T05:34:17.922478956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 29 05:34:17.950946 containerd[1601]: time="2025-10-29T05:34:17.932089566Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:34:17.951209 containerd[1601]: time="2025-10-29T05:34:17.936523064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.163839222s" Oct 29 05:34:17.951209 containerd[1601]: time="2025-10-29T05:34:17.951157690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 29 05:34:17.951748 containerd[1601]: time="2025-10-29T05:34:17.951696462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 05:34:17.983156 containerd[1601]: time="2025-10-29T05:34:17.983090535Z" level=info msg="CreateContainer within sandbox \"5cf56bdce3482ff641eedae91d2f026042b8deaef151d76305f64df550a18fa2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 29 05:34:17.993496 containerd[1601]: time="2025-10-29T05:34:17.993420677Z" level=info msg="Container c9b4b12106685aff306622811e13227e95b402cf220dd13da97b40ae8449383c: CDI devices from CRI Config.CDIDevices: []" Oct 29 05:34:18.020649 containerd[1601]: time="2025-10-29T05:34:18.020583710Z" level=info msg="CreateContainer within sandbox \"5cf56bdce3482ff641eedae91d2f026042b8deaef151d76305f64df550a18fa2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c9b4b12106685aff306622811e13227e95b402cf220dd13da97b40ae8449383c\"" Oct 29 05:34:18.021277 containerd[1601]: time="2025-10-29T05:34:18.021240554Z" level=info msg="StartContainer for \"c9b4b12106685aff306622811e13227e95b402cf220dd13da97b40ae8449383c\"" Oct 29 05:34:18.023392 containerd[1601]: time="2025-10-29T05:34:18.023340438Z" level=info msg="connecting to shim c9b4b12106685aff306622811e13227e95b402cf220dd13da97b40ae8449383c" address="unix:///run/containerd/s/04b7d5093e90d73640bf15cf8d084380ab36916c485f3236b72a9088292577c6" protocol=ttrpc version=3 Oct 29 05:34:18.051326 systemd[1]: Started cri-containerd-c9b4b12106685aff306622811e13227e95b402cf220dd13da97b40ae8449383c.scope - libcontainer container c9b4b12106685aff306622811e13227e95b402cf220dd13da97b40ae8449383c. Oct 29 05:34:18.118952 containerd[1601]: time="2025-10-29T05:34:18.118857257Z" level=info msg="StartContainer for \"c9b4b12106685aff306622811e13227e95b402cf220dd13da97b40ae8449383c\" returns successfully" Oct 29 05:34:18.206665 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 29 05:34:18.207563 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Oct 29 05:34:18.408702 kubelet[2777]: I1029 05:34:18.408646 2777 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/77f8cc32-a2bc-482d-9427-8912a3aa5e90-whisker-backend-key-pair\") pod \"77f8cc32-a2bc-482d-9427-8912a3aa5e90\" (UID: \"77f8cc32-a2bc-482d-9427-8912a3aa5e90\") " Oct 29 05:34:18.408702 kubelet[2777]: I1029 05:34:18.408713 2777 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhglb\" (UniqueName: \"kubernetes.io/projected/77f8cc32-a2bc-482d-9427-8912a3aa5e90-kube-api-access-dhglb\") pod \"77f8cc32-a2bc-482d-9427-8912a3aa5e90\" (UID: \"77f8cc32-a2bc-482d-9427-8912a3aa5e90\") " Oct 29 05:34:18.409409 kubelet[2777]: I1029 05:34:18.408751 2777 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77f8cc32-a2bc-482d-9427-8912a3aa5e90-whisker-ca-bundle\") pod \"77f8cc32-a2bc-482d-9427-8912a3aa5e90\" (UID: \"77f8cc32-a2bc-482d-9427-8912a3aa5e90\") " Oct 29 05:34:18.410379 kubelet[2777]: I1029 05:34:18.410329 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77f8cc32-a2bc-482d-9427-8912a3aa5e90-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "77f8cc32-a2bc-482d-9427-8912a3aa5e90" (UID: "77f8cc32-a2bc-482d-9427-8912a3aa5e90"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 05:34:18.413672 kubelet[2777]: I1029 05:34:18.413620 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77f8cc32-a2bc-482d-9427-8912a3aa5e90-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "77f8cc32-a2bc-482d-9427-8912a3aa5e90" (UID: "77f8cc32-a2bc-482d-9427-8912a3aa5e90"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 05:34:18.416693 kubelet[2777]: I1029 05:34:18.416544 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77f8cc32-a2bc-482d-9427-8912a3aa5e90-kube-api-access-dhglb" (OuterVolumeSpecName: "kube-api-access-dhglb") pod "77f8cc32-a2bc-482d-9427-8912a3aa5e90" (UID: "77f8cc32-a2bc-482d-9427-8912a3aa5e90"). InnerVolumeSpecName "kube-api-access-dhglb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 05:34:18.509184 kubelet[2777]: I1029 05:34:18.509127 2777 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/77f8cc32-a2bc-482d-9427-8912a3aa5e90-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 29 05:34:18.509184 kubelet[2777]: I1029 05:34:18.509174 2777 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dhglb\" (UniqueName: \"kubernetes.io/projected/77f8cc32-a2bc-482d-9427-8912a3aa5e90-kube-api-access-dhglb\") on node \"localhost\" DevicePath \"\"" Oct 29 05:34:18.509184 kubelet[2777]: I1029 05:34:18.509188 2777 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77f8cc32-a2bc-482d-9427-8912a3aa5e90-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 29 05:34:18.800741 kubelet[2777]: E1029 05:34:18.800683 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:18.807473 systemd[1]: Removed slice kubepods-besteffort-pod77f8cc32_a2bc_482d_9427_8912a3aa5e90.slice - libcontainer container kubepods-besteffort-pod77f8cc32_a2bc_482d_9427_8912a3aa5e90.slice. Oct 29 05:34:18.897326 containerd[1601]: time="2025-10-29T05:34:18.897233563Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9b4b12106685aff306622811e13227e95b402cf220dd13da97b40ae8449383c\" id:\"c79e21b612f9746d572822bf8ac8aa607b5aacd521399f2e111f99ef477dca6c\" pid:3923 exit_status:1 exited_at:{seconds:1761716058 nanos:896632173}" Oct 29 05:34:18.977612 systemd[1]: var-lib-kubelet-pods-77f8cc32\x2da2bc\x2d482d\x2d9427\x2d8912a3aa5e90-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddhglb.mount: Deactivated successfully. Oct 29 05:34:18.977764 systemd[1]: var-lib-kubelet-pods-77f8cc32\x2da2bc\x2d482d\x2d9427\x2d8912a3aa5e90-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 29 05:34:19.191972 kubelet[2777]: I1029 05:34:19.191625 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jf7l5" podStartSLOduration=2.155421112 podStartE2EDuration="17.191608676s" podCreationTimestamp="2025-10-29 05:34:02 +0000 UTC" firstStartedPulling="2025-10-29 05:34:02.934557213 +0000 UTC m=+19.358678072" lastFinishedPulling="2025-10-29 05:34:17.970744777 +0000 UTC m=+34.394865636" observedRunningTime="2025-10-29 05:34:18.97150827 +0000 UTC m=+35.395629139" watchObservedRunningTime="2025-10-29 05:34:19.191608676 +0000 UTC m=+35.615729535" Oct 29 05:34:19.614136 systemd[1]: Created slice kubepods-besteffort-pod970fb908_fc28_49f1_87f4_48f55e612234.slice - libcontainer container kubepods-besteffort-pod970fb908_fc28_49f1_87f4_48f55e612234.slice. 
Oct 29 05:34:19.680299 kubelet[2777]: I1029 05:34:19.679615 2777 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77f8cc32-a2bc-482d-9427-8912a3aa5e90" path="/var/lib/kubelet/pods/77f8cc32-a2bc-482d-9427-8912a3aa5e90/volumes" Oct 29 05:34:19.718117 kubelet[2777]: I1029 05:34:19.718002 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/970fb908-fc28-49f1-87f4-48f55e612234-whisker-backend-key-pair\") pod \"whisker-77c85fccc-5vb2w\" (UID: \"970fb908-fc28-49f1-87f4-48f55e612234\") " pod="calico-system/whisker-77c85fccc-5vb2w" Oct 29 05:34:19.718117 kubelet[2777]: I1029 05:34:19.718132 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/970fb908-fc28-49f1-87f4-48f55e612234-whisker-ca-bundle\") pod \"whisker-77c85fccc-5vb2w\" (UID: \"970fb908-fc28-49f1-87f4-48f55e612234\") " pod="calico-system/whisker-77c85fccc-5vb2w" Oct 29 05:34:19.718479 kubelet[2777]: I1029 05:34:19.718158 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92h8t\" (UniqueName: \"kubernetes.io/projected/970fb908-fc28-49f1-87f4-48f55e612234-kube-api-access-92h8t\") pod \"whisker-77c85fccc-5vb2w\" (UID: \"970fb908-fc28-49f1-87f4-48f55e612234\") " pod="calico-system/whisker-77c85fccc-5vb2w" Oct 29 05:34:19.807312 kubelet[2777]: E1029 05:34:19.807254 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:19.937314 containerd[1601]: time="2025-10-29T05:34:19.937159763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9b4b12106685aff306622811e13227e95b402cf220dd13da97b40ae8449383c\" id:\"399512a9a1c083a327dea892f4a519d8a5f33c43e7415675d67163cfcd452d59\" pid:4052 exit_status:1 exited_at:{seconds:1761716059 nanos:930744005}" Oct 29 05:34:20.041866 containerd[1601]: time="2025-10-29T05:34:20.041775297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77c85fccc-5vb2w,Uid:970fb908-fc28-49f1-87f4-48f55e612234,Namespace:calico-system,Attempt:0,}" Oct 29 05:34:20.430117 systemd-networkd[1508]: cali7c658c15b4e: Link UP Oct 29 05:34:20.430561 systemd-networkd[1508]: cali7c658c15b4e: Gained carrier Oct 29 05:34:20.458419 containerd[1601]: 2025-10-29 05:34:20.112 [INFO][4066] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 05:34:20.458419 containerd[1601]: 2025-10-29 05:34:20.163 [INFO][4066] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--77c85fccc--5vb2w-eth0 whisker-77c85fccc- calico-system 970fb908-fc28-49f1-87f4-48f55e612234 895 0 2025-10-29 05:34:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77c85fccc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-77c85fccc-5vb2w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7c658c15b4e [] [] }} ContainerID="706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" Namespace="calico-system" Pod="whisker-77c85fccc-5vb2w" WorkloadEndpoint="localhost-k8s-whisker--77c85fccc--5vb2w-" Oct 29 05:34:20.458419 containerd[1601]: 2025-10-29 05:34:20.164 
[INFO][4066] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" Namespace="calico-system" Pod="whisker-77c85fccc-5vb2w" WorkloadEndpoint="localhost-k8s-whisker--77c85fccc--5vb2w-eth0" Oct 29 05:34:20.458419 containerd[1601]: 2025-10-29 05:34:20.348 [INFO][4080] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" HandleID="k8s-pod-network.706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" Workload="localhost-k8s-whisker--77c85fccc--5vb2w-eth0" Oct 29 05:34:20.459928 containerd[1601]: 2025-10-29 05:34:20.350 [INFO][4080] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" HandleID="k8s-pod-network.706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" Workload="localhost-k8s-whisker--77c85fccc--5vb2w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001246e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-77c85fccc-5vb2w", "timestamp":"2025-10-29 05:34:20.348404757 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 05:34:20.459928 containerd[1601]: 2025-10-29 05:34:20.350 [INFO][4080] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 05:34:20.459928 containerd[1601]: 2025-10-29 05:34:20.350 [INFO][4080] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 05:34:20.459928 containerd[1601]: 2025-10-29 05:34:20.350 [INFO][4080] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 05:34:20.459928 containerd[1601]: 2025-10-29 05:34:20.369 [INFO][4080] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" host="localhost" Oct 29 05:34:20.459928 containerd[1601]: 2025-10-29 05:34:20.375 [INFO][4080] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 05:34:20.459928 containerd[1601]: 2025-10-29 05:34:20.385 [INFO][4080] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 05:34:20.459928 containerd[1601]: 2025-10-29 05:34:20.388 [INFO][4080] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 05:34:20.459928 containerd[1601]: 2025-10-29 05:34:20.391 [INFO][4080] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 05:34:20.459928 containerd[1601]: 2025-10-29 05:34:20.391 [INFO][4080] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" host="localhost" Oct 29 05:34:20.460963 containerd[1601]: 2025-10-29 05:34:20.393 [INFO][4080] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c Oct 29 05:34:20.460963 containerd[1601]: 2025-10-29 05:34:20.397 [INFO][4080] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" host="localhost" Oct 29 05:34:20.460963 containerd[1601]: 
2025-10-29 05:34:20.405 [INFO][4080] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" host="localhost" Oct 29 05:34:20.460963 containerd[1601]: 2025-10-29 05:34:20.406 [INFO][4080] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" host="localhost" Oct 29 05:34:20.460963 containerd[1601]: 2025-10-29 05:34:20.406 [INFO][4080] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 05:34:20.460963 containerd[1601]: 2025-10-29 05:34:20.406 [INFO][4080] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" HandleID="k8s-pod-network.706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" Workload="localhost-k8s-whisker--77c85fccc--5vb2w-eth0" Oct 29 05:34:20.461656 containerd[1601]: 2025-10-29 05:34:20.412 [INFO][4066] cni-plugin/k8s.go 418: Populated endpoint ContainerID="706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" Namespace="calico-system" Pod="whisker-77c85fccc-5vb2w" WorkloadEndpoint="localhost-k8s-whisker--77c85fccc--5vb2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77c85fccc--5vb2w-eth0", GenerateName:"whisker-77c85fccc-", Namespace:"calico-system", SelfLink:"", UID:"970fb908-fc28-49f1-87f4-48f55e612234", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 5, 34, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77c85fccc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-77c85fccc-5vb2w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7c658c15b4e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 05:34:20.461656 containerd[1601]: 2025-10-29 05:34:20.412 [INFO][4066] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" Namespace="calico-system" Pod="whisker-77c85fccc-5vb2w" WorkloadEndpoint="localhost-k8s-whisker--77c85fccc--5vb2w-eth0" Oct 29 05:34:20.461987 containerd[1601]: 2025-10-29 05:34:20.412 [INFO][4066] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c658c15b4e ContainerID="706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" Namespace="calico-system" Pod="whisker-77c85fccc-5vb2w" WorkloadEndpoint="localhost-k8s-whisker--77c85fccc--5vb2w-eth0" Oct 29 05:34:20.461987 containerd[1601]: 2025-10-29 05:34:20.428 [INFO][4066] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" Namespace="calico-system" Pod="whisker-77c85fccc-5vb2w" WorkloadEndpoint="localhost-k8s-whisker--77c85fccc--5vb2w-eth0" Oct 29 05:34:20.462257 containerd[1601]: 2025-10-29 05:34:20.436 [INFO][4066] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" Namespace="calico-system" Pod="whisker-77c85fccc-5vb2w" WorkloadEndpoint="localhost-k8s-whisker--77c85fccc--5vb2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77c85fccc--5vb2w-eth0", GenerateName:"whisker-77c85fccc-", Namespace:"calico-system", SelfLink:"", UID:"970fb908-fc28-49f1-87f4-48f55e612234", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 5, 34, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77c85fccc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c", Pod:"whisker-77c85fccc-5vb2w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7c658c15b4e", MAC:"76:a1:06:66:e2:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 05:34:20.462517 containerd[1601]: 2025-10-29 05:34:20.451 [INFO][4066] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" Namespace="calico-system" Pod="whisker-77c85fccc-5vb2w" WorkloadEndpoint="localhost-k8s-whisker--77c85fccc--5vb2w-eth0" Oct 29 05:34:20.537040 containerd[1601]: time="2025-10-29T05:34:20.536951636Z" level=info msg="connecting to shim 706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c" address="unix:///run/containerd/s/0d3c579634d2212896d17981781cedff6655d5c27ee18672e4d747547469b766" namespace=k8s.io protocol=ttrpc version=3 Oct 29 05:34:20.564249 systemd[1]: Started cri-containerd-706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c.scope - libcontainer container 706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c. 
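In the IPAM trace above, Calico claims 192.168.88.129/26 for whisker-77c85fccc-5vb2w out of the node's affine block 192.168.88.128/26. A short standard-library-only sketch that checks the assignment against that block and shows how many addresses a /26 block spans (illustrative only, not Calico's IPAM implementation):

package main

import (
    "fmt"
    "net/netip"
)

func main() {
    // Block and first assignment taken from the ipam/ipam.go log lines above.
    block := netip.MustParsePrefix("192.168.88.128/26")
    assigned := netip.MustParseAddr("192.168.88.129")

    fmt.Println("address inside block:", block.Contains(assigned)) // true
    // A /26 spans 2^(32-26) = 64 addresses; Calico hands these out per node-affine block.
    fmt.Println("block size:", 1<<(32-block.Bits()))
}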
Oct 29 05:34:20.578109 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 05:34:20.731842 containerd[1601]: time="2025-10-29T05:34:20.731649595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77c85fccc-5vb2w,Uid:970fb908-fc28-49f1-87f4-48f55e612234,Namespace:calico-system,Attempt:0,} returns sandbox id \"706206a6f7550f5cf60f773dcd9d989adc2452a4956a6a8cb9efdafb70ef3e9c\"" Oct 29 05:34:20.733684 containerd[1601]: time="2025-10-29T05:34:20.733633030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 05:34:21.136488 containerd[1601]: time="2025-10-29T05:34:21.136417658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:21.138736 containerd[1601]: time="2025-10-29T05:34:21.138683352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 05:34:21.148199 containerd[1601]: time="2025-10-29T05:34:21.148148353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 05:34:21.148523 kubelet[2777]: E1029 05:34:21.148463 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 05:34:21.148966 kubelet[2777]: E1029 05:34:21.148543 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 05:34:21.148966 kubelet[2777]: E1029 05:34:21.148716 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-77c85fccc-5vb2w_calico-system(970fb908-fc28-49f1-87f4-48f55e612234): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:21.149669 containerd[1601]: time="2025-10-29T05:34:21.149627961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 05:34:21.510827 containerd[1601]: time="2025-10-29T05:34:21.510606298Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:21.512054 containerd[1601]: time="2025-10-29T05:34:21.512005776Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 05:34:21.512142 containerd[1601]: time="2025-10-29T05:34:21.512065688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
active requests=0, bytes read=85" Oct 29 05:34:21.512403 kubelet[2777]: E1029 05:34:21.512321 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 05:34:21.512403 kubelet[2777]: E1029 05:34:21.512390 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 05:34:21.512532 kubelet[2777]: E1029 05:34:21.512495 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-77c85fccc-5vb2w_calico-system(970fb908-fc28-49f1-87f4-48f55e612234): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:21.512622 kubelet[2777]: E1029 05:34:21.512565 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c85fccc-5vb2w" podUID="970fb908-fc28-49f1-87f4-48f55e612234" Oct 29 05:34:21.608404 systemd-networkd[1508]: cali7c658c15b4e: Gained IPv6LL Oct 29 05:34:21.818777 kubelet[2777]: E1029 05:34:21.818700 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c85fccc-5vb2w" podUID="970fb908-fc28-49f1-87f4-48f55e612234" Oct 29 05:34:22.466728 systemd[1]: Started sshd@9-10.0.0.106:22-10.0.0.1:44940.service - OpenSSH per-connection server daemon 
(10.0.0.1:44940). Oct 29 05:34:22.712427 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 44940 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:34:22.714297 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:34:22.719665 systemd-logind[1587]: New session 10 of user core. Oct 29 05:34:22.727209 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 29 05:34:23.051877 sshd[4199]: Connection closed by 10.0.0.1 port 44940 Oct 29 05:34:23.052316 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Oct 29 05:34:23.057041 systemd[1]: sshd@9-10.0.0.106:22-10.0.0.1:44940.service: Deactivated successfully. Oct 29 05:34:23.059389 systemd[1]: session-10.scope: Deactivated successfully. Oct 29 05:34:23.060360 systemd-logind[1587]: Session 10 logged out. Waiting for processes to exit. Oct 29 05:34:23.061674 systemd-logind[1587]: Removed session 10. Oct 29 05:34:24.678281 kubelet[2777]: E1029 05:34:24.678206 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:24.678866 containerd[1601]: time="2025-10-29T05:34:24.678803815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xcs92,Uid:3b87e74c-280a-4a81-9ad5-b4bf48d47f03,Namespace:kube-system,Attempt:0,}" Oct 29 05:34:24.680767 containerd[1601]: time="2025-10-29T05:34:24.680688682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799b5c4b47-vw8qq,Uid:5700d2a9-15b1-43c2-8972-37e1ebd6aa09,Namespace:calico-apiserver,Attempt:0,}" Oct 29 05:34:24.682718 containerd[1601]: time="2025-10-29T05:34:24.682657699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68fbd9f956-5l7nj,Uid:ca0b83d1-3c73-4368-b48e-26b292faf856,Namespace:calico-system,Attempt:0,}" Oct 29 05:34:24.802238 systemd-networkd[1508]: cali96d4736d554: Link UP Oct 29 05:34:24.802510 systemd-networkd[1508]: cali96d4736d554: Gained carrier Oct 29 05:34:24.818362 containerd[1601]: 2025-10-29 05:34:24.716 [INFO][4271] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 05:34:24.818362 containerd[1601]: 2025-10-29 05:34:24.728 [INFO][4271] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--799b5c4b47--vw8qq-eth0 calico-apiserver-799b5c4b47- calico-apiserver 5700d2a9-15b1-43c2-8972-37e1ebd6aa09 815 0 2025-10-29 05:33:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:799b5c4b47 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-799b5c4b47-vw8qq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali96d4736d554 [] [] }} ContainerID="027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" Namespace="calico-apiserver" Pod="calico-apiserver-799b5c4b47-vw8qq" WorkloadEndpoint="localhost-k8s-calico--apiserver--799b5c4b47--vw8qq-" Oct 29 05:34:24.818362 containerd[1601]: 2025-10-29 05:34:24.728 [INFO][4271] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" Namespace="calico-apiserver" Pod="calico-apiserver-799b5c4b47-vw8qq" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--799b5c4b47--vw8qq-eth0" Oct 29 05:34:24.818362 containerd[1601]: 2025-10-29 05:34:24.766 [INFO][4309] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" HandleID="k8s-pod-network.027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" Workload="localhost-k8s-calico--apiserver--799b5c4b47--vw8qq-eth0" Oct 29 05:34:24.818655 containerd[1601]: 2025-10-29 05:34:24.766 [INFO][4309] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" HandleID="k8s-pod-network.027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" Workload="localhost-k8s-calico--apiserver--799b5c4b47--vw8qq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ef00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-799b5c4b47-vw8qq", "timestamp":"2025-10-29 05:34:24.766653821 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 05:34:24.818655 containerd[1601]: 2025-10-29 05:34:24.767 [INFO][4309] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 05:34:24.818655 containerd[1601]: 2025-10-29 05:34:24.767 [INFO][4309] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 05:34:24.818655 containerd[1601]: 2025-10-29 05:34:24.767 [INFO][4309] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 05:34:24.818655 containerd[1601]: 2025-10-29 05:34:24.774 [INFO][4309] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" host="localhost" Oct 29 05:34:24.818655 containerd[1601]: 2025-10-29 05:34:24.778 [INFO][4309] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 05:34:24.818655 containerd[1601]: 2025-10-29 05:34:24.782 [INFO][4309] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 05:34:24.818655 containerd[1601]: 2025-10-29 05:34:24.784 [INFO][4309] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 05:34:24.818655 containerd[1601]: 2025-10-29 05:34:24.786 [INFO][4309] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 05:34:24.818655 containerd[1601]: 2025-10-29 05:34:24.786 [INFO][4309] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" host="localhost" Oct 29 05:34:24.818944 containerd[1601]: 2025-10-29 05:34:24.787 [INFO][4309] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa Oct 29 05:34:24.818944 containerd[1601]: 2025-10-29 05:34:24.790 [INFO][4309] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" host="localhost" Oct 29 05:34:24.818944 containerd[1601]: 2025-10-29 05:34:24.795 [INFO][4309] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" host="localhost" Oct 29 05:34:24.818944 containerd[1601]: 2025-10-29 05:34:24.796 [INFO][4309] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" host="localhost" Oct 29 05:34:24.818944 containerd[1601]: 2025-10-29 05:34:24.796 [INFO][4309] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 05:34:24.818944 containerd[1601]: 2025-10-29 05:34:24.796 [INFO][4309] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" HandleID="k8s-pod-network.027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" Workload="localhost-k8s-calico--apiserver--799b5c4b47--vw8qq-eth0" Oct 29 05:34:24.819130 containerd[1601]: 2025-10-29 05:34:24.799 [INFO][4271] cni-plugin/k8s.go 418: Populated endpoint ContainerID="027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" Namespace="calico-apiserver" Pod="calico-apiserver-799b5c4b47-vw8qq" WorkloadEndpoint="localhost-k8s-calico--apiserver--799b5c4b47--vw8qq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--799b5c4b47--vw8qq-eth0", GenerateName:"calico-apiserver-799b5c4b47-", Namespace:"calico-apiserver", SelfLink:"", UID:"5700d2a9-15b1-43c2-8972-37e1ebd6aa09", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 5, 33, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799b5c4b47", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-799b5c4b47-vw8qq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96d4736d554", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 05:34:24.819210 containerd[1601]: 2025-10-29 05:34:24.799 [INFO][4271] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" Namespace="calico-apiserver" Pod="calico-apiserver-799b5c4b47-vw8qq" WorkloadEndpoint="localhost-k8s-calico--apiserver--799b5c4b47--vw8qq-eth0" Oct 29 05:34:24.819210 containerd[1601]: 2025-10-29 05:34:24.799 [INFO][4271] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96d4736d554 ContainerID="027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" Namespace="calico-apiserver" Pod="calico-apiserver-799b5c4b47-vw8qq" WorkloadEndpoint="localhost-k8s-calico--apiserver--799b5c4b47--vw8qq-eth0" Oct 29 05:34:24.819210 containerd[1601]: 2025-10-29 
05:34:24.801 [INFO][4271] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" Namespace="calico-apiserver" Pod="calico-apiserver-799b5c4b47-vw8qq" WorkloadEndpoint="localhost-k8s-calico--apiserver--799b5c4b47--vw8qq-eth0" Oct 29 05:34:24.819295 containerd[1601]: 2025-10-29 05:34:24.802 [INFO][4271] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" Namespace="calico-apiserver" Pod="calico-apiserver-799b5c4b47-vw8qq" WorkloadEndpoint="localhost-k8s-calico--apiserver--799b5c4b47--vw8qq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--799b5c4b47--vw8qq-eth0", GenerateName:"calico-apiserver-799b5c4b47-", Namespace:"calico-apiserver", SelfLink:"", UID:"5700d2a9-15b1-43c2-8972-37e1ebd6aa09", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 5, 33, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799b5c4b47", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa", Pod:"calico-apiserver-799b5c4b47-vw8qq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96d4736d554", MAC:"9a:e3:60:18:c5:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 05:34:24.819404 containerd[1601]: 2025-10-29 05:34:24.812 [INFO][4271] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" Namespace="calico-apiserver" Pod="calico-apiserver-799b5c4b47-vw8qq" WorkloadEndpoint="localhost-k8s-calico--apiserver--799b5c4b47--vw8qq-eth0" Oct 29 05:34:24.845049 containerd[1601]: time="2025-10-29T05:34:24.844991964Z" level=info msg="connecting to shim 027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa" address="unix:///run/containerd/s/42eb30fbdbff92e1aed256dc63fcab791c3013be18eba50f614d17aca3303e53" namespace=k8s.io protocol=ttrpc version=3 Oct 29 05:34:24.876258 systemd[1]: Started cri-containerd-027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa.scope - libcontainer container 027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa. 
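[Editorial note] The IPAM entries above show the node reusing its affinity for the 192.168.88.128/26 block and claiming 192.168.88.130 for the calico-apiserver pod; the entries that follow hand out .131, .132 and .133 from the same block. As a quick sanity check of the subnet math only (nothing Calico-specific, and the addresses are simply copied from the log), a short Python sketch with the standard ipaddress module confirms the block spans 64 addresses and contains each claimed IP:

import ipaddress

# Block the node holds an affinity for (from the "Trying affinity" entries above).
block = ipaddress.ip_network("192.168.88.128/26")

# Addresses the IPAM plugin reports claiming for the pods in this log.
claimed = ["192.168.88.130", "192.168.88.131", "192.168.88.132", "192.168.88.133"]

print(block.num_addresses)  # 64 -> a /26 spans .128 through .191
for ip in claimed:
    # Each claimed address must fall inside the affine block.
    assert ipaddress.ip_address(ip) in block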
Oct 29 05:34:24.895016 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 05:34:24.915207 systemd-networkd[1508]: cali3175cecf1d7: Link UP Oct 29 05:34:24.915430 systemd-networkd[1508]: cali3175cecf1d7: Gained carrier Oct 29 05:34:24.929905 containerd[1601]: 2025-10-29 05:34:24.716 [INFO][4263] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 05:34:24.929905 containerd[1601]: 2025-10-29 05:34:24.731 [INFO][4263] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--xcs92-eth0 coredns-66bc5c9577- kube-system 3b87e74c-280a-4a81-9ad5-b4bf48d47f03 823 0 2025-10-29 05:33:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-xcs92 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3175cecf1d7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" Namespace="kube-system" Pod="coredns-66bc5c9577-xcs92" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xcs92-" Oct 29 05:34:24.929905 containerd[1601]: 2025-10-29 05:34:24.731 [INFO][4263] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" Namespace="kube-system" Pod="coredns-66bc5c9577-xcs92" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xcs92-eth0" Oct 29 05:34:24.929905 containerd[1601]: 2025-10-29 05:34:24.768 [INFO][4307] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" HandleID="k8s-pod-network.9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" Workload="localhost-k8s-coredns--66bc5c9577--xcs92-eth0" Oct 29 05:34:24.930257 containerd[1601]: 2025-10-29 05:34:24.768 [INFO][4307] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" HandleID="k8s-pod-network.9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" Workload="localhost-k8s-coredns--66bc5c9577--xcs92-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138da0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-xcs92", "timestamp":"2025-10-29 05:34:24.768661681 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 05:34:24.930257 containerd[1601]: 2025-10-29 05:34:24.768 [INFO][4307] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 05:34:24.930257 containerd[1601]: 2025-10-29 05:34:24.796 [INFO][4307] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 05:34:24.930257 containerd[1601]: 2025-10-29 05:34:24.797 [INFO][4307] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 05:34:24.930257 containerd[1601]: 2025-10-29 05:34:24.875 [INFO][4307] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" host="localhost" Oct 29 05:34:24.930257 containerd[1601]: 2025-10-29 05:34:24.884 [INFO][4307] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 05:34:24.930257 containerd[1601]: 2025-10-29 05:34:24.889 [INFO][4307] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 05:34:24.930257 containerd[1601]: 2025-10-29 05:34:24.890 [INFO][4307] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 05:34:24.930257 containerd[1601]: 2025-10-29 05:34:24.893 [INFO][4307] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 05:34:24.930257 containerd[1601]: 2025-10-29 05:34:24.893 [INFO][4307] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" host="localhost" Oct 29 05:34:24.930567 containerd[1601]: 2025-10-29 05:34:24.894 [INFO][4307] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc Oct 29 05:34:24.930567 containerd[1601]: 2025-10-29 05:34:24.898 [INFO][4307] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" host="localhost" Oct 29 05:34:24.930567 containerd[1601]: 2025-10-29 05:34:24.905 [INFO][4307] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" host="localhost" Oct 29 05:34:24.930567 containerd[1601]: 2025-10-29 05:34:24.905 [INFO][4307] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" host="localhost" Oct 29 05:34:24.930567 containerd[1601]: 2025-10-29 05:34:24.905 [INFO][4307] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
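[Editorial note] The repeated kubelet dns.go "Nameserver limits exceeded" errors above indicate that the host resolv.conf lists more nameservers than the resolver limit of three, so kubelet keeps only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) when building pod DNS config. A minimal sketch of that truncation, assuming a hypothetical fourth server 9.9.9.9 that does not appear in the log (the actual host resolv.conf is not shown):

# Hypothetical resolv.conf contents; only the applied line
# "1.1.1.1 1.0.0.1 8.8.8.8" is visible in the log above.
resolv_conf = """
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
"""

MAX_NAMESERVERS = 3  # classic resolver limit that kubelet enforces

servers = [line.split()[1] for line in resolv_conf.splitlines()
           if line.strip().startswith("nameserver")]
print("applied nameserver line is:", " ".join(servers[:MAX_NAMESERVERS]))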
Oct 29 05:34:24.930567 containerd[1601]: 2025-10-29 05:34:24.905 [INFO][4307] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" HandleID="k8s-pod-network.9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" Workload="localhost-k8s-coredns--66bc5c9577--xcs92-eth0" Oct 29 05:34:24.930745 containerd[1601]: 2025-10-29 05:34:24.909 [INFO][4263] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" Namespace="kube-system" Pod="coredns-66bc5c9577-xcs92" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xcs92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xcs92-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3b87e74c-280a-4a81-9ad5-b4bf48d47f03", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 5, 33, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-xcs92", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3175cecf1d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 05:34:24.930745 containerd[1601]: 2025-10-29 05:34:24.910 [INFO][4263] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" Namespace="kube-system" Pod="coredns-66bc5c9577-xcs92" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xcs92-eth0" Oct 29 05:34:24.930745 containerd[1601]: 2025-10-29 05:34:24.910 [INFO][4263] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3175cecf1d7 ContainerID="9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" Namespace="kube-system" Pod="coredns-66bc5c9577-xcs92" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xcs92-eth0" Oct 29 05:34:24.930745 containerd[1601]: 2025-10-29 05:34:24.913 
[INFO][4263] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" Namespace="kube-system" Pod="coredns-66bc5c9577-xcs92" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xcs92-eth0" Oct 29 05:34:24.930745 containerd[1601]: 2025-10-29 05:34:24.914 [INFO][4263] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" Namespace="kube-system" Pod="coredns-66bc5c9577-xcs92" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xcs92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xcs92-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3b87e74c-280a-4a81-9ad5-b4bf48d47f03", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 5, 33, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc", Pod:"coredns-66bc5c9577-xcs92", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3175cecf1d7", MAC:"9a:bb:a6:1b:05:29", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 05:34:24.930745 containerd[1601]: 2025-10-29 05:34:24.925 [INFO][4263] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" Namespace="kube-system" Pod="coredns-66bc5c9577-xcs92" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xcs92-eth0" Oct 29 05:34:24.937346 containerd[1601]: time="2025-10-29T05:34:24.937291403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799b5c4b47-vw8qq,Uid:5700d2a9-15b1-43c2-8972-37e1ebd6aa09,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"027b533f09a7f02f1b602b90aada6f1f04e983a73022b8b963aec37459fc78fa\"" Oct 29 05:34:24.939097 containerd[1601]: time="2025-10-29T05:34:24.939055134Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 05:34:24.960916 containerd[1601]: time="2025-10-29T05:34:24.960849886Z" level=info msg="connecting to shim 9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc" address="unix:///run/containerd/s/78fc9b19fb9c83e1baea715789757ddfa748621ca8cdd29d71777f72cce7efc9" namespace=k8s.io protocol=ttrpc version=3 Oct 29 05:34:24.989252 systemd[1]: Started cri-containerd-9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc.scope - libcontainer container 9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc. Oct 29 05:34:25.004619 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 05:34:25.012891 systemd-networkd[1508]: calicdc3e709532: Link UP Oct 29 05:34:25.013505 systemd-networkd[1508]: calicdc3e709532: Gained carrier Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:24.720 [INFO][4282] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:24.732 [INFO][4282] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--68fbd9f956--5l7nj-eth0 calico-kube-controllers-68fbd9f956- calico-system ca0b83d1-3c73-4368-b48e-26b292faf856 821 0 2025-10-29 05:34:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68fbd9f956 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-68fbd9f956-5l7nj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicdc3e709532 [] [] }} ContainerID="ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" Namespace="calico-system" Pod="calico-kube-controllers-68fbd9f956-5l7nj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68fbd9f956--5l7nj-" Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:24.732 [INFO][4282] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" Namespace="calico-system" Pod="calico-kube-controllers-68fbd9f956-5l7nj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68fbd9f956--5l7nj-eth0" Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:24.771 [INFO][4306] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" HandleID="k8s-pod-network.ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" Workload="localhost-k8s-calico--kube--controllers--68fbd9f956--5l7nj-eth0" Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:24.771 [INFO][4306] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" HandleID="k8s-pod-network.ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" Workload="localhost-k8s-calico--kube--controllers--68fbd9f956--5l7nj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138230), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-68fbd9f956-5l7nj", "timestamp":"2025-10-29 05:34:24.771135775 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:24.771 [INFO][4306] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:24.905 [INFO][4306] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:24.906 [INFO][4306] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:24.976 [INFO][4306] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" host="localhost" Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:24.984 [INFO][4306] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:24.988 [INFO][4306] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:24.991 [INFO][4306] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:24.993 [INFO][4306] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:24.993 [INFO][4306] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" host="localhost" Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:24.995 [INFO][4306] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:24.999 [INFO][4306] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" host="localhost" Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:25.004 [INFO][4306] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" host="localhost" Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:25.004 [INFO][4306] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" host="localhost" Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:25.004 [INFO][4306] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
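[Editorial note] In the WorkloadEndpoint dumps above, the coredns container ports are printed in hex (Port:0x35, 0x23c1, 0x1f90, 0x1ff5). Decoding them is plain arithmetic; the small sketch below, with values copied from the log, maps each back to the familiar decimal port:

# Port values exactly as printed in the WorkloadEndpointPort entries above.
hex_ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1,
             "liveness-probe": 0x1f90, "readiness-probe": 0x1ff5}

for name, port in hex_ports.items():
    # dns 53, dns-tcp 53, metrics 9153, liveness-probe 8080, readiness-probe 8181
    print(f"{name}: {port}")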
Oct 29 05:34:25.035960 containerd[1601]: 2025-10-29 05:34:25.004 [INFO][4306] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" HandleID="k8s-pod-network.ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" Workload="localhost-k8s-calico--kube--controllers--68fbd9f956--5l7nj-eth0" Oct 29 05:34:25.036544 containerd[1601]: 2025-10-29 05:34:25.009 [INFO][4282] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" Namespace="calico-system" Pod="calico-kube-controllers-68fbd9f956-5l7nj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68fbd9f956--5l7nj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68fbd9f956--5l7nj-eth0", GenerateName:"calico-kube-controllers-68fbd9f956-", Namespace:"calico-system", SelfLink:"", UID:"ca0b83d1-3c73-4368-b48e-26b292faf856", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 5, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68fbd9f956", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-68fbd9f956-5l7nj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicdc3e709532", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 05:34:25.036544 containerd[1601]: 2025-10-29 05:34:25.009 [INFO][4282] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" Namespace="calico-system" Pod="calico-kube-controllers-68fbd9f956-5l7nj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68fbd9f956--5l7nj-eth0" Oct 29 05:34:25.036544 containerd[1601]: 2025-10-29 05:34:25.009 [INFO][4282] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdc3e709532 ContainerID="ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" Namespace="calico-system" Pod="calico-kube-controllers-68fbd9f956-5l7nj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68fbd9f956--5l7nj-eth0" Oct 29 05:34:25.036544 containerd[1601]: 2025-10-29 05:34:25.014 [INFO][4282] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" Namespace="calico-system" Pod="calico-kube-controllers-68fbd9f956-5l7nj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68fbd9f956--5l7nj-eth0" Oct 29 05:34:25.036544 containerd[1601]: 2025-10-29 05:34:25.014 [INFO][4282] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" Namespace="calico-system" Pod="calico-kube-controllers-68fbd9f956-5l7nj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68fbd9f956--5l7nj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68fbd9f956--5l7nj-eth0", GenerateName:"calico-kube-controllers-68fbd9f956-", Namespace:"calico-system", SelfLink:"", UID:"ca0b83d1-3c73-4368-b48e-26b292faf856", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 5, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68fbd9f956", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc", Pod:"calico-kube-controllers-68fbd9f956-5l7nj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicdc3e709532", MAC:"de:08:78:87:8c:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 05:34:25.036544 containerd[1601]: 2025-10-29 05:34:25.025 [INFO][4282] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" Namespace="calico-system" Pod="calico-kube-controllers-68fbd9f956-5l7nj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68fbd9f956--5l7nj-eth0" Oct 29 05:34:25.046286 containerd[1601]: time="2025-10-29T05:34:25.044660105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xcs92,Uid:3b87e74c-280a-4a81-9ad5-b4bf48d47f03,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc\"" Oct 29 05:34:25.046961 kubelet[2777]: E1029 05:34:25.046927 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:25.052965 containerd[1601]: time="2025-10-29T05:34:25.052931351Z" level=info msg="CreateContainer within sandbox \"9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 05:34:25.064114 containerd[1601]: time="2025-10-29T05:34:25.063755578Z" level=info msg="connecting to shim ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc" address="unix:///run/containerd/s/d50e651bd47a80251ec10b0d3de6c8fd0cfb68893ce88095a5f04516e299f8d1" namespace=k8s.io protocol=ttrpc version=3 Oct 29 05:34:25.064456 containerd[1601]: time="2025-10-29T05:34:25.064413724Z" level=info msg="Container 
5375b7e3eb6498b9c63d92130df08336a5fa2ac6652dcce8d1bcdf4751103043: CDI devices from CRI Config.CDIDevices: []" Oct 29 05:34:25.072800 containerd[1601]: time="2025-10-29T05:34:25.072764478Z" level=info msg="CreateContainer within sandbox \"9b6f9c0029d0b0ad0e795dcd08c8889a0272b52860eb62e6a85cc9421d214fdc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5375b7e3eb6498b9c63d92130df08336a5fa2ac6652dcce8d1bcdf4751103043\"" Oct 29 05:34:25.074366 containerd[1601]: time="2025-10-29T05:34:25.074345386Z" level=info msg="StartContainer for \"5375b7e3eb6498b9c63d92130df08336a5fa2ac6652dcce8d1bcdf4751103043\"" Oct 29 05:34:25.076320 containerd[1601]: time="2025-10-29T05:34:25.076294865Z" level=info msg="connecting to shim 5375b7e3eb6498b9c63d92130df08336a5fa2ac6652dcce8d1bcdf4751103043" address="unix:///run/containerd/s/78fc9b19fb9c83e1baea715789757ddfa748621ca8cdd29d71777f72cce7efc9" protocol=ttrpc version=3 Oct 29 05:34:25.098224 systemd[1]: Started cri-containerd-ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc.scope - libcontainer container ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc. Oct 29 05:34:25.101972 systemd[1]: Started cri-containerd-5375b7e3eb6498b9c63d92130df08336a5fa2ac6652dcce8d1bcdf4751103043.scope - libcontainer container 5375b7e3eb6498b9c63d92130df08336a5fa2ac6652dcce8d1bcdf4751103043. Oct 29 05:34:25.117226 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 05:34:25.202065 containerd[1601]: time="2025-10-29T05:34:25.201588005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68fbd9f956-5l7nj,Uid:ca0b83d1-3c73-4368-b48e-26b292faf856,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee96b3f8d84b595430801c8e0bd443d96fb77fd416beba91491f378ed9bddafc\"" Oct 29 05:34:25.202640 containerd[1601]: time="2025-10-29T05:34:25.202599743Z" level=info msg="StartContainer for \"5375b7e3eb6498b9c63d92130df08336a5fa2ac6652dcce8d1bcdf4751103043\" returns successfully" Oct 29 05:34:25.244302 containerd[1601]: time="2025-10-29T05:34:25.244096374Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:25.245543 containerd[1601]: time="2025-10-29T05:34:25.245481965Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 05:34:25.245625 containerd[1601]: time="2025-10-29T05:34:25.245584858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 05:34:25.245736 kubelet[2777]: E1029 05:34:25.245697 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 05:34:25.245852 kubelet[2777]: E1029 05:34:25.245736 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 05:34:25.245907 kubelet[2777]: E1029 05:34:25.245870 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-799b5c4b47-vw8qq_calico-apiserver(5700d2a9-15b1-43c2-8972-37e1ebd6aa09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:25.245965 kubelet[2777]: E1029 05:34:25.245902 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b5c4b47-vw8qq" podUID="5700d2a9-15b1-43c2-8972-37e1ebd6aa09" Oct 29 05:34:25.246289 containerd[1601]: time="2025-10-29T05:34:25.246259154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 05:34:25.521667 kubelet[2777]: I1029 05:34:25.521473 2777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 05:34:25.522248 kubelet[2777]: E1029 05:34:25.522197 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:25.574711 containerd[1601]: time="2025-10-29T05:34:25.574648232Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:25.576839 containerd[1601]: time="2025-10-29T05:34:25.576771236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 05:34:25.576916 containerd[1601]: time="2025-10-29T05:34:25.576796754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 05:34:25.577137 kubelet[2777]: E1029 05:34:25.577057 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 05:34:25.577137 kubelet[2777]: E1029 05:34:25.577130 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 05:34:25.577332 kubelet[2777]: E1029 05:34:25.577217 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-68fbd9f956-5l7nj_calico-system(ca0b83d1-3c73-4368-b48e-26b292faf856): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:25.577332 kubelet[2777]: E1029 05:34:25.577255 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68fbd9f956-5l7nj" podUID="ca0b83d1-3c73-4368-b48e-26b292faf856" Oct 29 05:34:25.677435 containerd[1601]: time="2025-10-29T05:34:25.677355496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qkktn,Uid:11b4791e-97d9-4b28-b964-d007606a7e18,Namespace:calico-system,Attempt:0,}" Oct 29 05:34:25.679017 containerd[1601]: time="2025-10-29T05:34:25.678961641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pkk9m,Uid:de492ebe-e388-430f-a865-ba2ce27c1431,Namespace:calico-system,Attempt:0,}" Oct 29 05:34:25.681340 kubelet[2777]: E1029 05:34:25.681289 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:25.681844 containerd[1601]: time="2025-10-29T05:34:25.681764002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-srmc7,Uid:a9af4fbc-4377-4e63-8c55-f50471f996bb,Namespace:kube-system,Attempt:0,}" Oct 29 05:34:25.823244 systemd-networkd[1508]: cali28a6beb577e: Link UP Oct 29 05:34:25.823521 systemd-networkd[1508]: cali28a6beb577e: Gained carrier Oct 29 05:34:25.830473 kubelet[2777]: E1029 05:34:25.830435 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68fbd9f956-5l7nj" podUID="ca0b83d1-3c73-4368-b48e-26b292faf856" Oct 29 05:34:25.839035 kubelet[2777]: E1029 05:34:25.838995 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.722 [INFO][4563] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.736 [INFO][4563] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--srmc7-eth0 coredns-66bc5c9577- kube-system a9af4fbc-4377-4e63-8c55-f50471f996bb 822 0 2025-10-29 05:33:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-srmc7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali28a6beb577e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" Namespace="kube-system" Pod="coredns-66bc5c9577-srmc7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--srmc7-" Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.736 [INFO][4563] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" Namespace="kube-system" Pod="coredns-66bc5c9577-srmc7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--srmc7-eth0" Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.783 [INFO][4589] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" HandleID="k8s-pod-network.fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" Workload="localhost-k8s-coredns--66bc5c9577--srmc7-eth0" Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.783 [INFO][4589] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" HandleID="k8s-pod-network.fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" Workload="localhost-k8s-coredns--66bc5c9577--srmc7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005964e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-srmc7", "timestamp":"2025-10-29 05:34:25.783107136 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.783 [INFO][4589] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.783 [INFO][4589] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.783 [INFO][4589] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.791 [INFO][4589] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" host="localhost" Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.797 [INFO][4589] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.802 [INFO][4589] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.803 [INFO][4589] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.805 [INFO][4589] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.805 [INFO][4589] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" host="localhost" Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.807 [INFO][4589] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986 Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.810 [INFO][4589] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" host="localhost" Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.815 [INFO][4589] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" host="localhost" Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.815 [INFO][4589] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" host="localhost" Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.815 [INFO][4589] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
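[Editorial note] The whisker, whisker-backend, apiserver and kube-controllers pulls above all fail the same way: ghcr.io answers 404 Not Found for the v3.30.4 tags, containerd reports NotFound, and kubelet moves the containers from ErrImagePull into ImagePullBackOff. Until those tags exist in the registry, kubelet simply retries on a doubling backoff; the sketch below models that schedule under assumed defaults of a 10 s initial delay capped at 5 min (the actual kubelet backoff settings are not shown in this log):

# Assumed kubelet-style image pull backoff: start small, double, cap.
initial, cap = 10, 300          # seconds (assumed defaults, not taken from the log)

delay, total = initial, 0
for attempt in range(1, 8):
    total += delay
    print(f"attempt {attempt}: wait {delay}s (cumulative {total}s)")
    delay = min(delay * 2, cap)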
Oct 29 05:34:25.839709 containerd[1601]: 2025-10-29 05:34:25.815 [INFO][4589] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" HandleID="k8s-pod-network.fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" Workload="localhost-k8s-coredns--66bc5c9577--srmc7-eth0" Oct 29 05:34:25.840696 containerd[1601]: 2025-10-29 05:34:25.818 [INFO][4563] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" Namespace="kube-system" Pod="coredns-66bc5c9577-srmc7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--srmc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--srmc7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a9af4fbc-4377-4e63-8c55-f50471f996bb", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 5, 33, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-srmc7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28a6beb577e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 05:34:25.840696 containerd[1601]: 2025-10-29 05:34:25.819 [INFO][4563] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" Namespace="kube-system" Pod="coredns-66bc5c9577-srmc7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--srmc7-eth0" Oct 29 05:34:25.840696 containerd[1601]: 2025-10-29 05:34:25.819 [INFO][4563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28a6beb577e ContainerID="fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" Namespace="kube-system" Pod="coredns-66bc5c9577-srmc7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--srmc7-eth0" Oct 29 05:34:25.840696 containerd[1601]: 2025-10-29 05:34:25.823 
[INFO][4563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" Namespace="kube-system" Pod="coredns-66bc5c9577-srmc7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--srmc7-eth0" Oct 29 05:34:25.840696 containerd[1601]: 2025-10-29 05:34:25.823 [INFO][4563] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" Namespace="kube-system" Pod="coredns-66bc5c9577-srmc7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--srmc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--srmc7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a9af4fbc-4377-4e63-8c55-f50471f996bb", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 5, 33, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986", Pod:"coredns-66bc5c9577-srmc7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28a6beb577e", MAC:"b2:60:88:51:ae:e5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 05:34:25.840696 containerd[1601]: 2025-10-29 05:34:25.834 [INFO][4563] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" Namespace="kube-system" Pod="coredns-66bc5c9577-srmc7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--srmc7-eth0" Oct 29 05:34:25.844834 kubelet[2777]: E1029 05:34:25.844755 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:25.845522 kubelet[2777]: E1029 05:34:25.845478 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b5c4b47-vw8qq" podUID="5700d2a9-15b1-43c2-8972-37e1ebd6aa09" Oct 29 05:34:25.895231 kubelet[2777]: I1029 05:34:25.895158 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xcs92" podStartSLOduration=36.895131641 podStartE2EDuration="36.895131641s" podCreationTimestamp="2025-10-29 05:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 05:34:25.873052346 +0000 UTC m=+42.297173225" watchObservedRunningTime="2025-10-29 05:34:25.895131641 +0000 UTC m=+42.319252500" Oct 29 05:34:25.911693 containerd[1601]: time="2025-10-29T05:34:25.911635628Z" level=info msg="connecting to shim fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986" address="unix:///run/containerd/s/45974c191a0fca3125c0a52754b991052323c48eee5a9649f1059c797670cd7f" namespace=k8s.io protocol=ttrpc version=3 Oct 29 05:34:25.942312 systemd[1]: Started cri-containerd-fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986.scope - libcontainer container fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986. Oct 29 05:34:25.945801 systemd-networkd[1508]: cali148a9905162: Link UP Oct 29 05:34:25.947360 systemd-networkd[1508]: cali148a9905162: Gained carrier Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.733 [INFO][4558] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.745 [INFO][4558] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--pkk9m-eth0 goldmane-7c778bb748- calico-system de492ebe-e388-430f-a865-ba2ce27c1431 824 0 2025-10-29 05:34:00 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-pkk9m eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali148a9905162 [] [] }} ContainerID="bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" Namespace="calico-system" Pod="goldmane-7c778bb748-pkk9m" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pkk9m-" Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.745 [INFO][4558] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" Namespace="calico-system" Pod="goldmane-7c778bb748-pkk9m" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pkk9m-eth0" Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.791 [INFO][4595] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" HandleID="k8s-pod-network.bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" Workload="localhost-k8s-goldmane--7c778bb748--pkk9m-eth0" Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.792 [INFO][4595] ipam/ipam_plugin.go 275: Auto 
assigning IP ContainerID="bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" HandleID="k8s-pod-network.bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" Workload="localhost-k8s-goldmane--7c778bb748--pkk9m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003cf720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-pkk9m", "timestamp":"2025-10-29 05:34:25.791956146 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.792 [INFO][4595] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.815 [INFO][4595] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.815 [INFO][4595] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.891 [INFO][4595] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" host="localhost" Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.903 [INFO][4595] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.908 [INFO][4595] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.910 [INFO][4595] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.913 [INFO][4595] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.913 [INFO][4595] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" host="localhost" Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.915 [INFO][4595] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.922 [INFO][4595] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" host="localhost" Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.931 [INFO][4595] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" host="localhost" Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.931 [INFO][4595] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" host="localhost" Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.931 [INFO][4595] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 05:34:25.964550 containerd[1601]: 2025-10-29 05:34:25.931 [INFO][4595] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" HandleID="k8s-pod-network.bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" Workload="localhost-k8s-goldmane--7c778bb748--pkk9m-eth0" Oct 29 05:34:25.965704 containerd[1601]: 2025-10-29 05:34:25.936 [INFO][4558] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" Namespace="calico-system" Pod="goldmane-7c778bb748-pkk9m" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pkk9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--pkk9m-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"de492ebe-e388-430f-a865-ba2ce27c1431", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 5, 34, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-pkk9m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali148a9905162", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 05:34:25.965704 containerd[1601]: 2025-10-29 05:34:25.936 [INFO][4558] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" Namespace="calico-system" Pod="goldmane-7c778bb748-pkk9m" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pkk9m-eth0" Oct 29 05:34:25.965704 containerd[1601]: 2025-10-29 05:34:25.936 [INFO][4558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali148a9905162 ContainerID="bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" Namespace="calico-system" Pod="goldmane-7c778bb748-pkk9m" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pkk9m-eth0" Oct 29 05:34:25.965704 containerd[1601]: 2025-10-29 05:34:25.947 [INFO][4558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" Namespace="calico-system" Pod="goldmane-7c778bb748-pkk9m" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pkk9m-eth0" Oct 29 05:34:25.965704 containerd[1601]: 2025-10-29 05:34:25.947 [INFO][4558] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" Namespace="calico-system" Pod="goldmane-7c778bb748-pkk9m" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pkk9m-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--pkk9m-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"de492ebe-e388-430f-a865-ba2ce27c1431", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 5, 34, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc", Pod:"goldmane-7c778bb748-pkk9m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali148a9905162", MAC:"ce:21:9b:1a:a7:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 05:34:25.965704 containerd[1601]: 2025-10-29 05:34:25.959 [INFO][4558] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" Namespace="calico-system" Pod="goldmane-7c778bb748-pkk9m" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pkk9m-eth0" Oct 29 05:34:25.966932 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 05:34:26.013161 containerd[1601]: time="2025-10-29T05:34:26.013095522Z" level=info msg="connecting to shim bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc" address="unix:///run/containerd/s/2b1993083adbe62abc998518c3775168abcf30dff4a2efb3cf6a1834b8a9a36d" namespace=k8s.io protocol=ttrpc version=3 Oct 29 05:34:26.025238 systemd-networkd[1508]: cali96d4736d554: Gained IPv6LL Oct 29 05:34:26.049443 containerd[1601]: time="2025-10-29T05:34:26.049375456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-srmc7,Uid:a9af4fbc-4377-4e63-8c55-f50471f996bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986\"" Oct 29 05:34:26.050389 systemd[1]: Started cri-containerd-bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc.scope - libcontainer container bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc. 
Oct 29 05:34:26.050844 kubelet[2777]: E1029 05:34:26.050416 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:26.067942 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 05:34:26.112335 containerd[1601]: time="2025-10-29T05:34:26.112198000Z" level=info msg="CreateContainer within sandbox \"fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 05:34:26.152695 containerd[1601]: time="2025-10-29T05:34:26.152636671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pkk9m,Uid:de492ebe-e388-430f-a865-ba2ce27c1431,Namespace:calico-system,Attempt:0,} returns sandbox id \"bbe89015352d89442be2e519c5f6617508b30c728a6843843924e57343aebecc\"" Oct 29 05:34:26.153286 systemd-networkd[1508]: calicdc3e709532: Gained IPv6LL Oct 29 05:34:26.155280 containerd[1601]: time="2025-10-29T05:34:26.155223967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 05:34:26.421030 systemd-networkd[1508]: cali3661f6c84af: Link UP Oct 29 05:34:26.422489 systemd-networkd[1508]: cali3661f6c84af: Gained carrier Oct 29 05:34:26.431112 containerd[1601]: time="2025-10-29T05:34:26.430497977Z" level=info msg="Container 7e4600dcd41bbd78ed4254c6f3222b77b5b95d962b79b09818ebaf5050b6ddcf: CDI devices from CRI Config.CDIDevices: []" Oct 29 05:34:26.536425 containerd[1601]: time="2025-10-29T05:34:26.536361545Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:26.607769 containerd[1601]: time="2025-10-29T05:34:26.607370884Z" level=info msg="CreateContainer within sandbox \"fc0218478a00c168680e8c67db0e87cbc3c4ad33cd8d90cfa70b011a94eb0986\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e4600dcd41bbd78ed4254c6f3222b77b5b95d962b79b09818ebaf5050b6ddcf\"" Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:25.724 [INFO][4546] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:25.737 [INFO][4546] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qkktn-eth0 csi-node-driver- calico-system 11b4791e-97d9-4b28-b964-d007606a7e18 714 0 2025-10-29 05:34:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qkktn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3661f6c84af [] [] }} ContainerID="ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" Namespace="calico-system" Pod="csi-node-driver-qkktn" WorkloadEndpoint="localhost-k8s-csi--node--driver--qkktn-" Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:25.738 [INFO][4546] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" Namespace="calico-system" Pod="csi-node-driver-qkktn" WorkloadEndpoint="localhost-k8s-csi--node--driver--qkktn-eth0" Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:25.792 [INFO][4597] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" HandleID="k8s-pod-network.ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" Workload="localhost-k8s-csi--node--driver--qkktn-eth0" Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:25.792 [INFO][4597] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" HandleID="k8s-pod-network.ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" Workload="localhost-k8s-csi--node--driver--qkktn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135b30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qkktn", "timestamp":"2025-10-29 05:34:25.79284265 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:25.793 [INFO][4597] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:25.931 [INFO][4597] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:25.931 [INFO][4597] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:25.995 [INFO][4597] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" host="localhost" Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:26.094 [INFO][4597] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:26.168 [INFO][4597] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:26.241 [INFO][4597] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:26.244 [INFO][4597] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:26.244 [INFO][4597] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" host="localhost" Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:26.246 [INFO][4597] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6 Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:26.337 [INFO][4597] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" host="localhost" Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:26.411 [INFO][4597] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" host="localhost" Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:26.411 [INFO][4597] ipam/ipam.go 878: Auto-assigned 1 out of 1 
IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" host="localhost" Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:26.411 [INFO][4597] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 05:34:26.616227 containerd[1601]: 2025-10-29 05:34:26.411 [INFO][4597] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" HandleID="k8s-pod-network.ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" Workload="localhost-k8s-csi--node--driver--qkktn-eth0" Oct 29 05:34:26.617005 containerd[1601]: 2025-10-29 05:34:26.416 [INFO][4546] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" Namespace="calico-system" Pod="csi-node-driver-qkktn" WorkloadEndpoint="localhost-k8s-csi--node--driver--qkktn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qkktn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"11b4791e-97d9-4b28-b964-d007606a7e18", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 5, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qkktn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3661f6c84af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 05:34:26.617005 containerd[1601]: 2025-10-29 05:34:26.416 [INFO][4546] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" Namespace="calico-system" Pod="csi-node-driver-qkktn" WorkloadEndpoint="localhost-k8s-csi--node--driver--qkktn-eth0" Oct 29 05:34:26.617005 containerd[1601]: 2025-10-29 05:34:26.417 [INFO][4546] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3661f6c84af ContainerID="ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" Namespace="calico-system" Pod="csi-node-driver-qkktn" WorkloadEndpoint="localhost-k8s-csi--node--driver--qkktn-eth0" Oct 29 05:34:26.617005 containerd[1601]: 2025-10-29 05:34:26.423 [INFO][4546] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" Namespace="calico-system" Pod="csi-node-driver-qkktn" WorkloadEndpoint="localhost-k8s-csi--node--driver--qkktn-eth0" Oct 29 05:34:26.617005 containerd[1601]: 2025-10-29 
05:34:26.425 [INFO][4546] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" Namespace="calico-system" Pod="csi-node-driver-qkktn" WorkloadEndpoint="localhost-k8s-csi--node--driver--qkktn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qkktn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"11b4791e-97d9-4b28-b964-d007606a7e18", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 5, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6", Pod:"csi-node-driver-qkktn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3661f6c84af", MAC:"ce:f1:c5:d1:ed:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 05:34:26.617005 containerd[1601]: 2025-10-29 05:34:26.610 [INFO][4546] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" Namespace="calico-system" Pod="csi-node-driver-qkktn" WorkloadEndpoint="localhost-k8s-csi--node--driver--qkktn-eth0" Oct 29 05:34:26.617005 containerd[1601]: time="2025-10-29T05:34:26.616642216Z" level=info msg="StartContainer for \"7e4600dcd41bbd78ed4254c6f3222b77b5b95d962b79b09818ebaf5050b6ddcf\"" Oct 29 05:34:26.618707 containerd[1601]: time="2025-10-29T05:34:26.618618926Z" level=info msg="connecting to shim 7e4600dcd41bbd78ed4254c6f3222b77b5b95d962b79b09818ebaf5050b6ddcf" address="unix:///run/containerd/s/45974c191a0fca3125c0a52754b991052323c48eee5a9649f1059c797670cd7f" protocol=ttrpc version=3 Oct 29 05:34:26.619214 containerd[1601]: time="2025-10-29T05:34:26.618912568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 05:34:26.619319 containerd[1601]: time="2025-10-29T05:34:26.619286399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 05:34:26.619765 kubelet[2777]: E1029 05:34:26.619724 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 05:34:26.619865 kubelet[2777]: E1029 05:34:26.619774 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 05:34:26.619865 kubelet[2777]: E1029 05:34:26.619855 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pkk9m_calico-system(de492ebe-e388-430f-a865-ba2ce27c1431): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:26.620153 kubelet[2777]: E1029 05:34:26.619891 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pkk9m" podUID="de492ebe-e388-430f-a865-ba2ce27c1431" Oct 29 05:34:26.654116 systemd[1]: Started cri-containerd-7e4600dcd41bbd78ed4254c6f3222b77b5b95d962b79b09818ebaf5050b6ddcf.scope - libcontainer container 7e4600dcd41bbd78ed4254c6f3222b77b5b95d962b79b09818ebaf5050b6ddcf. Oct 29 05:34:26.678008 containerd[1601]: time="2025-10-29T05:34:26.677891818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799b5c4b47-5d9gp,Uid:dda5bf98-d31b-4c3d-8024-54d20d0506a7,Namespace:calico-apiserver,Attempt:0,}" Oct 29 05:34:26.680006 containerd[1601]: time="2025-10-29T05:34:26.679897041Z" level=info msg="connecting to shim ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6" address="unix:///run/containerd/s/de6613b2aba0f5013459b376f3f0c6678a4d9e62e2079b9a9e468dd80ee99d41" namespace=k8s.io protocol=ttrpc version=3 Oct 29 05:34:26.750109 containerd[1601]: time="2025-10-29T05:34:26.750018304Z" level=info msg="StartContainer for \"7e4600dcd41bbd78ed4254c6f3222b77b5b95d962b79b09818ebaf5050b6ddcf\" returns successfully" Oct 29 05:34:26.773320 systemd[1]: Started cri-containerd-ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6.scope - libcontainer container ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6. 
Oct 29 05:34:26.812145 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 05:34:26.855371 kubelet[2777]: E1029 05:34:26.855179 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:26.859525 containerd[1601]: time="2025-10-29T05:34:26.859375650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qkktn,Uid:11b4791e-97d9-4b28-b964-d007606a7e18,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac640a1f91462e90a53bbf34eb5454dce59f48b686e35fb4565c7b8c6f5415f6\"" Oct 29 05:34:26.865584 kubelet[2777]: E1029 05:34:26.865542 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:26.868294 kubelet[2777]: E1029 05:34:26.868213 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68fbd9f956-5l7nj" podUID="ca0b83d1-3c73-4368-b48e-26b292faf856" Oct 29 05:34:26.868635 kubelet[2777]: E1029 05:34:26.868605 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pkk9m" podUID="de492ebe-e388-430f-a865-ba2ce27c1431" Oct 29 05:34:26.868694 kubelet[2777]: E1029 05:34:26.868634 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b5c4b47-vw8qq" podUID="5700d2a9-15b1-43c2-8972-37e1ebd6aa09" Oct 29 05:34:26.868847 containerd[1601]: time="2025-10-29T05:34:26.868801351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 05:34:26.878345 kubelet[2777]: I1029 05:34:26.877855 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-srmc7" podStartSLOduration=38.877838093 podStartE2EDuration="38.877838093s" podCreationTimestamp="2025-10-29 05:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 05:34:26.876166936 +0000 UTC m=+43.300287805" watchObservedRunningTime="2025-10-29 
05:34:26.877838093 +0000 UTC m=+43.301958952" Oct 29 05:34:26.920524 systemd-networkd[1508]: cali3175cecf1d7: Gained IPv6LL Oct 29 05:34:26.940420 systemd-networkd[1508]: cali3c0c985e0a9: Link UP Oct 29 05:34:26.941235 systemd-networkd[1508]: cali3c0c985e0a9: Gained carrier Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.782 [INFO][4798] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--799b5c4b47--5d9gp-eth0 calico-apiserver-799b5c4b47- calico-apiserver dda5bf98-d31b-4c3d-8024-54d20d0506a7 825 0 2025-10-29 05:33:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:799b5c4b47 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-799b5c4b47-5d9gp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3c0c985e0a9 [] [] }} ContainerID="61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" Namespace="calico-apiserver" Pod="calico-apiserver-799b5c4b47-5d9gp" WorkloadEndpoint="localhost-k8s-calico--apiserver--799b5c4b47--5d9gp-" Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.783 [INFO][4798] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" Namespace="calico-apiserver" Pod="calico-apiserver-799b5c4b47-5d9gp" WorkloadEndpoint="localhost-k8s-calico--apiserver--799b5c4b47--5d9gp-eth0" Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.830 [INFO][4874] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" HandleID="k8s-pod-network.61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" Workload="localhost-k8s-calico--apiserver--799b5c4b47--5d9gp-eth0" Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.831 [INFO][4874] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" HandleID="k8s-pod-network.61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" Workload="localhost-k8s-calico--apiserver--799b5c4b47--5d9gp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000502e90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-799b5c4b47-5d9gp", "timestamp":"2025-10-29 05:34:26.830822578 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.831 [INFO][4874] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.831 [INFO][4874] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.832 [INFO][4874] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.840 [INFO][4874] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" host="localhost" Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.851 [INFO][4874] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.856 [INFO][4874] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.859 [INFO][4874] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.863 [INFO][4874] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.863 [INFO][4874] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" host="localhost" Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.865 [INFO][4874] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579 Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.877 [INFO][4874] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" host="localhost" Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.913 [INFO][4874] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" host="localhost" Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.913 [INFO][4874] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" host="localhost" Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.913 [INFO][4874] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 05:34:26.973155 containerd[1601]: 2025-10-29 05:34:26.913 [INFO][4874] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" HandleID="k8s-pod-network.61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" Workload="localhost-k8s-calico--apiserver--799b5c4b47--5d9gp-eth0" Oct 29 05:34:26.973822 containerd[1601]: 2025-10-29 05:34:26.926 [INFO][4798] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" Namespace="calico-apiserver" Pod="calico-apiserver-799b5c4b47-5d9gp" WorkloadEndpoint="localhost-k8s-calico--apiserver--799b5c4b47--5d9gp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--799b5c4b47--5d9gp-eth0", GenerateName:"calico-apiserver-799b5c4b47-", Namespace:"calico-apiserver", SelfLink:"", UID:"dda5bf98-d31b-4c3d-8024-54d20d0506a7", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 5, 33, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799b5c4b47", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-799b5c4b47-5d9gp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c0c985e0a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 05:34:26.973822 containerd[1601]: 2025-10-29 05:34:26.926 [INFO][4798] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" Namespace="calico-apiserver" Pod="calico-apiserver-799b5c4b47-5d9gp" WorkloadEndpoint="localhost-k8s-calico--apiserver--799b5c4b47--5d9gp-eth0" Oct 29 05:34:26.973822 containerd[1601]: 2025-10-29 05:34:26.926 [INFO][4798] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c0c985e0a9 ContainerID="61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" Namespace="calico-apiserver" Pod="calico-apiserver-799b5c4b47-5d9gp" WorkloadEndpoint="localhost-k8s-calico--apiserver--799b5c4b47--5d9gp-eth0" Oct 29 05:34:26.973822 containerd[1601]: 2025-10-29 05:34:26.946 [INFO][4798] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" Namespace="calico-apiserver" Pod="calico-apiserver-799b5c4b47-5d9gp" WorkloadEndpoint="localhost-k8s-calico--apiserver--799b5c4b47--5d9gp-eth0" Oct 29 05:34:26.973822 containerd[1601]: 2025-10-29 05:34:26.949 [INFO][4798] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" Namespace="calico-apiserver" Pod="calico-apiserver-799b5c4b47-5d9gp" WorkloadEndpoint="localhost-k8s-calico--apiserver--799b5c4b47--5d9gp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--799b5c4b47--5d9gp-eth0", GenerateName:"calico-apiserver-799b5c4b47-", Namespace:"calico-apiserver", SelfLink:"", UID:"dda5bf98-d31b-4c3d-8024-54d20d0506a7", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 5, 33, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799b5c4b47", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579", Pod:"calico-apiserver-799b5c4b47-5d9gp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c0c985e0a9", MAC:"fa:97:b6:cd:c0:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 05:34:26.973822 containerd[1601]: 2025-10-29 05:34:26.968 [INFO][4798] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" Namespace="calico-apiserver" Pod="calico-apiserver-799b5c4b47-5d9gp" WorkloadEndpoint="localhost-k8s-calico--apiserver--799b5c4b47--5d9gp-eth0" Oct 29 05:34:27.020335 systemd-networkd[1508]: vxlan.calico: Link UP Oct 29 05:34:27.020397 systemd-networkd[1508]: vxlan.calico: Gained carrier Oct 29 05:34:27.049917 systemd-networkd[1508]: cali28a6beb577e: Gained IPv6LL Oct 29 05:34:27.055105 containerd[1601]: time="2025-10-29T05:34:27.055002052Z" level=info msg="connecting to shim 61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579" address="unix:///run/containerd/s/f8644bc7b8b71ce0f17bb539c91bb6b5e8eb1bfc6102198b7880733cb3a5d8fc" namespace=k8s.io protocol=ttrpc version=3 Oct 29 05:34:27.100317 systemd[1]: Started cri-containerd-61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579.scope - libcontainer container 61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579. 
Oct 29 05:34:27.126639 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 05:34:27.177450 containerd[1601]: time="2025-10-29T05:34:27.177390505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799b5c4b47-5d9gp,Uid:dda5bf98-d31b-4c3d-8024-54d20d0506a7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"61b4b2c628aacea61de62cee6b98d4027ce47b53a6c5a9d5f911f0c2b7b1e579\"" Oct 29 05:34:27.256452 containerd[1601]: time="2025-10-29T05:34:27.256285016Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:27.257654 containerd[1601]: time="2025-10-29T05:34:27.257615784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 05:34:27.257730 containerd[1601]: time="2025-10-29T05:34:27.257706804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 05:34:27.258006 kubelet[2777]: E1029 05:34:27.257932 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 05:34:27.258006 kubelet[2777]: E1029 05:34:27.258008 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 05:34:27.259154 kubelet[2777]: E1029 05:34:27.258215 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-qkktn_calico-system(11b4791e-97d9-4b28-b964-d007606a7e18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:27.259301 containerd[1601]: time="2025-10-29T05:34:27.258830534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 05:34:27.621055 containerd[1601]: time="2025-10-29T05:34:27.620990677Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:27.729476 containerd[1601]: time="2025-10-29T05:34:27.729393801Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 05:34:27.729476 containerd[1601]: time="2025-10-29T05:34:27.729444306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 05:34:27.729974 kubelet[2777]: E1029 05:34:27.729693 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 05:34:27.729974 kubelet[2777]: E1029 05:34:27.729745 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 05:34:27.730134 kubelet[2777]: E1029 05:34:27.729977 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-799b5c4b47-5d9gp_calico-apiserver(dda5bf98-d31b-4c3d-8024-54d20d0506a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:27.730134 kubelet[2777]: E1029 05:34:27.730032 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b5c4b47-5d9gp" podUID="dda5bf98-d31b-4c3d-8024-54d20d0506a7" Oct 29 05:34:27.730226 containerd[1601]: time="2025-10-29T05:34:27.730193242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 05:34:27.816247 systemd-networkd[1508]: cali3661f6c84af: Gained IPv6LL Oct 29 05:34:27.855023 systemd[1]: Started sshd@10-10.0.0.106:22-10.0.0.1:46842.service - OpenSSH per-connection server daemon (10.0.0.1:46842). 
Oct 29 05:34:27.870852 kubelet[2777]: E1029 05:34:27.870788 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b5c4b47-5d9gp" podUID="dda5bf98-d31b-4c3d-8024-54d20d0506a7" Oct 29 05:34:27.873710 kubelet[2777]: E1029 05:34:27.873317 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:27.873960 kubelet[2777]: E1029 05:34:27.873902 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pkk9m" podUID="de492ebe-e388-430f-a865-ba2ce27c1431" Oct 29 05:34:27.880659 systemd-networkd[1508]: cali148a9905162: Gained IPv6LL Oct 29 05:34:27.932743 sshd[5033]: Accepted publickey for core from 10.0.0.1 port 46842 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:34:27.934801 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:34:27.940096 systemd-logind[1587]: New session 11 of user core. Oct 29 05:34:27.951279 systemd[1]: Started session-11.scope - Session 11 of User core. 
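The kubelet "Nameserver limits exceeded" warnings in this stretch of the log mean the node's /etc/resolv.conf lists more than three nameservers and kubelet keeps only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8), since glibc resolvers ignore anything past the third entry. A minimal sketch of that truncation follows; it is not kubelet source code, and the fourth nameserver in the example is invented purely for illustration:

    # Sketch of the three-nameserver cap behind "Nameserver limits exceeded";
    # MAX_NAMESERVERS mirrors the glibc/kubelet limit, the rest is illustrative.
    MAX_NAMESERVERS = 3

    def applied_nameservers(resolv_conf_text: str) -> list[str]:
        servers = []
        for line in resolv_conf_text.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        if len(servers) > MAX_NAMESERVERS:
            print(f"omitting {len(servers) - MAX_NAMESERVERS} nameserver(s)")
        return servers[:MAX_NAMESERVERS]

    # A resolv.conf with four entries reproduces the applied line seen above
    # (the 8.8.4.4 entry is a hypothetical fourth server, not from the log).
    example = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
    print(applied_nameservers(example))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']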
Oct 29 05:34:28.054964 containerd[1601]: time="2025-10-29T05:34:28.054904886Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:28.056146 containerd[1601]: time="2025-10-29T05:34:28.056101182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 05:34:28.056207 containerd[1601]: time="2025-10-29T05:34:28.056188716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 05:34:28.056447 kubelet[2777]: E1029 05:34:28.056406 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 05:34:28.056524 kubelet[2777]: E1029 05:34:28.056460 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 05:34:28.056670 kubelet[2777]: E1029 05:34:28.056599 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-qkktn_calico-system(11b4791e-97d9-4b28-b964-d007606a7e18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:28.056670 kubelet[2777]: E1029 05:34:28.056654 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qkktn" podUID="11b4791e-97d9-4b28-b964-d007606a7e18" Oct 29 05:34:28.065478 sshd[5036]: Connection closed by 10.0.0.1 port 46842 Oct 29 05:34:28.066089 sshd-session[5033]: pam_unix(sshd:session): session closed for user core Oct 29 05:34:28.071178 systemd[1]: sshd@10-10.0.0.106:22-10.0.0.1:46842.service: Deactivated successfully. Oct 29 05:34:28.073435 systemd[1]: session-11.scope: Deactivated successfully. 
Oct 29 05:34:28.074578 systemd-logind[1587]: Session 11 logged out. Waiting for processes to exit. Oct 29 05:34:28.076371 systemd-logind[1587]: Removed session 11. Oct 29 05:34:28.456325 systemd-networkd[1508]: cali3c0c985e0a9: Gained IPv6LL Oct 29 05:34:28.776283 systemd-networkd[1508]: vxlan.calico: Gained IPv6LL Oct 29 05:34:28.875225 kubelet[2777]: E1029 05:34:28.875172 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b5c4b47-5d9gp" podUID="dda5bf98-d31b-4c3d-8024-54d20d0506a7" Oct 29 05:34:28.876032 kubelet[2777]: E1029 05:34:28.875930 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qkktn" podUID="11b4791e-97d9-4b28-b964-d007606a7e18" Oct 29 05:34:33.076231 systemd[1]: Started sshd@11-10.0.0.106:22-10.0.0.1:46848.service - OpenSSH per-connection server daemon (10.0.0.1:46848). Oct 29 05:34:33.134010 sshd[5063]: Accepted publickey for core from 10.0.0.1 port 46848 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:34:33.135982 sshd-session[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:34:33.141534 systemd-logind[1587]: New session 12 of user core. Oct 29 05:34:33.150244 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 29 05:34:33.236267 sshd[5066]: Connection closed by 10.0.0.1 port 46848 Oct 29 05:34:33.236707 sshd-session[5063]: pam_unix(sshd:session): session closed for user core Oct 29 05:34:33.241520 systemd[1]: sshd@11-10.0.0.106:22-10.0.0.1:46848.service: Deactivated successfully. Oct 29 05:34:33.244269 systemd[1]: session-12.scope: Deactivated successfully. Oct 29 05:34:33.245969 systemd-logind[1587]: Session 12 logged out. Waiting for processes to exit. Oct 29 05:34:33.247803 systemd-logind[1587]: Removed session 12. 
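Every image pull in this log fails the same way: containerd reports "fetch failed after status: 404 Not Found" from ghcr.io, which means the tag itself is absent from the registry rather than the node being unauthorized or rate-limited. One way to confirm that independently of kubelet is to ask the registry for the manifest over the standard OCI distribution API. A minimal sketch, assuming ghcr.io issues anonymous pull tokens for the repository; the image name and tag are copied from the log, everything else is illustrative:

    import json
    import urllib.error
    import urllib.request

    def manifest_exists(repo: str, tag: str) -> bool:
        # Anonymous pull token for a public GHCR repository
        # (assumption: the repository allows anonymous pulls).
        token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        # HEAD the manifest; a 404 here matches the containerd errors above.
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
            method="HEAD",
        )
        try:
            with urllib.request.urlopen(req):
                return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    print(manifest_exists("flatcar/calico/csi", "v3.30.4"))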
Oct 29 05:34:35.675569 containerd[1601]: time="2025-10-29T05:34:35.675262571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 05:34:35.999494 containerd[1601]: time="2025-10-29T05:34:35.999322814Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:36.000590 containerd[1601]: time="2025-10-29T05:34:36.000525420Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 05:34:36.000750 containerd[1601]: time="2025-10-29T05:34:36.000597386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 05:34:36.000851 kubelet[2777]: E1029 05:34:36.000801 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 05:34:36.001216 kubelet[2777]: E1029 05:34:36.000862 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 05:34:36.001216 kubelet[2777]: E1029 05:34:36.000950 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-77c85fccc-5vb2w_calico-system(970fb908-fc28-49f1-87f4-48f55e612234): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:36.002880 containerd[1601]: time="2025-10-29T05:34:36.002597037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 05:34:36.338377 containerd[1601]: time="2025-10-29T05:34:36.338292163Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:36.339580 containerd[1601]: time="2025-10-29T05:34:36.339533281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 05:34:36.339707 containerd[1601]: time="2025-10-29T05:34:36.339635483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 05:34:36.339904 kubelet[2777]: E1029 05:34:36.339774 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 
05:34:36.339904 kubelet[2777]: E1029 05:34:36.339835 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 05:34:36.339980 kubelet[2777]: E1029 05:34:36.339927 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-77c85fccc-5vb2w_calico-system(970fb908-fc28-49f1-87f4-48f55e612234): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:36.340025 kubelet[2777]: E1029 05:34:36.339971 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c85fccc-5vb2w" podUID="970fb908-fc28-49f1-87f4-48f55e612234" Oct 29 05:34:38.253399 systemd[1]: Started sshd@12-10.0.0.106:22-10.0.0.1:51630.service - OpenSSH per-connection server daemon (10.0.0.1:51630). Oct 29 05:34:38.306869 sshd[5090]: Accepted publickey for core from 10.0.0.1 port 51630 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:34:38.308389 sshd-session[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:34:38.312880 systemd-logind[1587]: New session 13 of user core. Oct 29 05:34:38.326245 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 29 05:34:38.412890 sshd[5093]: Connection closed by 10.0.0.1 port 51630 Oct 29 05:34:38.414267 sshd-session[5090]: pam_unix(sshd:session): session closed for user core Oct 29 05:34:38.422526 systemd[1]: sshd@12-10.0.0.106:22-10.0.0.1:51630.service: Deactivated successfully. Oct 29 05:34:38.424771 systemd[1]: session-13.scope: Deactivated successfully. Oct 29 05:34:38.425744 systemd-logind[1587]: Session 13 logged out. Waiting for processes to exit. Oct 29 05:34:38.429855 systemd[1]: Started sshd@13-10.0.0.106:22-10.0.0.1:51642.service - OpenSSH per-connection server daemon (10.0.0.1:51642). Oct 29 05:34:38.430572 systemd-logind[1587]: Removed session 13. Oct 29 05:34:38.485131 sshd[5107]: Accepted publickey for core from 10.0.0.1 port 51642 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:34:38.486817 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:34:38.491310 systemd-logind[1587]: New session 14 of user core. Oct 29 05:34:38.502462 systemd[1]: Started session-14.scope - Session 14 of User core. 
Oct 29 05:34:38.736586 sshd[5110]: Connection closed by 10.0.0.1 port 51642 Oct 29 05:34:38.737912 sshd-session[5107]: pam_unix(sshd:session): session closed for user core Oct 29 05:34:38.752330 systemd[1]: sshd@13-10.0.0.106:22-10.0.0.1:51642.service: Deactivated successfully. Oct 29 05:34:38.754869 systemd[1]: session-14.scope: Deactivated successfully. Oct 29 05:34:38.755856 systemd-logind[1587]: Session 14 logged out. Waiting for processes to exit. Oct 29 05:34:38.759910 systemd[1]: Started sshd@14-10.0.0.106:22-10.0.0.1:51644.service - OpenSSH per-connection server daemon (10.0.0.1:51644). Oct 29 05:34:38.763220 systemd-logind[1587]: Removed session 14. Oct 29 05:34:38.818118 sshd[5122]: Accepted publickey for core from 10.0.0.1 port 51644 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:34:38.819964 sshd-session[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:34:38.825062 systemd-logind[1587]: New session 15 of user core. Oct 29 05:34:38.833249 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 29 05:34:38.971249 sshd[5125]: Connection closed by 10.0.0.1 port 51644 Oct 29 05:34:38.971582 sshd-session[5122]: pam_unix(sshd:session): session closed for user core Oct 29 05:34:38.975335 systemd[1]: sshd@14-10.0.0.106:22-10.0.0.1:51644.service: Deactivated successfully. Oct 29 05:34:38.977708 systemd[1]: session-15.scope: Deactivated successfully. Oct 29 05:34:38.979357 systemd-logind[1587]: Session 15 logged out. Waiting for processes to exit. Oct 29 05:34:38.980642 systemd-logind[1587]: Removed session 15. Oct 29 05:34:39.674772 containerd[1601]: time="2025-10-29T05:34:39.674697607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 05:34:40.019489 containerd[1601]: time="2025-10-29T05:34:40.019292209Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:40.020514 containerd[1601]: time="2025-10-29T05:34:40.020475331Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 05:34:40.020594 containerd[1601]: time="2025-10-29T05:34:40.020516931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 05:34:40.020782 kubelet[2777]: E1029 05:34:40.020729 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 05:34:40.021259 kubelet[2777]: E1029 05:34:40.020787 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 05:34:40.021259 kubelet[2777]: E1029 05:34:40.021055 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-qkktn_calico-system(11b4791e-97d9-4b28-b964-d007606a7e18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:40.021681 containerd[1601]: time="2025-10-29T05:34:40.021481780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 05:34:40.381427 containerd[1601]: time="2025-10-29T05:34:40.381356047Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:40.382539 containerd[1601]: time="2025-10-29T05:34:40.382508219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 05:34:40.382660 containerd[1601]: time="2025-10-29T05:34:40.382577893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 05:34:40.382829 kubelet[2777]: E1029 05:34:40.382774 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 05:34:40.382889 kubelet[2777]: E1029 05:34:40.382845 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 05:34:40.383237 kubelet[2777]: E1029 05:34:40.383130 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-68fbd9f956-5l7nj_calico-system(ca0b83d1-3c73-4368-b48e-26b292faf856): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:40.383237 kubelet[2777]: E1029 05:34:40.383199 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68fbd9f956-5l7nj" podUID="ca0b83d1-3c73-4368-b48e-26b292faf856" Oct 29 05:34:40.383529 containerd[1601]: time="2025-10-29T05:34:40.383159274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 05:34:40.692948 containerd[1601]: time="2025-10-29T05:34:40.692804711Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:40.694092 containerd[1601]: time="2025-10-29T05:34:40.694037849Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 05:34:40.694146 containerd[1601]: time="2025-10-29T05:34:40.694090300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 05:34:40.694325 kubelet[2777]: E1029 05:34:40.694277 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 05:34:40.694396 kubelet[2777]: E1029 05:34:40.694338 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 05:34:40.694446 kubelet[2777]: E1029 05:34:40.694427 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-qkktn_calico-system(11b4791e-97d9-4b28-b964-d007606a7e18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:40.694514 kubelet[2777]: E1029 05:34:40.694469 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qkktn" podUID="11b4791e-97d9-4b28-b964-d007606a7e18" Oct 29 05:34:41.675256 containerd[1601]: time="2025-10-29T05:34:41.674910896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 05:34:42.014020 containerd[1601]: time="2025-10-29T05:34:42.013870904Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:42.015002 containerd[1601]: time="2025-10-29T05:34:42.014952006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 05:34:42.015093 containerd[1601]: time="2025-10-29T05:34:42.015023995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 05:34:42.015313 kubelet[2777]: E1029 05:34:42.015261 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 05:34:42.015638 kubelet[2777]: E1029 05:34:42.015320 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 05:34:42.015638 kubelet[2777]: E1029 05:34:42.015420 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pkk9m_calico-system(de492ebe-e388-430f-a865-ba2ce27c1431): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:42.015638 kubelet[2777]: E1029 05:34:42.015463 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pkk9m" podUID="de492ebe-e388-430f-a865-ba2ce27c1431" Oct 29 05:34:42.674661 containerd[1601]: time="2025-10-29T05:34:42.674609303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 05:34:43.030148 containerd[1601]: time="2025-10-29T05:34:43.029942252Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:43.031277 containerd[1601]: time="2025-10-29T05:34:43.031216303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 05:34:43.031335 containerd[1601]: time="2025-10-29T05:34:43.031296888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 05:34:43.031495 kubelet[2777]: E1029 05:34:43.031452 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 05:34:43.031761 kubelet[2777]: E1029 05:34:43.031494 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 05:34:43.031761 kubelet[2777]: E1029 05:34:43.031570 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-799b5c4b47-vw8qq_calico-apiserver(5700d2a9-15b1-43c2-8972-37e1ebd6aa09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:43.031761 kubelet[2777]: E1029 05:34:43.031604 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b5c4b47-vw8qq" podUID="5700d2a9-15b1-43c2-8972-37e1ebd6aa09" Oct 29 05:34:43.985200 systemd[1]: Started sshd@15-10.0.0.106:22-10.0.0.1:51658.service - OpenSSH per-connection server daemon (10.0.0.1:51658). Oct 29 05:34:44.044581 sshd[5145]: Accepted publickey for core from 10.0.0.1 port 51658 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:34:44.046003 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:34:44.050465 systemd-logind[1587]: New session 16 of user core. Oct 29 05:34:44.060231 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 29 05:34:44.138226 sshd[5148]: Connection closed by 10.0.0.1 port 51658 Oct 29 05:34:44.138672 sshd-session[5145]: pam_unix(sshd:session): session closed for user core Oct 29 05:34:44.143884 systemd[1]: sshd@15-10.0.0.106:22-10.0.0.1:51658.service: Deactivated successfully. Oct 29 05:34:44.146196 systemd[1]: session-16.scope: Deactivated successfully. Oct 29 05:34:44.146945 systemd-logind[1587]: Session 16 logged out. Waiting for processes to exit. Oct 29 05:34:44.148650 systemd-logind[1587]: Removed session 16. 
Oct 29 05:34:44.674924 containerd[1601]: time="2025-10-29T05:34:44.674846601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 05:34:45.013746 containerd[1601]: time="2025-10-29T05:34:45.013581653Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:45.015067 containerd[1601]: time="2025-10-29T05:34:45.014998044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 05:34:45.015154 containerd[1601]: time="2025-10-29T05:34:45.015053150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 05:34:45.015398 kubelet[2777]: E1029 05:34:45.015340 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 05:34:45.015398 kubelet[2777]: E1029 05:34:45.015397 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 05:34:45.015894 kubelet[2777]: E1029 05:34:45.015488 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-799b5c4b47-5d9gp_calico-apiserver(dda5bf98-d31b-4c3d-8024-54d20d0506a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:45.015894 kubelet[2777]: E1029 05:34:45.015527 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b5c4b47-5d9gp" podUID="dda5bf98-d31b-4c3d-8024-54d20d0506a7" Oct 29 05:34:46.675047 kubelet[2777]: E1029 05:34:46.674981 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c85fccc-5vb2w" podUID="970fb908-fc28-49f1-87f4-48f55e612234" Oct 29 05:34:49.152988 systemd[1]: Started sshd@16-10.0.0.106:22-10.0.0.1:44776.service - OpenSSH per-connection server daemon (10.0.0.1:44776). Oct 29 05:34:49.223635 sshd[5169]: Accepted publickey for core from 10.0.0.1 port 44776 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:34:49.225378 sshd-session[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:34:49.230793 systemd-logind[1587]: New session 17 of user core. Oct 29 05:34:49.240281 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 29 05:34:49.322332 sshd[5172]: Connection closed by 10.0.0.1 port 44776 Oct 29 05:34:49.322776 sshd-session[5169]: pam_unix(sshd:session): session closed for user core Oct 29 05:34:49.339223 systemd[1]: sshd@16-10.0.0.106:22-10.0.0.1:44776.service: Deactivated successfully. Oct 29 05:34:49.341974 systemd[1]: session-17.scope: Deactivated successfully. Oct 29 05:34:49.343118 systemd-logind[1587]: Session 17 logged out. Waiting for processes to exit. Oct 29 05:34:49.347145 systemd[1]: Started sshd@17-10.0.0.106:22-10.0.0.1:44784.service - OpenSSH per-connection server daemon (10.0.0.1:44784). Oct 29 05:34:49.347822 systemd-logind[1587]: Removed session 17. Oct 29 05:34:49.420180 sshd[5185]: Accepted publickey for core from 10.0.0.1 port 44784 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:34:49.421976 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:34:49.427312 systemd-logind[1587]: New session 18 of user core. Oct 29 05:34:49.434279 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 29 05:34:49.622032 sshd[5188]: Connection closed by 10.0.0.1 port 44784 Oct 29 05:34:49.622525 sshd-session[5185]: pam_unix(sshd:session): session closed for user core Oct 29 05:34:49.638106 systemd[1]: sshd@17-10.0.0.106:22-10.0.0.1:44784.service: Deactivated successfully. Oct 29 05:34:49.640342 systemd[1]: session-18.scope: Deactivated successfully. Oct 29 05:34:49.641489 systemd-logind[1587]: Session 18 logged out. Waiting for processes to exit. Oct 29 05:34:49.645022 systemd[1]: Started sshd@18-10.0.0.106:22-10.0.0.1:44800.service - OpenSSH per-connection server daemon (10.0.0.1:44800). Oct 29 05:34:49.646292 systemd-logind[1587]: Removed session 18. Oct 29 05:34:49.698772 sshd[5203]: Accepted publickey for core from 10.0.0.1 port 44800 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:34:49.700597 sshd-session[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:34:49.705324 systemd-logind[1587]: New session 19 of user core. Oct 29 05:34:49.713215 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 29 05:34:49.901198 containerd[1601]: time="2025-10-29T05:34:49.901140000Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9b4b12106685aff306622811e13227e95b402cf220dd13da97b40ae8449383c\" id:\"6e7d255c0c7c92c13388f23aac8ecbfb42cb0ad0e13bcc58db9e2f068b616c0f\" pid:5224 exited_at:{seconds:1761716089 nanos:899906005}" Oct 29 05:34:49.903432 kubelet[2777]: E1029 05:34:49.903378 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:50.287484 sshd[5206]: Connection closed by 10.0.0.1 port 44800 Oct 29 05:34:50.287856 sshd-session[5203]: pam_unix(sshd:session): session closed for user core Oct 29 05:34:50.300521 systemd[1]: sshd@18-10.0.0.106:22-10.0.0.1:44800.service: Deactivated successfully. Oct 29 05:34:50.304615 systemd[1]: session-19.scope: Deactivated successfully. Oct 29 05:34:50.306199 systemd-logind[1587]: Session 19 logged out. Waiting for processes to exit. Oct 29 05:34:50.310727 systemd[1]: Started sshd@19-10.0.0.106:22-10.0.0.1:44808.service - OpenSSH per-connection server daemon (10.0.0.1:44808). Oct 29 05:34:50.311557 systemd-logind[1587]: Removed session 19. Oct 29 05:34:50.372443 sshd[5248]: Accepted publickey for core from 10.0.0.1 port 44808 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:34:50.373918 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:34:50.378704 systemd-logind[1587]: New session 20 of user core. Oct 29 05:34:50.389202 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 29 05:34:50.574029 sshd[5251]: Connection closed by 10.0.0.1 port 44808 Oct 29 05:34:50.576301 sshd-session[5248]: pam_unix(sshd:session): session closed for user core Oct 29 05:34:50.588456 systemd[1]: sshd@19-10.0.0.106:22-10.0.0.1:44808.service: Deactivated successfully. Oct 29 05:34:50.590708 systemd[1]: session-20.scope: Deactivated successfully. Oct 29 05:34:50.591577 systemd-logind[1587]: Session 20 logged out. Waiting for processes to exit. Oct 29 05:34:50.594603 systemd[1]: Started sshd@20-10.0.0.106:22-10.0.0.1:44818.service - OpenSSH per-connection server daemon (10.0.0.1:44818). Oct 29 05:34:50.595402 systemd-logind[1587]: Removed session 20. Oct 29 05:34:50.650780 sshd[5263]: Accepted publickey for core from 10.0.0.1 port 44818 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:34:50.652276 sshd-session[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:34:50.657438 systemd-logind[1587]: New session 21 of user core. Oct 29 05:34:50.667208 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 29 05:34:50.744248 sshd[5266]: Connection closed by 10.0.0.1 port 44818 Oct 29 05:34:50.744567 sshd-session[5263]: pam_unix(sshd:session): session closed for user core Oct 29 05:34:50.748870 systemd[1]: sshd@20-10.0.0.106:22-10.0.0.1:44818.service: Deactivated successfully. Oct 29 05:34:50.751107 systemd[1]: session-21.scope: Deactivated successfully. Oct 29 05:34:50.751896 systemd-logind[1587]: Session 21 logged out. Waiting for processes to exit. Oct 29 05:34:50.753430 systemd-logind[1587]: Removed session 21. 
Oct 29 05:34:51.675180 kubelet[2777]: E1029 05:34:51.675054 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qkktn" podUID="11b4791e-97d9-4b28-b964-d007606a7e18" Oct 29 05:34:52.674652 kubelet[2777]: E1029 05:34:52.674528 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pkk9m" podUID="de492ebe-e388-430f-a865-ba2ce27c1431" Oct 29 05:34:52.674652 kubelet[2777]: E1029 05:34:52.674550 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68fbd9f956-5l7nj" podUID="ca0b83d1-3c73-4368-b48e-26b292faf856" Oct 29 05:34:55.756868 systemd[1]: Started sshd@21-10.0.0.106:22-10.0.0.1:44830.service - OpenSSH per-connection server daemon (10.0.0.1:44830). Oct 29 05:34:55.835931 sshd[5284]: Accepted publickey for core from 10.0.0.1 port 44830 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo Oct 29 05:34:55.837657 sshd-session[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 05:34:55.842146 systemd-logind[1587]: New session 22 of user core. Oct 29 05:34:55.852337 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 29 05:34:55.938542 sshd[5287]: Connection closed by 10.0.0.1 port 44830 Oct 29 05:34:55.938910 sshd-session[5284]: pam_unix(sshd:session): session closed for user core Oct 29 05:34:55.943047 systemd-logind[1587]: Session 22 logged out. Waiting for processes to exit. Oct 29 05:34:55.943404 systemd[1]: sshd@21-10.0.0.106:22-10.0.0.1:44830.service: Deactivated successfully. Oct 29 05:34:55.945323 systemd[1]: session-22.scope: Deactivated successfully. Oct 29 05:34:55.946966 systemd-logind[1587]: Removed session 22. 
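Once a pull has failed, the pod_workers entries alternate between ErrImagePull (a fresh attempt just failed) and ImagePullBackOff (kubelet is waiting before retrying), which is why the same images reappear at 05:34:39, 05:35:04 and so on. Kubelet's documented behaviour is an exponential back-off on the order of 10 s, doubling up to a 5-minute cap; the sketch below only computes that schedule under those assumed defaults and is not kubelet code:

    # Illustrative ImagePullBackOff schedule: assumed defaults of a 10 s
    # initial delay doubling to a 300 s cap (not taken from kubelet source).
    def backoff_schedule(initial: float = 10.0, cap: float = 300.0, retries: int = 8):
        delay, elapsed, schedule = initial, 0.0, []
        for _ in range(retries):
            elapsed += delay
            schedule.append((round(delay), round(elapsed)))
            delay = min(delay * 2, cap)
        return schedule

    for delay, elapsed in backoff_schedule():
        print(f"retry after {delay:>3} s (cumulative {elapsed} s)")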
Oct 29 05:34:56.673886 kubelet[2777]: E1029 05:34:56.673838 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 05:34:57.679107 kubelet[2777]: E1029 05:34:57.678964 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b5c4b47-vw8qq" podUID="5700d2a9-15b1-43c2-8972-37e1ebd6aa09" Oct 29 05:34:58.675952 containerd[1601]: time="2025-10-29T05:34:58.675475485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 05:34:59.018606 containerd[1601]: time="2025-10-29T05:34:59.018418368Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:59.019795 containerd[1601]: time="2025-10-29T05:34:59.019745798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 05:34:59.019851 containerd[1601]: time="2025-10-29T05:34:59.019804470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 05:34:59.020052 kubelet[2777]: E1029 05:34:59.020008 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 05:34:59.020453 kubelet[2777]: E1029 05:34:59.020062 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 05:34:59.020453 kubelet[2777]: E1029 05:34:59.020184 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-77c85fccc-5vb2w_calico-system(970fb908-fc28-49f1-87f4-48f55e612234): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:59.021200 containerd[1601]: time="2025-10-29T05:34:59.021151589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 05:34:59.353660 containerd[1601]: time="2025-10-29T05:34:59.353590886Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 05:34:59.354990 containerd[1601]: time="2025-10-29T05:34:59.354920972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 05:34:59.355183 containerd[1601]: time="2025-10-29T05:34:59.355010584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 05:34:59.355288 kubelet[2777]: E1029 05:34:59.355239 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 05:34:59.355353 kubelet[2777]: E1029 05:34:59.355297 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 05:34:59.355409 kubelet[2777]: E1029 05:34:59.355386 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-77c85fccc-5vb2w_calico-system(970fb908-fc28-49f1-87f4-48f55e612234): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 05:34:59.355466 kubelet[2777]: E1029 05:34:59.355431 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c85fccc-5vb2w" podUID="970fb908-fc28-49f1-87f4-48f55e612234" Oct 29 05:34:59.675647 kubelet[2777]: E1029 05:34:59.675443 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b5c4b47-5d9gp" podUID="dda5bf98-d31b-4c3d-8024-54d20d0506a7" Oct 29 05:35:00.954021 systemd[1]: Started sshd@22-10.0.0.106:22-10.0.0.1:40056.service - OpenSSH per-connection server daemon (10.0.0.1:40056). 
Oct 29 05:35:01.004302 sshd[5302]: Accepted publickey for core from 10.0.0.1 port 40056 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo
Oct 29 05:35:01.006296 sshd-session[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 05:35:01.011518 systemd-logind[1587]: New session 23 of user core.
Oct 29 05:35:01.020329 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 29 05:35:01.118015 sshd[5305]: Connection closed by 10.0.0.1 port 40056
Oct 29 05:35:01.120393 sshd-session[5302]: pam_unix(sshd:session): session closed for user core
Oct 29 05:35:01.125622 systemd[1]: sshd@22-10.0.0.106:22-10.0.0.1:40056.service: Deactivated successfully.
Oct 29 05:35:01.128418 systemd[1]: session-23.scope: Deactivated successfully.
Oct 29 05:35:01.129481 systemd-logind[1587]: Session 23 logged out. Waiting for processes to exit.
Oct 29 05:35:01.131375 systemd-logind[1587]: Removed session 23.
Oct 29 05:35:01.674128 kubelet[2777]: E1029 05:35:01.674046 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 05:35:01.674829 kubelet[2777]: E1029 05:35:01.674798 2777 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 05:35:04.674796 containerd[1601]: time="2025-10-29T05:35:04.674745277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 29 05:35:05.033431 containerd[1601]: time="2025-10-29T05:35:05.033275397Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 05:35:05.034534 containerd[1601]: time="2025-10-29T05:35:05.034463256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 29 05:35:05.034534 containerd[1601]: time="2025-10-29T05:35:05.034507761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Oct 29 05:35:05.034769 kubelet[2777]: E1029 05:35:05.034686 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 29 05:35:05.034769 kubelet[2777]: E1029 05:35:05.034746 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 29 05:35:05.035204 kubelet[2777]: E1029 05:35:05.034840 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-qkktn_calico-system(11b4791e-97d9-4b28-b964-d007606a7e18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 29 05:35:05.035901 containerd[1601]: time="2025-10-29T05:35:05.035845766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 29 05:35:05.367818 containerd[1601]: time="2025-10-29T05:35:05.367731186Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 05:35:05.369014 containerd[1601]: time="2025-10-29T05:35:05.368937240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 29 05:35:05.369216 containerd[1601]: time="2025-10-29T05:35:05.369006822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 29 05:35:05.369320 kubelet[2777]: E1029 05:35:05.369262 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 29 05:35:05.369398 kubelet[2777]: E1029 05:35:05.369327 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 29 05:35:05.369445 kubelet[2777]: E1029 05:35:05.369420 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-qkktn_calico-system(11b4791e-97d9-4b28-b964-d007606a7e18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 29 05:35:05.369519 kubelet[2777]: E1029 05:35:05.369466 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qkktn" podUID="11b4791e-97d9-4b28-b964-d007606a7e18"
Oct 29 05:35:05.677949 containerd[1601]: time="2025-10-29T05:35:05.677797919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 29 05:35:06.029494 containerd[1601]: time="2025-10-29T05:35:06.029327079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 05:35:06.030556 containerd[1601]: time="2025-10-29T05:35:06.030488527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Oct 29 05:35:06.030650 containerd[1601]: time="2025-10-29T05:35:06.030580602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Oct 29 05:35:06.030740 kubelet[2777]: E1029 05:35:06.030696 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 29 05:35:06.030843 kubelet[2777]: E1029 05:35:06.030741 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 29 05:35:06.031018 containerd[1601]: time="2025-10-29T05:35:06.030995181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Oct 29 05:35:06.031191 kubelet[2777]: E1029 05:35:06.031112 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pkk9m_calico-system(de492ebe-e388-430f-a865-ba2ce27c1431): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 29 05:35:06.031381 kubelet[2777]: E1029 05:35:06.031216 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pkk9m" podUID="de492ebe-e388-430f-a865-ba2ce27c1431"
Oct 29 05:35:06.131424 systemd[1]: Started sshd@23-10.0.0.106:22-10.0.0.1:49632.service - OpenSSH per-connection server daemon (10.0.0.1:49632).
Oct 29 05:35:06.183405 sshd[5318]: Accepted publickey for core from 10.0.0.1 port 49632 ssh2: RSA SHA256:XlI1mMWbAUEpbMdibrfNtyLuAe47fXxox5VA8A+V0wo
Oct 29 05:35:06.185472 sshd-session[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 05:35:06.190308 systemd-logind[1587]: New session 24 of user core.
Oct 29 05:35:06.198256 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 29 05:35:06.277825 sshd[5321]: Connection closed by 10.0.0.1 port 49632
Oct 29 05:35:06.278281 sshd-session[5318]: pam_unix(sshd:session): session closed for user core
Oct 29 05:35:06.284387 systemd[1]: sshd@23-10.0.0.106:22-10.0.0.1:49632.service: Deactivated successfully.
Oct 29 05:35:06.286446 systemd[1]: session-24.scope: Deactivated successfully.
Oct 29 05:35:06.287462 systemd-logind[1587]: Session 24 logged out. Waiting for processes to exit.
Oct 29 05:35:06.288863 systemd-logind[1587]: Removed session 24.
Oct 29 05:35:06.354265 containerd[1601]: time="2025-10-29T05:35:06.354219402Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 05:35:06.355449 containerd[1601]: time="2025-10-29T05:35:06.355403644Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Oct 29 05:35:06.355548 containerd[1601]: time="2025-10-29T05:35:06.355466594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Oct 29 05:35:06.355721 kubelet[2777]: E1029 05:35:06.355635 2777 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 29 05:35:06.355721 kubelet[2777]: E1029 05:35:06.355701 2777 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 29 05:35:06.356211 kubelet[2777]: E1029 05:35:06.355789 2777 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-68fbd9f956-5l7nj_calico-system(ca0b83d1-3c73-4368-b48e-26b292faf856): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Oct 29 05:35:06.356211 kubelet[2777]: E1029 05:35:06.355826 2777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68fbd9f956-5l7nj" podUID="ca0b83d1-3c73-4368-b48e-26b292faf856"