Oct 27 08:30:45.564929 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 27 06:24:35 -00 2025 Oct 27 08:30:45.564955 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e Oct 27 08:30:45.564972 kernel: BIOS-provided physical RAM map: Oct 27 08:30:45.564980 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Oct 27 08:30:45.564986 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Oct 27 08:30:45.564996 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Oct 27 08:30:45.565004 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Oct 27 08:30:45.565011 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Oct 27 08:30:45.565021 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Oct 27 08:30:45.565028 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Oct 27 08:30:45.565035 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Oct 27 08:30:45.565042 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Oct 27 08:30:45.565049 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Oct 27 08:30:45.565058 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Oct 27 08:30:45.565066 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Oct 27 08:30:45.565074 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Oct 27 08:30:45.565084 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 27 08:30:45.565093 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 27 08:30:45.565101 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 27 08:30:45.565109 kernel: NX (Execute Disable) protection: active Oct 27 08:30:45.565116 kernel: APIC: Static calls initialized Oct 27 08:30:45.565123 kernel: e820: update [mem 0x9a13d018-0x9a146c57] usable ==> usable Oct 27 08:30:45.565131 kernel: e820: update [mem 0x9a100018-0x9a13ce57] usable ==> usable Oct 27 08:30:45.565138 kernel: extended physical RAM map: Oct 27 08:30:45.565146 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Oct 27 08:30:45.565154 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Oct 27 08:30:45.565161 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Oct 27 08:30:45.565169 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Oct 27 08:30:45.565179 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a100017] usable Oct 27 08:30:45.565186 kernel: reserve setup_data: [mem 0x000000009a100018-0x000000009a13ce57] usable Oct 27 08:30:45.565194 kernel: reserve setup_data: [mem 0x000000009a13ce58-0x000000009a13d017] usable Oct 27 08:30:45.565201 kernel: reserve setup_data: [mem 0x000000009a13d018-0x000000009a146c57] usable Oct 27 08:30:45.565208 kernel: reserve setup_data: [mem 0x000000009a146c58-0x000000009b8ecfff] usable Oct 27 08:30:45.565216 kernel: reserve setup_data: [mem 
0x000000009b8ed000-0x000000009bb6cfff] reserved Oct 27 08:30:45.565223 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Oct 27 08:30:45.565231 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Oct 27 08:30:45.565238 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Oct 27 08:30:45.565246 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Oct 27 08:30:45.565255 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Oct 27 08:30:45.565263 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Oct 27 08:30:45.565274 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Oct 27 08:30:45.565281 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 27 08:30:45.565304 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 27 08:30:45.565315 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 27 08:30:45.565323 kernel: efi: EFI v2.7 by EDK II Oct 27 08:30:45.565330 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Oct 27 08:30:45.565338 kernel: random: crng init done Oct 27 08:30:45.565346 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Oct 27 08:30:45.565353 kernel: secureboot: Secure boot enabled Oct 27 08:30:45.565361 kernel: SMBIOS 2.8 present. Oct 27 08:30:45.565369 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Oct 27 08:30:45.565376 kernel: DMI: Memory slots populated: 1/1 Oct 27 08:30:45.565389 kernel: Hypervisor detected: KVM Oct 27 08:30:45.565397 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Oct 27 08:30:45.565404 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 27 08:30:45.565412 kernel: kvm-clock: using sched offset of 5120604179 cycles Oct 27 08:30:45.565420 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 27 08:30:45.565428 kernel: tsc: Detected 2794.750 MHz processor Oct 27 08:30:45.565437 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 27 08:30:45.565445 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 27 08:30:45.565453 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Oct 27 08:30:45.565465 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 27 08:30:45.565475 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 27 08:30:45.565486 kernel: Using GB pages for direct mapping Oct 27 08:30:45.565494 kernel: ACPI: Early table checksum verification disabled Oct 27 08:30:45.565502 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Oct 27 08:30:45.565510 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Oct 27 08:30:45.565518 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:30:45.565528 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:30:45.565536 kernel: ACPI: FACS 0x000000009BBDD000 000040 Oct 27 08:30:45.565544 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:30:45.565553 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:30:45.565561 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 
00000001 BXPC 00000001) Oct 27 08:30:45.565569 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:30:45.565577 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 27 08:30:45.565587 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Oct 27 08:30:45.565595 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Oct 27 08:30:45.565603 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Oct 27 08:30:45.565611 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Oct 27 08:30:45.565619 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Oct 27 08:30:45.565627 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Oct 27 08:30:45.565635 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Oct 27 08:30:45.565643 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Oct 27 08:30:45.565653 kernel: No NUMA configuration found Oct 27 08:30:45.565661 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Oct 27 08:30:45.565669 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Oct 27 08:30:45.565677 kernel: Zone ranges: Oct 27 08:30:45.565686 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 27 08:30:45.565694 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Oct 27 08:30:45.565702 kernel: Normal empty Oct 27 08:30:45.565712 kernel: Device empty Oct 27 08:30:45.565719 kernel: Movable zone start for each node Oct 27 08:30:45.565728 kernel: Early memory node ranges Oct 27 08:30:45.565736 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Oct 27 08:30:45.565746 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Oct 27 08:30:45.565754 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Oct 27 08:30:45.565764 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Oct 27 08:30:45.565772 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Oct 27 08:30:45.565782 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Oct 27 08:30:45.565791 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 27 08:30:45.565799 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Oct 27 08:30:45.565807 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 27 08:30:45.565815 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Oct 27 08:30:45.565823 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Oct 27 08:30:45.565831 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Oct 27 08:30:45.565841 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 27 08:30:45.565849 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 27 08:30:45.565857 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 27 08:30:45.565867 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 27 08:30:45.565881 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 27 08:30:45.565892 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 27 08:30:45.565903 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 27 08:30:45.565917 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 27 08:30:45.565928 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 27 08:30:45.565937 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 27 08:30:45.565945 kernel: TSC 
deadline timer available Oct 27 08:30:45.565953 kernel: CPU topo: Max. logical packages: 1 Oct 27 08:30:45.565961 kernel: CPU topo: Max. logical dies: 1 Oct 27 08:30:45.565988 kernel: CPU topo: Max. dies per package: 1 Oct 27 08:30:45.565996 kernel: CPU topo: Max. threads per core: 1 Oct 27 08:30:45.566005 kernel: CPU topo: Num. cores per package: 4 Oct 27 08:30:45.566013 kernel: CPU topo: Num. threads per package: 4 Oct 27 08:30:45.566028 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Oct 27 08:30:45.566036 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 27 08:30:45.566044 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 27 08:30:45.566053 kernel: kvm-guest: setup PV sched yield Oct 27 08:30:45.566063 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Oct 27 08:30:45.566072 kernel: Booting paravirtualized kernel on KVM Oct 27 08:30:45.566080 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 27 08:30:45.566089 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 27 08:30:45.566097 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Oct 27 08:30:45.566105 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Oct 27 08:30:45.566113 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 27 08:30:45.566124 kernel: kvm-guest: PV spinlocks enabled Oct 27 08:30:45.566132 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 27 08:30:45.566142 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e Oct 27 08:30:45.566151 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 27 08:30:45.566159 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 27 08:30:45.566167 kernel: Fallback order for Node 0: 0 Oct 27 08:30:45.566176 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Oct 27 08:30:45.566189 kernel: Policy zone: DMA32 Oct 27 08:30:45.566197 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 27 08:30:45.566206 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 27 08:30:45.566214 kernel: ftrace: allocating 40092 entries in 157 pages Oct 27 08:30:45.566222 kernel: ftrace: allocated 157 pages with 5 groups Oct 27 08:30:45.566231 kernel: Dynamic Preempt: voluntary Oct 27 08:30:45.566239 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 27 08:30:45.566251 kernel: rcu: RCU event tracing is enabled. Oct 27 08:30:45.566259 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 27 08:30:45.566268 kernel: Trampoline variant of Tasks RCU enabled. Oct 27 08:30:45.566276 kernel: Rude variant of Tasks RCU enabled. Oct 27 08:30:45.566285 kernel: Tracing variant of Tasks RCU enabled. Oct 27 08:30:45.566306 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 27 08:30:45.566315 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 27 08:30:45.566326 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
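
The "Kernel command line:" entry above carries the knobs this boot runs with: root=LABEL=ROOT, flatcar.first_boot=detected, and the verity.usrhash used for the read-only /usr device. A minimal sketch of pulling those key=value tokens apart from /proc/cmdline on a running system (illustration only; quoted values and repeated keys are handled crudely, with the last occurrence winning):

    def parse_cmdline(text: str) -> dict:
        """Split a kernel command line into {key: value}; bare flags map to True."""
        params = {}
        for token in text.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True
        return params

    if __name__ == "__main__":
        with open("/proc/cmdline") as f:
            cmdline = parse_cmdline(f.read())
        print(cmdline.get("root"))            # e.g. LABEL=ROOT
        print(cmdline.get("verity.usrhash"))  # root hash for the /usr verity device
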
Oct 27 08:30:45.566334 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 27 08:30:45.566345 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 27 08:30:45.566354 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 27 08:30:45.566362 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 27 08:30:45.566370 kernel: Console: colour dummy device 80x25 Oct 27 08:30:45.566379 kernel: printk: legacy console [ttyS0] enabled Oct 27 08:30:45.566389 kernel: ACPI: Core revision 20240827 Oct 27 08:30:45.566397 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 27 08:30:45.566406 kernel: APIC: Switch to symmetric I/O mode setup Oct 27 08:30:45.566414 kernel: x2apic enabled Oct 27 08:30:45.566422 kernel: APIC: Switched APIC routing to: physical x2apic Oct 27 08:30:45.566431 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 27 08:30:45.566439 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 27 08:30:45.566450 kernel: kvm-guest: setup PV IPIs Oct 27 08:30:45.566458 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 27 08:30:45.566466 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Oct 27 08:30:45.566475 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Oct 27 08:30:45.566483 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 27 08:30:45.566492 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 27 08:30:45.566500 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 27 08:30:45.566510 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 27 08:30:45.566521 kernel: Spectre V2 : Mitigation: Retpolines Oct 27 08:30:45.566529 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 27 08:30:45.566538 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 27 08:30:45.566546 kernel: active return thunk: retbleed_return_thunk Oct 27 08:30:45.566554 kernel: RETBleed: Mitigation: untrained return thunk Oct 27 08:30:45.566563 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 27 08:30:45.566573 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 27 08:30:45.566582 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 27 08:30:45.566591 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 27 08:30:45.566599 kernel: active return thunk: srso_return_thunk Oct 27 08:30:45.566608 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 27 08:30:45.566616 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 27 08:30:45.566624 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 27 08:30:45.566635 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 27 08:30:45.566643 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 27 08:30:45.566651 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
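
The Spectre V1/V2, RETBleed and SRSO lines above have runtime counterparts in sysfs, which is a convenient way to cross-check what the kernel reported at boot. A small sketch, assuming the standard /sys/devices/system/cpu/vulnerabilities interface is present:

    import os

    VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

    # Each file holds the same "Mitigation: ..." / "Vulnerable: ..." text dmesg shows.
    for name in sorted(os.listdir(VULN_DIR)):
        with open(os.path.join(VULN_DIR, name)) as f:
            print(f"{name}: {f.read().strip()}")
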
Oct 27 08:30:45.566660 kernel: Freeing SMP alternatives memory: 32K Oct 27 08:30:45.566668 kernel: pid_max: default: 32768 minimum: 301 Oct 27 08:30:45.566676 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 27 08:30:45.566684 kernel: landlock: Up and running. Oct 27 08:30:45.566695 kernel: SELinux: Initializing. Oct 27 08:30:45.566703 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 27 08:30:45.566711 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 27 08:30:45.566720 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 27 08:30:45.566728 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 27 08:30:45.566736 kernel: ... version: 0 Oct 27 08:30:45.566747 kernel: ... bit width: 48 Oct 27 08:30:45.566759 kernel: ... generic registers: 6 Oct 27 08:30:45.566768 kernel: ... value mask: 0000ffffffffffff Oct 27 08:30:45.566776 kernel: ... max period: 00007fffffffffff Oct 27 08:30:45.566784 kernel: ... fixed-purpose events: 0 Oct 27 08:30:45.566793 kernel: ... event mask: 000000000000003f Oct 27 08:30:45.566801 kernel: signal: max sigframe size: 1776 Oct 27 08:30:45.566809 kernel: rcu: Hierarchical SRCU implementation. Oct 27 08:30:45.566818 kernel: rcu: Max phase no-delay instances is 400. Oct 27 08:30:45.566828 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 27 08:30:45.566837 kernel: smp: Bringing up secondary CPUs ... Oct 27 08:30:45.566845 kernel: smpboot: x86: Booting SMP configuration: Oct 27 08:30:45.566853 kernel: .... node #0, CPUs: #1 #2 #3 Oct 27 08:30:45.566861 kernel: smp: Brought up 1 node, 4 CPUs Oct 27 08:30:45.566869 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Oct 27 08:30:45.566878 kernel: Memory: 2431744K/2552216K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 114536K reserved, 0K cma-reserved) Oct 27 08:30:45.566888 kernel: devtmpfs: initialized Oct 27 08:30:45.566897 kernel: x86/mm: Memory block size: 128MB Oct 27 08:30:45.566905 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Oct 27 08:30:45.566913 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Oct 27 08:30:45.566922 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 27 08:30:45.566930 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 27 08:30:45.566939 kernel: pinctrl core: initialized pinctrl subsystem Oct 27 08:30:45.566949 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 27 08:30:45.566957 kernel: audit: initializing netlink subsys (disabled) Oct 27 08:30:45.566974 kernel: audit: type=2000 audit(1761553843.408:1): state=initialized audit_enabled=0 res=1 Oct 27 08:30:45.566982 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 27 08:30:45.566991 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 27 08:30:45.566999 kernel: cpuidle: using governor menu Oct 27 08:30:45.567007 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 27 08:30:45.567018 kernel: dca service started, version 1.12.1 Oct 27 08:30:45.567026 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Oct 27 08:30:45.567035 kernel: PCI: Using configuration type 1 for base access Oct 27 08:30:45.567043 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. Oct 27 08:30:45.567051 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 27 08:30:45.567060 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 27 08:30:45.567068 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 27 08:30:45.567078 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 27 08:30:45.567087 kernel: ACPI: Added _OSI(Module Device) Oct 27 08:30:45.567095 kernel: ACPI: Added _OSI(Processor Device) Oct 27 08:30:45.567103 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 27 08:30:45.567111 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 27 08:30:45.567120 kernel: ACPI: Interpreter enabled Oct 27 08:30:45.567128 kernel: ACPI: PM: (supports S0 S5) Oct 27 08:30:45.567138 kernel: ACPI: Using IOAPIC for interrupt routing Oct 27 08:30:45.567147 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 27 08:30:45.567155 kernel: PCI: Using E820 reservations for host bridge windows Oct 27 08:30:45.567163 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 27 08:30:45.567172 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 27 08:30:45.567439 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 27 08:30:45.567621 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 27 08:30:45.567804 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 27 08:30:45.567816 kernel: PCI host bridge to bus 0000:00 Oct 27 08:30:45.567996 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 27 08:30:45.568153 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 27 08:30:45.568325 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 27 08:30:45.568486 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Oct 27 08:30:45.568640 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Oct 27 08:30:45.568792 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Oct 27 08:30:45.568947 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 27 08:30:45.569144 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Oct 27 08:30:45.569346 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Oct 27 08:30:45.569517 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Oct 27 08:30:45.569770 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Oct 27 08:30:45.569937 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Oct 27 08:30:45.570115 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 27 08:30:45.570316 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 27 08:30:45.570497 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Oct 27 08:30:45.570666 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Oct 27 08:30:45.570837 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Oct 27 08:30:45.571047 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 27 08:30:45.571220 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Oct 27 08:30:45.571408 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Oct 27 08:30:45.571582 
kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Oct 27 08:30:45.571758 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 27 08:30:45.571926 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Oct 27 08:30:45.572104 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Oct 27 08:30:45.572270 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Oct 27 08:30:45.572457 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Oct 27 08:30:45.572869 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Oct 27 08:30:45.573048 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 27 08:30:45.573228 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Oct 27 08:30:45.573412 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Oct 27 08:30:45.573580 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Oct 27 08:30:45.573762 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Oct 27 08:30:45.573931 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Oct 27 08:30:45.573943 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 27 08:30:45.573951 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 27 08:30:45.573960 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 27 08:30:45.573977 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 27 08:30:45.573989 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 27 08:30:45.573998 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 27 08:30:45.574006 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 27 08:30:45.574015 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 27 08:30:45.574023 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 27 08:30:45.574032 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 27 08:30:45.574040 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 27 08:30:45.574050 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 27 08:30:45.574059 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 27 08:30:45.574067 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 27 08:30:45.574076 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 27 08:30:45.574084 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 27 08:30:45.574092 kernel: iommu: Default domain type: Translated Oct 27 08:30:45.574101 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 27 08:30:45.574111 kernel: efivars: Registered efivars operations Oct 27 08:30:45.574119 kernel: PCI: Using ACPI for IRQ routing Oct 27 08:30:45.574128 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 27 08:30:45.574136 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Oct 27 08:30:45.574144 kernel: e820: reserve RAM buffer [mem 0x9a100018-0x9bffffff] Oct 27 08:30:45.574152 kernel: e820: reserve RAM buffer [mem 0x9a13d018-0x9bffffff] Oct 27 08:30:45.574160 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Oct 27 08:30:45.574169 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Oct 27 08:30:45.574351 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 27 08:30:45.574518 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 27 08:30:45.574682 kernel: pci 0000:00:01.0: vgaarb: VGA 
device added: decodes=io+mem,owns=io+mem,locks=none Oct 27 08:30:45.574693 kernel: vgaarb: loaded Oct 27 08:30:45.574701 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 27 08:30:45.574710 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 27 08:30:45.574721 kernel: clocksource: Switched to clocksource kvm-clock Oct 27 08:30:45.574730 kernel: VFS: Disk quotas dquot_6.6.0 Oct 27 08:30:45.574738 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 27 08:30:45.574747 kernel: pnp: PnP ACPI init Oct 27 08:30:45.574931 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Oct 27 08:30:45.574943 kernel: pnp: PnP ACPI: found 6 devices Oct 27 08:30:45.574952 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 27 08:30:45.574973 kernel: NET: Registered PF_INET protocol family Oct 27 08:30:45.574982 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 27 08:30:45.574991 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 27 08:30:45.575000 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 27 08:30:45.575008 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 27 08:30:45.575017 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 27 08:30:45.575025 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 27 08:30:45.575036 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 27 08:30:45.575045 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 27 08:30:45.575053 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 27 08:30:45.575061 kernel: NET: Registered PF_XDP protocol family Oct 27 08:30:45.575230 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Oct 27 08:30:45.575417 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Oct 27 08:30:45.575578 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 27 08:30:45.575736 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 27 08:30:45.575927 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 27 08:30:45.576106 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Oct 27 08:30:45.576259 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Oct 27 08:30:45.576435 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Oct 27 08:30:45.576447 kernel: PCI: CLS 0 bytes, default 64 Oct 27 08:30:45.576460 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Oct 27 08:30:45.576469 kernel: Initialise system trusted keyrings Oct 27 08:30:45.576477 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 27 08:30:45.576486 kernel: Key type asymmetric registered Oct 27 08:30:45.576494 kernel: Asymmetric key parser 'x509' registered Oct 27 08:30:45.576517 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 27 08:30:45.576532 kernel: io scheduler mq-deadline registered Oct 27 08:30:45.576545 kernel: io scheduler kyber registered Oct 27 08:30:45.576554 kernel: io scheduler bfq registered Oct 27 08:30:45.576563 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 27 08:30:45.576572 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 27 08:30:45.576581 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 27 08:30:45.576589 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 27 08:30:45.576598 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 27 08:30:45.576609 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 27 08:30:45.576618 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 27 08:30:45.576626 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 27 08:30:45.576635 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 27 08:30:45.576813 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 27 08:30:45.576826 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 27 08:30:45.576994 kernel: rtc_cmos 00:04: registered as rtc0 Oct 27 08:30:45.577171 kernel: rtc_cmos 00:04: setting system clock to 2025-10-27T08:30:43 UTC (1761553843) Oct 27 08:30:45.577350 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Oct 27 08:30:45.577363 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 27 08:30:45.577371 kernel: efifb: probing for efifb Oct 27 08:30:45.577380 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Oct 27 08:30:45.577389 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Oct 27 08:30:45.577401 kernel: efifb: scrolling: redraw Oct 27 08:30:45.577409 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 27 08:30:45.577418 kernel: Console: switching to colour frame buffer device 160x50 Oct 27 08:30:45.577429 kernel: fb0: EFI VGA frame buffer device Oct 27 08:30:45.577438 kernel: pstore: Using crash dump compression: deflate Oct 27 08:30:45.577452 kernel: pstore: Registered efi_pstore as persistent store backend Oct 27 08:30:45.577461 kernel: NET: Registered PF_INET6 protocol family Oct 27 08:30:45.577469 kernel: Segment Routing with IPv6 Oct 27 08:30:45.577478 kernel: In-situ OAM (IOAM) with IPv6 Oct 27 08:30:45.577487 kernel: NET: Registered PF_PACKET protocol family Oct 27 08:30:45.577495 kernel: Key type dns_resolver registered Oct 27 08:30:45.577504 kernel: IPI shorthand broadcast: enabled Oct 27 08:30:45.577515 kernel: sched_clock: Marking stable (1703022893, 293026114)->(2091306459, -95257452) Oct 27 08:30:45.577524 kernel: registered taskstats version 1 Oct 27 08:30:45.577532 kernel: Loading compiled-in X.509 certificates Oct 27 08:30:45.577541 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 6c7ef547b8d769f7afd2708799fb9c3145695bfb' Oct 27 08:30:45.577550 kernel: Demotion targets for Node 0: null Oct 27 08:30:45.577558 kernel: Key type .fscrypt registered Oct 27 08:30:45.577567 kernel: Key type fscrypt-provisioning registered Oct 27 08:30:45.577580 kernel: ima: No TPM chip found, activating TPM-bypass! 
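
The rtc_cmos entry above gives both the wall-clock time and its Unix epoch (1761553843), the same epoch stamped on the earlier audit(1761553843.408:1) record. The conversion checks out:

    from datetime import datetime, timezone

    # Epoch value taken from the rtc_cmos / audit lines in this log.
    epoch = 1761553843
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # -> 2025-10-27T08:30:43+00:00
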
Oct 27 08:30:45.577589 kernel: ima: Allocated hash algorithm: sha1 Oct 27 08:30:45.577597 kernel: ima: No architecture policies found Oct 27 08:30:45.577606 kernel: clk: Disabling unused clocks Oct 27 08:30:45.577614 kernel: Freeing unused kernel image (initmem) memory: 15964K Oct 27 08:30:45.577623 kernel: Write protecting the kernel read-only data: 40960k Oct 27 08:30:45.577632 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 27 08:30:45.577643 kernel: Run /init as init process Oct 27 08:30:45.577651 kernel: with arguments: Oct 27 08:30:45.577660 kernel: /init Oct 27 08:30:45.577669 kernel: with environment: Oct 27 08:30:45.577677 kernel: HOME=/ Oct 27 08:30:45.577686 kernel: TERM=linux Oct 27 08:30:45.577706 kernel: SCSI subsystem initialized Oct 27 08:30:45.577725 kernel: libata version 3.00 loaded. Oct 27 08:30:45.577903 kernel: ahci 0000:00:1f.2: version 3.0 Oct 27 08:30:45.577915 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 27 08:30:45.578105 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 27 08:30:45.578273 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 27 08:30:45.578466 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 27 08:30:45.578672 kernel: scsi host0: ahci Oct 27 08:30:45.578862 kernel: scsi host1: ahci Oct 27 08:30:45.579051 kernel: scsi host2: ahci Oct 27 08:30:45.579229 kernel: scsi host3: ahci Oct 27 08:30:45.579426 kernel: scsi host4: ahci Oct 27 08:30:45.579604 kernel: scsi host5: ahci Oct 27 08:30:45.579620 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Oct 27 08:30:45.579630 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Oct 27 08:30:45.579638 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Oct 27 08:30:45.579647 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Oct 27 08:30:45.579656 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Oct 27 08:30:45.579665 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Oct 27 08:30:45.579676 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 27 08:30:45.579685 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 27 08:30:45.579694 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 27 08:30:45.579702 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 27 08:30:45.579711 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 27 08:30:45.579720 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 27 08:30:45.579729 kernel: ata3.00: LPM support broken, forcing max_power Oct 27 08:30:45.579738 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 27 08:30:45.579749 kernel: ata3.00: applying bridge limits Oct 27 08:30:45.579757 kernel: ata3.00: LPM support broken, forcing max_power Oct 27 08:30:45.579766 kernel: ata3.00: configured for UDMA/100 Oct 27 08:30:45.579980 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 27 08:30:45.580172 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 27 08:30:45.580363 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 27 08:30:45.580380 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 27 08:30:45.580389 kernel: GPT:16515071 != 27000831 Oct 27 08:30:45.580397 kernel: GPT:Alternate GPT header not at the end of the disk. 
Oct 27 08:30:45.580406 kernel: GPT:16515071 != 27000831 Oct 27 08:30:45.580414 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 27 08:30:45.580423 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 27 08:30:45.580432 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 27 08:30:45.580623 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 27 08:30:45.580635 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 27 08:30:45.580824 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 27 08:30:45.580836 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 27 08:30:45.580844 kernel: device-mapper: uevent: version 1.0.3 Oct 27 08:30:45.580853 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 27 08:30:45.580865 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 27 08:30:45.580875 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 27 08:30:45.580883 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 27 08:30:45.580892 kernel: raid6: avx2x4 gen() 30553 MB/s Oct 27 08:30:45.580900 kernel: raid6: avx2x2 gen() 31347 MB/s Oct 27 08:30:45.580909 kernel: raid6: avx2x1 gen() 25026 MB/s Oct 27 08:30:45.580918 kernel: raid6: using algorithm avx2x2 gen() 31347 MB/s Oct 27 08:30:45.580930 kernel: raid6: .... xor() 19796 MB/s, rmw enabled Oct 27 08:30:45.580944 kernel: raid6: using avx2x2 recovery algorithm Oct 27 08:30:45.580954 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 27 08:30:45.580974 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 27 08:30:45.580985 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 27 08:30:45.580997 kernel: xor: automatically using best checksumming function avx Oct 27 08:30:45.581009 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 27 08:30:45.581020 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 27 08:30:45.581032 kernel: BTRFS: device fsid bf514789-bcec-4c15-ac9d-e4c3d19a42b2 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (177) Oct 27 08:30:45.581047 kernel: BTRFS info (device dm-0): first mount of filesystem bf514789-bcec-4c15-ac9d-e4c3d19a42b2 Oct 27 08:30:45.581059 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:30:45.581071 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 27 08:30:45.581082 kernel: BTRFS info (device dm-0): enabling free space tree Oct 27 08:30:45.581092 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 27 08:30:45.581100 kernel: loop: module loaded Oct 27 08:30:45.581109 kernel: loop0: detected capacity change from 0 to 100120 Oct 27 08:30:45.581120 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 27 08:30:45.581130 systemd[1]: Successfully made /usr/ read-only. 
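
The GPT warnings above ("GPT:16515071 != 27000831") arise because the primary GPT header still describes a 16515072-sector image while the virtio disk it was written to actually has 27000832 sectors, so the backup header is no longer at the last LBA. A hedged sketch of that exact comparison, reading the header fields directly (assumes /dev/vda as in this log and root privileges; disk-uuid.service later in this log rewrites the table for real):

    import struct

    DISK = "/dev/vda"
    SECTOR = 512

    with open(DISK, "rb") as f:
        f.seek(1 * SECTOR)                 # primary GPT header lives at LBA 1
        hdr = f.read(92)
        f.seek(0, 2)                       # seek to the end to learn the device size
        last_lba = f.tell() // SECTOR - 1

    assert hdr[0:8] == b"EFI PART", "no GPT signature found"
    backup_lba = struct.unpack_from("<Q", hdr, 32)[0]   # AlternateLBA field

    if backup_lba != last_lba:
        print(f"GPT backup header at LBA {backup_lba}, disk ends at LBA {last_lba}")
    else:
        print("GPT backup header is at the end of the disk")
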
Oct 27 08:30:45.581142 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 27 08:30:45.581152 systemd[1]: Detected virtualization kvm. Oct 27 08:30:45.581161 systemd[1]: Detected architecture x86-64. Oct 27 08:30:45.581170 systemd[1]: Running in initrd. Oct 27 08:30:45.581181 systemd[1]: No hostname configured, using default hostname. Oct 27 08:30:45.581191 systemd[1]: Hostname set to . Oct 27 08:30:45.581200 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 27 08:30:45.581209 systemd[1]: Queued start job for default target initrd.target. Oct 27 08:30:45.581218 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 27 08:30:45.581227 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 08:30:45.581239 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 08:30:45.581249 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 27 08:30:45.581258 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 27 08:30:45.581268 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 27 08:30:45.581278 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 27 08:30:45.581287 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 08:30:45.581314 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 27 08:30:45.581324 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 27 08:30:45.581333 systemd[1]: Reached target paths.target - Path Units. Oct 27 08:30:45.581342 systemd[1]: Reached target slices.target - Slice Units. Oct 27 08:30:45.581351 systemd[1]: Reached target swap.target - Swaps. Oct 27 08:30:45.581360 systemd[1]: Reached target timers.target - Timer Units. Oct 27 08:30:45.581370 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 27 08:30:45.581381 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 27 08:30:45.581390 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 27 08:30:45.581400 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 27 08:30:45.581409 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 27 08:30:45.581418 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 27 08:30:45.581428 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 08:30:45.581440 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 08:30:45.581449 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 27 08:30:45.581458 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 27 08:30:45.581468 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
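
The device units being waited on above (e.g. dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device for /dev/disk/by-label/EFI-SYSTEM) get their names from systemd's path escaping, where "/" roughly becomes "-" and other punctuation becomes a \xNN escape. A rough approximation of that rule, for illustration only (systemd-escape is the authoritative tool):

    def systemd_escape_path(path: str) -> str:
        # Trim slashes, turn "/" into "-", hex-escape anything else that is not
        # alphanumeric, "_" or a non-leading "." (so "-" becomes "\x2d").
        trimmed = path.strip("/")
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch == "_" or (ch == "." and i > 0):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
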
Oct 27 08:30:45.581477 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 27 08:30:45.581487 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 27 08:30:45.581497 systemd[1]: Starting systemd-fsck-usr.service... Oct 27 08:30:45.581508 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 27 08:30:45.581517 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 27 08:30:45.581526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:30:45.581536 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 27 08:30:45.581548 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 08:30:45.581557 systemd[1]: Finished systemd-fsck-usr.service. Oct 27 08:30:45.581775 systemd-journald[311]: Collecting audit messages is disabled. Oct 27 08:30:45.581802 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 27 08:30:45.581812 systemd-journald[311]: Journal started Oct 27 08:30:45.581830 systemd-journald[311]: Runtime Journal (/run/log/journal/932e5e4ca9cf4a25b76eb967039b4828) is 5.9M, max 47.9M, 41.9M free. Oct 27 08:30:45.584319 systemd[1]: Started systemd-journald.service - Journal Service. Oct 27 08:30:45.594320 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 27 08:30:45.595594 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 27 08:30:45.599453 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 08:30:45.604759 kernel: Bridge firewalling registered Oct 27 08:30:45.602274 systemd-modules-load[314]: Inserted module 'br_netfilter' Oct 27 08:30:45.604668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:30:45.616518 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 27 08:30:45.621963 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 27 08:30:45.626491 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 27 08:30:45.627786 systemd-tmpfiles[328]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 27 08:30:45.631619 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 27 08:30:45.641806 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 08:30:45.652789 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 08:30:45.658430 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 27 08:30:45.660749 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 27 08:30:45.663267 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 08:30:45.673064 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
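
The bridge driver's note above ("filtering via arp/ip/ip6tables is no longer available by default") is why systemd-modules-load inserts br_netfilter here. A sketch of the "update your scripts" step the message refers to, for systems that still want iptables to see bridged traffic (run as root; the sysctl knobs are the standard bridge-nf ones):

    import subprocess

    # Load the module, then confirm the bridge-netfilter knobs exist and show their values.
    subprocess.run(["modprobe", "br_netfilter"], check=True)

    for knob in ("bridge-nf-call-iptables", "bridge-nf-call-ip6tables"):
        with open(f"/proc/sys/net/bridge/{knob}") as f:
            print(knob, "=", f.read().strip())
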
Oct 27 08:30:45.695209 dracut-cmdline[353]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e Oct 27 08:30:45.733184 systemd-resolved[351]: Positive Trust Anchors: Oct 27 08:30:45.733207 systemd-resolved[351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 27 08:30:45.733216 systemd-resolved[351]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 27 08:30:45.733266 systemd-resolved[351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 27 08:30:45.768245 systemd-resolved[351]: Defaulting to hostname 'linux'. Oct 27 08:30:45.770154 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 27 08:30:45.774432 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 27 08:30:45.848372 kernel: Loading iSCSI transport class v2.0-870. Oct 27 08:30:45.864329 kernel: iscsi: registered transport (tcp) Oct 27 08:30:45.893715 kernel: iscsi: registered transport (qla4xxx) Oct 27 08:30:45.893761 kernel: QLogic iSCSI HBA Driver Oct 27 08:30:45.930532 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 27 08:30:45.990404 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 08:30:45.996527 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 27 08:30:46.064717 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 27 08:30:46.067834 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 27 08:30:46.071868 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 27 08:30:46.121060 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 27 08:30:46.123698 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 08:30:46.166877 systemd-udevd[587]: Using default interface naming scheme 'v257'. Oct 27 08:30:46.180502 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 08:30:46.187250 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 27 08:30:46.220068 dracut-pre-trigger[641]: rd.md=0: removing MD RAID activation Oct 27 08:30:46.239019 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 27 08:30:46.241672 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 08:30:46.266064 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 27 08:30:46.268253 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 27 08:30:46.297396 systemd-networkd[710]: lo: Link UP Oct 27 08:30:46.297404 systemd-networkd[710]: lo: Gained carrier Oct 27 08:30:46.298202 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 08:30:46.300463 systemd[1]: Reached target network.target - Network. Oct 27 08:30:46.363391 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 08:30:46.369441 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 27 08:30:46.414740 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 27 08:30:46.436558 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 27 08:30:46.459619 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 27 08:30:46.466316 kernel: cryptd: max_cpu_qlen set to 1000 Oct 27 08:30:46.472906 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 27 08:30:46.483225 systemd-networkd[710]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:30:46.486785 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 27 08:30:46.483323 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 27 08:30:46.484353 systemd-networkd[710]: eth0: Link UP Oct 27 08:30:46.484567 systemd-networkd[710]: eth0: Gained carrier Oct 27 08:30:46.484576 systemd-networkd[710]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:30:46.484690 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 27 08:30:46.500238 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 08:30:46.502874 kernel: AES CTR mode by8 optimization enabled Oct 27 08:30:46.500612 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:30:46.508310 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:30:46.513350 systemd-networkd[710]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 27 08:30:46.521788 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:30:46.534578 disk-uuid[822]: Primary Header is updated. Oct 27 08:30:46.534578 disk-uuid[822]: Secondary Entries is updated. Oct 27 08:30:46.534578 disk-uuid[822]: Secondary Header is updated. Oct 27 08:30:46.612898 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:30:46.638704 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 27 08:30:46.652067 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 08:30:46.655587 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 08:30:46.657613 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 27 08:30:46.665251 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 27 08:30:46.692447 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 27 08:30:47.630341 disk-uuid[828]: Warning: The kernel is still using the old partition table. 
Oct 27 08:30:47.630341 disk-uuid[828]: The new table will be used at the next reboot or after you Oct 27 08:30:47.630341 disk-uuid[828]: run partprobe(8) or kpartx(8) Oct 27 08:30:47.630341 disk-uuid[828]: The operation has completed successfully. Oct 27 08:30:47.641688 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 27 08:30:47.641818 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 27 08:30:47.643805 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 27 08:30:47.682313 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (856) Oct 27 08:30:47.682374 kernel: BTRFS info (device vda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:30:47.685408 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:30:47.689190 kernel: BTRFS info (device vda6): turning on async discard Oct 27 08:30:47.689215 kernel: BTRFS info (device vda6): enabling free space tree Oct 27 08:30:47.696317 kernel: BTRFS info (device vda6): last unmount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:30:47.696966 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 27 08:30:47.698669 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 27 08:30:47.955637 ignition[875]: Ignition 2.22.0 Oct 27 08:30:47.955654 ignition[875]: Stage: fetch-offline Oct 27 08:30:47.955881 ignition[875]: no configs at "/usr/lib/ignition/base.d" Oct 27 08:30:47.955895 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 08:30:47.956033 ignition[875]: parsed url from cmdline: "" Oct 27 08:30:47.956037 ignition[875]: no config URL provided Oct 27 08:30:47.956045 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" Oct 27 08:30:47.956056 ignition[875]: no config at "/usr/lib/ignition/user.ign" Oct 27 08:30:47.957016 ignition[875]: op(1): [started] loading QEMU firmware config module Oct 27 08:30:47.957022 ignition[875]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 27 08:30:47.970646 ignition[875]: op(1): [finished] loading QEMU firmware config module Oct 27 08:30:48.050916 ignition[875]: parsing config with SHA512: 7c7c97d9d83f77322e45ddaae3bc21bfcbed7a6800d3cea1ed8d577bb1f48fe9209928afb52f7cf0a4d82eedb938f55f6f323afc9259181ac229f55ecb3237b2 Oct 27 08:30:48.059527 unknown[875]: fetched base config from "system" Oct 27 08:30:48.059539 unknown[875]: fetched user config from "qemu" Oct 27 08:30:48.062358 ignition[875]: fetch-offline: fetch-offline passed Oct 27 08:30:48.063782 ignition[875]: Ignition finished successfully Oct 27 08:30:48.068038 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 08:30:48.068887 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 27 08:30:48.069824 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 27 08:30:48.175574 ignition[886]: Ignition 2.22.0 Oct 27 08:30:48.175586 ignition[886]: Stage: kargs Oct 27 08:30:48.175765 ignition[886]: no configs at "/usr/lib/ignition/base.d" Oct 27 08:30:48.175775 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 08:30:48.176794 ignition[886]: kargs: kargs passed Oct 27 08:30:48.176841 ignition[886]: Ignition finished successfully Oct 27 08:30:48.186837 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
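
Ignition's fetch-offline stage above loads qemu_fw_cfg and logs the SHA512 of the config blob it parsed before reporting "fetched user config from qemu". A sketch of reading that same blob out of sysfs and fingerprinting it; the by_name path below is an assumption based on the conventional opt/com.coreos/config fw_cfg key, so adjust it if the image uses a different one:

    import hashlib

    FW_CFG_RAW = "/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw"

    with open(FW_CFG_RAW, "rb") as f:
        blob = f.read()

    print("bytes:  ", len(blob))
    print("sha512: ", hashlib.sha512(blob).hexdigest())
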
Oct 27 08:30:48.191115 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 27 08:30:48.248951 ignition[894]: Ignition 2.22.0 Oct 27 08:30:48.248964 ignition[894]: Stage: disks Oct 27 08:30:48.249100 ignition[894]: no configs at "/usr/lib/ignition/base.d" Oct 27 08:30:48.249109 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 08:30:48.314313 ignition[894]: disks: disks passed Oct 27 08:30:48.314388 ignition[894]: Ignition finished successfully Oct 27 08:30:48.318008 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 27 08:30:48.321453 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 27 08:30:48.324984 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 27 08:30:48.328915 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 27 08:30:48.332083 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 08:30:48.335345 systemd[1]: Reached target basic.target - Basic System. Oct 27 08:30:48.339236 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 27 08:30:48.375421 systemd-networkd[710]: eth0: Gained IPv6LL Oct 27 08:30:48.385850 systemd-fsck[904]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 27 08:30:48.393752 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 27 08:30:48.399521 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 27 08:30:48.563324 kernel: EXT4-fs (vda9): mounted filesystem e90e2fe3-e1db-4bff-abac-c8d1d032f674 r/w with ordered data mode. Quota mode: none. Oct 27 08:30:48.564234 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 27 08:30:48.565494 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 27 08:30:48.568628 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 27 08:30:48.573113 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 27 08:30:48.574030 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 27 08:30:48.574064 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 27 08:30:48.574090 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 27 08:30:48.591327 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (913) Oct 27 08:30:48.595091 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 27 08:30:48.598019 kernel: BTRFS info (device vda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:30:48.598049 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:30:48.597416 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 27 08:30:48.605388 kernel: BTRFS info (device vda6): turning on async discard Oct 27 08:30:48.605419 kernel: BTRFS info (device vda6): enabling free space tree Oct 27 08:30:48.606997 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 27 08:30:48.658691 initrd-setup-root[937]: cut: /sysroot/etc/passwd: No such file or directory Oct 27 08:30:48.664954 initrd-setup-root[944]: cut: /sysroot/etc/group: No such file or directory Oct 27 08:30:48.670882 initrd-setup-root[951]: cut: /sysroot/etc/shadow: No such file or directory Oct 27 08:30:48.674982 initrd-setup-root[958]: cut: /sysroot/etc/gshadow: No such file or directory Oct 27 08:30:48.773253 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 27 08:30:48.776973 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 27 08:30:48.779345 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 27 08:30:48.796060 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 27 08:30:48.798576 kernel: BTRFS info (device vda6): last unmount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:30:48.813469 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 27 08:30:48.842227 ignition[1027]: INFO : Ignition 2.22.0 Oct 27 08:30:48.842227 ignition[1027]: INFO : Stage: mount Oct 27 08:30:48.844865 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 08:30:48.844865 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 08:30:48.844865 ignition[1027]: INFO : mount: mount passed Oct 27 08:30:48.844865 ignition[1027]: INFO : Ignition finished successfully Oct 27 08:30:48.853875 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 27 08:30:48.855886 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 27 08:30:48.881013 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 27 08:30:48.909960 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1041) Oct 27 08:30:48.910017 kernel: BTRFS info (device vda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:30:48.910030 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:30:48.915196 kernel: BTRFS info (device vda6): turning on async discard Oct 27 08:30:48.915221 kernel: BTRFS info (device vda6): enabling free space tree Oct 27 08:30:48.916867 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 27 08:30:48.966171 ignition[1058]: INFO : Ignition 2.22.0 Oct 27 08:30:48.966171 ignition[1058]: INFO : Stage: files Oct 27 08:30:48.968859 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 08:30:48.968859 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 08:30:48.968859 ignition[1058]: DEBUG : files: compiled without relabeling support, skipping Oct 27 08:30:48.968859 ignition[1058]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 27 08:30:48.968859 ignition[1058]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 27 08:30:48.979007 ignition[1058]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 27 08:30:48.979007 ignition[1058]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 27 08:30:48.979007 ignition[1058]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 27 08:30:48.979007 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 27 08:30:48.979007 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Oct 27 08:30:48.972092 unknown[1058]: wrote ssh authorized keys file for user: core Oct 27 08:30:49.024994 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 27 08:30:49.170689 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 27 08:30:49.170689 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 27 08:30:49.177311 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 27 08:30:49.177311 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 27 08:30:49.177311 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 27 08:30:49.177311 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 27 08:30:49.177311 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 27 08:30:49.177311 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 27 08:30:49.177311 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 27 08:30:49.177311 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 27 08:30:49.177311 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 27 08:30:49.177311 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 27 08:30:49.206742 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 27 08:30:49.206742 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 27 08:30:49.206742 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Oct 27 08:30:49.448354 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 27 08:30:49.924606 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 27 08:30:49.924606 ignition[1058]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 27 08:30:49.930775 ignition[1058]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 27 08:30:49.930775 ignition[1058]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 27 08:30:49.930775 ignition[1058]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 27 08:30:49.930775 ignition[1058]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 27 08:30:49.930775 ignition[1058]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 27 08:30:49.930775 ignition[1058]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 27 08:30:49.930775 ignition[1058]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 27 08:30:49.930775 ignition[1058]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 27 08:30:50.034688 ignition[1058]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 27 08:30:50.044431 ignition[1058]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 27 08:30:50.046934 ignition[1058]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 27 08:30:50.046934 ignition[1058]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 27 08:30:50.046934 ignition[1058]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 27 08:30:50.046934 ignition[1058]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 27 08:30:50.046934 ignition[1058]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 27 08:30:50.046934 ignition[1058]: INFO : files: files passed Oct 27 08:30:50.046934 ignition[1058]: INFO : Ignition finished successfully Oct 27 08:30:50.064902 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 27 08:30:50.066854 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 27 08:30:50.069462 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 27 08:30:50.083437 systemd[1]: ignition-quench.service: Deactivated successfully. 
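The files-stage entries above record Ignition fetching the helm tarball over HTTPS, writing files under /sysroot, and creating the kubernetes.raw sysext link. Ignition performs these operations natively from its JSON config; the Python sketch below only mirrors two of them against a made-up scratch root (/tmp/sysroot) for illustration. The URL and the link target are copied from the log; everything else is an assumption.

    import os
    import shutil
    import urllib.request

    SYSROOT = "/tmp/sysroot"  # hypothetical scratch root, NOT the real /sysroot from the log
    HELM_URL = "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"  # URL as logged by Ignition

    def write_file_from_url(url: str, dest: str) -> None:
        # Fetch an artifact and write it to its destination, creating parent directories.
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
            shutil.copyfileobj(resp, out)

    def write_link(target: str, link_path: str) -> None:
        # Create (or replace) a symlink, mirroring the kubernetes.raw link written above.
        os.makedirs(os.path.dirname(link_path), exist_ok=True)
        if os.path.lexists(link_path):
            os.remove(link_path)
        os.symlink(target, link_path)

    if __name__ == "__main__":
        write_file_from_url(HELM_URL, os.path.join(SYSROOT, "opt/helm-v3.17.0-linux-amd64.tar.gz"))
        write_link("/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                   os.path.join(SYSROOT, "etc/extensions/kubernetes.raw"))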
Oct 27 08:30:50.083603 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 27 08:30:50.093355 initrd-setup-root-after-ignition[1089]: grep: /sysroot/oem/oem-release: No such file or directory Oct 27 08:30:50.097881 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 27 08:30:50.097881 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 27 08:30:50.103464 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 27 08:30:50.106966 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 27 08:30:50.107810 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 27 08:30:50.112783 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 27 08:30:50.191894 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 27 08:30:50.192076 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 27 08:30:50.193367 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 27 08:30:50.200807 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 27 08:30:50.205056 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 27 08:30:50.206365 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 27 08:30:50.243142 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 27 08:30:50.249803 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 27 08:30:50.277369 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 27 08:30:50.277615 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 27 08:30:50.278988 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 08:30:50.287936 systemd[1]: Stopped target timers.target - Timer Units. Oct 27 08:30:50.288796 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 27 08:30:50.288974 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 27 08:30:50.296364 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 27 08:30:50.297198 systemd[1]: Stopped target basic.target - Basic System. Oct 27 08:30:50.301007 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 27 08:30:50.301732 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 27 08:30:50.306837 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 27 08:30:50.307396 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 27 08:30:50.314038 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 27 08:30:50.317666 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 08:30:50.318310 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 27 08:30:50.327466 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 27 08:30:50.328197 systemd[1]: Stopped target swap.target - Swaps. Oct 27 08:30:50.333275 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Oct 27 08:30:50.333501 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 27 08:30:50.338447 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 27 08:30:50.339283 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 08:30:50.342706 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 27 08:30:50.347264 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 08:30:50.351487 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 27 08:30:50.351673 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 27 08:30:50.356428 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 27 08:30:50.356594 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 08:30:50.357852 systemd[1]: Stopped target paths.target - Path Units. Oct 27 08:30:50.362804 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 27 08:30:50.368389 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 08:30:50.369155 systemd[1]: Stopped target slices.target - Slice Units. Oct 27 08:30:50.373717 systemd[1]: Stopped target sockets.target - Socket Units. Oct 27 08:30:50.376221 systemd[1]: iscsid.socket: Deactivated successfully. Oct 27 08:30:50.376364 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 27 08:30:50.379100 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 27 08:30:50.379222 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 27 08:30:50.381936 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 27 08:30:50.382075 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 27 08:30:50.384978 systemd[1]: ignition-files.service: Deactivated successfully. Oct 27 08:30:50.385108 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 27 08:30:50.392733 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 27 08:30:50.394625 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 27 08:30:50.398248 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 27 08:30:50.398446 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 08:30:50.398908 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 27 08:30:50.399047 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 08:30:50.408570 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 27 08:30:50.408701 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 27 08:30:50.418915 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 27 08:30:50.421625 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 27 08:30:50.448674 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Oct 27 08:30:50.509745 ignition[1115]: INFO : Ignition 2.22.0 Oct 27 08:30:50.509745 ignition[1115]: INFO : Stage: umount Oct 27 08:30:50.512842 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 08:30:50.512842 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 08:30:50.512842 ignition[1115]: INFO : umount: umount passed Oct 27 08:30:50.512842 ignition[1115]: INFO : Ignition finished successfully Oct 27 08:30:50.514812 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 27 08:30:50.515024 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 27 08:30:50.516074 systemd[1]: Stopped target network.target - Network. Oct 27 08:30:50.519867 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 27 08:30:50.519937 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 27 08:30:50.522862 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 27 08:30:50.522928 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 27 08:30:50.525854 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 27 08:30:50.525921 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 27 08:30:50.528739 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 27 08:30:50.528803 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 27 08:30:50.531945 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 27 08:30:50.534869 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 27 08:30:50.548914 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 27 08:30:50.549162 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 27 08:30:50.555875 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 27 08:30:50.556018 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 27 08:30:50.566121 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 27 08:30:50.566256 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 27 08:30:50.568025 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 27 08:30:50.571055 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 27 08:30:50.571116 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 27 08:30:50.577073 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 27 08:30:50.577142 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 27 08:30:50.579472 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 27 08:30:50.582958 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 27 08:30:50.583095 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 27 08:30:50.584331 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 27 08:30:50.584379 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 27 08:30:50.590771 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 27 08:30:50.590823 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 27 08:30:50.592095 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 08:30:50.616431 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Oct 27 08:30:50.616629 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 08:30:50.617592 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 27 08:30:50.617638 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 27 08:30:50.621006 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 27 08:30:50.621049 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 08:30:50.627345 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 27 08:30:50.627403 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 27 08:30:50.628748 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 27 08:30:50.628810 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 27 08:30:50.634306 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 27 08:30:50.634363 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 08:30:50.640281 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 27 08:30:50.641638 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 27 08:30:50.641695 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 08:30:50.642230 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 27 08:30:50.642276 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 08:30:50.642778 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 27 08:30:50.642821 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 08:30:50.651777 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 27 08:30:50.651827 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 08:30:50.655794 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 08:30:50.655846 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:30:50.674282 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 27 08:30:50.674501 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 27 08:30:50.678347 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 27 08:30:50.678539 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 27 08:30:50.681505 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 27 08:30:50.685095 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 27 08:30:50.699935 systemd[1]: Switching root. Oct 27 08:30:50.751327 systemd-journald[311]: Received SIGTERM from PID 1 (systemd). 
Oct 27 08:30:50.751418 systemd-journald[311]: Journal stopped Oct 27 08:30:52.185575 kernel: SELinux: policy capability network_peer_controls=1 Oct 27 08:30:52.185646 kernel: SELinux: policy capability open_perms=1 Oct 27 08:30:52.185659 kernel: SELinux: policy capability extended_socket_class=1 Oct 27 08:30:52.185672 kernel: SELinux: policy capability always_check_network=0 Oct 27 08:30:52.185688 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 27 08:30:52.185700 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 27 08:30:52.185713 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 27 08:30:52.185732 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 27 08:30:52.185744 kernel: SELinux: policy capability userspace_initial_context=0 Oct 27 08:30:52.185765 kernel: audit: type=1403 audit(1761553851.306:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 27 08:30:52.185779 systemd[1]: Successfully loaded SELinux policy in 79.616ms. Oct 27 08:30:52.185799 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.311ms. Oct 27 08:30:52.185813 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 27 08:30:52.185826 systemd[1]: Detected virtualization kvm. Oct 27 08:30:52.185844 systemd[1]: Detected architecture x86-64. Oct 27 08:30:52.185857 systemd[1]: Detected first boot. Oct 27 08:30:52.185870 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 27 08:30:52.185888 zram_generator::config[1160]: No configuration found. Oct 27 08:30:52.185901 kernel: Guest personality initialized and is inactive Oct 27 08:30:52.185913 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 27 08:30:52.185925 kernel: Initialized host personality Oct 27 08:30:52.185945 kernel: NET: Registered PF_VSOCK protocol family Oct 27 08:30:52.185958 systemd[1]: Populated /etc with preset unit settings. Oct 27 08:30:52.185971 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 27 08:30:52.185983 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 27 08:30:52.185997 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 27 08:30:52.186011 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 27 08:30:52.186024 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 27 08:30:52.186039 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 27 08:30:52.186053 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 27 08:30:52.186066 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 27 08:30:52.186083 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 27 08:30:52.186107 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 27 08:30:52.186120 systemd[1]: Created slice user.slice - User and Session Slice. Oct 27 08:30:52.186149 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 08:30:52.186166 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Oct 27 08:30:52.186178 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 27 08:30:52.186191 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 27 08:30:52.186204 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 27 08:30:52.186219 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 27 08:30:52.186234 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 27 08:30:52.186250 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 08:30:52.186270 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 27 08:30:52.186284 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 27 08:30:52.186394 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 27 08:30:52.186409 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 27 08:30:52.186422 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 27 08:30:52.186435 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 08:30:52.186450 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 27 08:30:52.186462 systemd[1]: Reached target slices.target - Slice Units. Oct 27 08:30:52.186474 systemd[1]: Reached target swap.target - Swaps. Oct 27 08:30:52.186487 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 27 08:30:52.186499 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 27 08:30:52.186512 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 27 08:30:52.186525 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 27 08:30:52.186540 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 27 08:30:52.186553 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 08:30:52.186565 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 27 08:30:52.186578 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 27 08:30:52.186591 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 27 08:30:52.186603 systemd[1]: Mounting media.mount - External Media Directory... Oct 27 08:30:52.186616 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:30:52.186631 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 27 08:30:52.186644 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 27 08:30:52.186657 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 27 08:30:52.186670 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 27 08:30:52.186683 systemd[1]: Reached target machines.target - Containers. Oct 27 08:30:52.186695 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 27 08:30:52.186708 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Oct 27 08:30:52.186722 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 27 08:30:52.186735 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 27 08:30:52.186748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 08:30:52.186760 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 27 08:30:52.186778 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 08:30:52.186791 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 27 08:30:52.186804 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 08:30:52.186819 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 27 08:30:52.186832 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 27 08:30:52.186844 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 27 08:30:52.186857 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 27 08:30:52.186870 systemd[1]: Stopped systemd-fsck-usr.service. Oct 27 08:30:52.186883 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 08:30:52.186897 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 27 08:30:52.186910 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 27 08:30:52.186923 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 27 08:30:52.186953 systemd-journald[1224]: Collecting audit messages is disabled. Oct 27 08:30:52.186977 systemd-journald[1224]: Journal started Oct 27 08:30:52.187002 systemd-journald[1224]: Runtime Journal (/run/log/journal/932e5e4ca9cf4a25b76eb967039b4828) is 5.9M, max 47.9M, 41.9M free. Oct 27 08:30:52.189897 kernel: fuse: init (API version 7.41) Oct 27 08:30:51.891327 systemd[1]: Queued start job for default target multi-user.target. Oct 27 08:30:51.904484 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 27 08:30:51.905019 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 27 08:30:52.193762 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 27 08:30:52.201938 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 27 08:30:52.207327 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 27 08:30:52.215346 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:30:52.219324 systemd[1]: Started systemd-journald.service - Journal Service. Oct 27 08:30:52.221929 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 27 08:30:52.223816 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 27 08:30:52.228326 kernel: ACPI: bus type drm_connector registered Oct 27 08:30:52.228563 systemd[1]: Mounted media.mount - External Media Directory. Oct 27 08:30:52.230636 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Oct 27 08:30:52.232940 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 27 08:30:52.235305 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 27 08:30:52.237418 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 08:30:52.240327 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 27 08:30:52.240692 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 27 08:30:52.243243 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 08:30:52.243601 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 08:30:52.245952 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 27 08:30:52.246213 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 27 08:30:52.248385 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 08:30:52.248641 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 08:30:52.251038 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 27 08:30:52.251309 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 27 08:30:52.253482 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 08:30:52.253691 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 08:30:52.255904 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 27 08:30:52.258185 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 08:30:52.261480 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 27 08:30:52.264030 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 27 08:30:52.278489 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 27 08:30:52.280963 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 27 08:30:52.284244 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 27 08:30:52.287247 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 27 08:30:52.289438 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 27 08:30:52.289555 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 27 08:30:52.292545 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 27 08:30:52.295575 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 08:30:52.300328 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 27 08:30:52.303417 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 27 08:30:52.305301 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 08:30:52.307418 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 27 08:30:52.309152 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 08:30:52.310319 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Oct 27 08:30:52.312062 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 27 08:30:52.332420 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 27 08:30:52.334094 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 08:30:52.337242 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 27 08:30:52.339835 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 27 08:30:52.374969 systemd-journald[1224]: Time spent on flushing to /var/log/journal/932e5e4ca9cf4a25b76eb967039b4828 is 18.300ms for 1033 entries. Oct 27 08:30:52.374969 systemd-journald[1224]: System Journal (/var/log/journal/932e5e4ca9cf4a25b76eb967039b4828) is 8M, max 163.5M, 155.5M free. Oct 27 08:30:53.195313 systemd-journald[1224]: Received client request to flush runtime journal. Oct 27 08:30:53.195387 kernel: loop1: detected capacity change from 0 to 224512 Oct 27 08:30:53.195424 kernel: loop2: detected capacity change from 0 to 110984 Oct 27 08:30:53.195449 kernel: loop3: detected capacity change from 0 to 128048 Oct 27 08:30:53.195471 kernel: loop4: detected capacity change from 0 to 224512 Oct 27 08:30:53.195490 kernel: loop5: detected capacity change from 0 to 110984 Oct 27 08:30:53.195511 kernel: loop6: detected capacity change from 0 to 128048 Oct 27 08:30:53.195529 zram_generator::config[1323]: No configuration found. Oct 27 08:30:53.195557 zram_generator::config[1385]: No configuration found. Oct 27 08:30:52.383648 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Oct 27 08:30:52.383663 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Oct 27 08:30:52.386439 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 27 08:30:52.390808 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 08:30:52.644734 (sd-merge)[1284]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 27 08:30:52.648750 (sd-merge)[1284]: Merged extensions into '/usr'. Oct 27 08:30:52.653718 systemd[1]: Reload requested from client PID 1271 ('systemd-sysext') (unit systemd-sysext.service)... Oct 27 08:30:52.653730 systemd[1]: Reloading... Oct 27 08:30:52.900596 systemd[1]: Reloading finished in 246 ms. Oct 27 08:30:52.943177 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 27 08:30:52.954736 systemd[1]: Starting ensure-sysext.service... Oct 27 08:30:52.973162 systemd[1]: Reload requested from client PID 1356 ('systemctl') (unit ensure-sysext.service)... Oct 27 08:30:52.973172 systemd[1]: Reloading... Oct 27 08:30:53.314071 systemd[1]: Reloading finished in 340 ms. Oct 27 08:30:53.335840 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 27 08:30:53.338333 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 27 08:30:53.340763 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 27 08:30:53.366416 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 27 08:30:53.370942 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 27 08:30:53.374411 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
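The (sd-merge) entries above show systemd-sysext discovering the containerd-flatcar, docker-flatcar and kubernetes extension images and merging them into /usr. The sketch below simply lists *.raw images found in the common sysext search directories; that directory list reflects systemd-sysext defaults as I understand them and is not taken from this log.

    from pathlib import Path

    # Assumed default sysext image locations; adjust if your distribution differs.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def find_extension_images():
        # Return every *.raw image found in the search directories, sorted per directory.
        images = []
        for d in SEARCH_DIRS:
            p = Path(d)
            if p.is_dir():
                images.extend(sorted(str(f) for f in p.glob("*.raw")))
        return images

    if __name__ == "__main__":
        for image in find_extension_images():
            print(image)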
Oct 27 08:30:53.403875 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:30:53.404046 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 08:30:53.410441 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 08:30:53.413499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 08:30:53.418504 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 08:30:53.420758 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 08:30:53.420907 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 08:30:53.421023 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:30:53.424502 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 08:30:53.424752 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 08:30:53.427665 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 08:30:53.427924 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 08:30:53.430819 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 08:30:53.431045 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 08:30:53.438540 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:30:53.438737 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 08:30:53.440129 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 08:30:53.443463 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 08:30:53.446494 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 08:30:53.484731 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 08:30:53.484951 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 08:30:53.485131 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:30:53.490571 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 08:30:53.490899 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 08:30:53.534792 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 08:30:53.535018 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 08:30:53.538009 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 08:30:53.538254 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Oct 27 08:30:53.545249 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:30:53.545547 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 08:30:53.547108 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 27 08:30:53.549137 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 08:30:53.549191 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 08:30:53.549270 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 08:30:53.549370 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 08:30:53.549458 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:30:53.550032 systemd[1]: Finished ensure-sysext.service. Oct 27 08:30:53.560955 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 27 08:30:53.561177 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 27 08:30:53.948389 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 27 08:30:53.995308 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 27 08:30:53.999311 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 27 08:30:54.002464 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 27 08:30:54.005664 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 27 08:30:54.027754 systemd-tmpfiles[1447]: ACLs are not supported, ignoring. Oct 27 08:30:54.027777 systemd-tmpfiles[1447]: ACLs are not supported, ignoring. Oct 27 08:30:54.033168 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 08:30:54.109820 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 27 08:30:54.109859 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 27 08:30:54.110147 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 27 08:30:54.110396 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 27 08:30:54.111232 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 27 08:30:54.111508 systemd-tmpfiles[1448]: ACLs are not supported, ignoring. Oct 27 08:30:54.111581 systemd-tmpfiles[1448]: ACLs are not supported, ignoring. Oct 27 08:30:54.115257 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 27 08:30:54.116847 systemd-tmpfiles[1448]: Detected autofs mount point /boot during canonicalization of boot. 
Oct 27 08:30:54.116856 systemd-tmpfiles[1448]: Skipping /boot Oct 27 08:30:54.127057 systemd-tmpfiles[1448]: Detected autofs mount point /boot during canonicalization of boot. Oct 27 08:30:54.127070 systemd-tmpfiles[1448]: Skipping /boot Oct 27 08:30:54.164212 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 27 08:30:54.286808 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 27 08:30:54.289164 systemd[1]: Reached target time-set.target - System Time Set. Oct 27 08:30:54.291953 systemd-resolved[1445]: Positive Trust Anchors: Oct 27 08:30:54.291966 systemd-resolved[1445]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 27 08:30:54.291971 systemd-resolved[1445]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 27 08:30:54.292002 systemd-resolved[1445]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 27 08:30:54.296014 systemd-resolved[1445]: Defaulting to hostname 'linux'. Oct 27 08:30:54.297391 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 27 08:30:54.299623 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 27 08:30:54.315835 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 08:30:54.337076 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 27 08:30:54.341222 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 08:30:54.343775 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 27 08:30:54.349728 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 27 08:30:54.352587 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 27 08:30:54.355736 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 08:30:54.363458 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 27 08:30:54.385608 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 27 08:30:54.390851 systemd-udevd[1469]: Using default interface naming scheme 'v257'. Oct 27 08:30:54.438456 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 27 08:30:54.474614 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 08:30:54.496386 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 08:30:54.517637 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 27 08:30:54.589644 augenrules[1521]: No rules Oct 27 08:30:54.593391 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 08:30:54.593728 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Oct 27 08:30:54.605265 systemd-networkd[1504]: lo: Link UP Oct 27 08:30:54.605281 systemd-networkd[1504]: lo: Gained carrier Oct 27 08:30:54.608843 kernel: mousedev: PS/2 mouse device common for all mice Oct 27 08:30:54.608811 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 08:30:54.611824 systemd[1]: Reached target network.target - Network. Oct 27 08:30:54.614192 systemd-networkd[1504]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:30:54.614506 systemd-networkd[1504]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 27 08:30:54.615627 systemd-networkd[1504]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:30:54.615697 systemd-networkd[1504]: eth0: Link UP Oct 27 08:30:54.616288 systemd-networkd[1504]: eth0: Gained carrier Oct 27 08:30:54.616318 systemd-networkd[1504]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:30:54.617245 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 27 08:30:54.622846 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 27 08:30:54.625717 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 27 08:30:54.634571 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 27 08:30:54.637404 systemd-networkd[1504]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 27 08:30:54.639496 systemd-timesyncd[1446]: Network configuration changed, trying to establish connection. Oct 27 08:30:56.315850 systemd-resolved[1445]: Clock change detected. Flushing caches. Oct 27 08:30:56.315929 systemd-timesyncd[1446]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 27 08:30:56.316019 systemd-timesyncd[1446]: Initial clock synchronization to Mon 2025-10-27 08:30:56.315818 UTC. Oct 27 08:30:56.330325 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 27 08:30:56.339603 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 27 08:30:56.341672 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 27 08:30:56.352651 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 27 08:30:56.356445 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 27 08:30:56.364267 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 27 08:30:56.377102 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 27 08:30:56.394288 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 27 08:30:56.394532 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 27 08:30:56.418456 kernel: ACPI: button: Power Button [PWRF] Oct 27 08:30:56.475748 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:30:56.495814 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
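Around the timesyncd entries above, the journal timestamps jump from roughly 08:30:54.64 to 08:30:56.32 and systemd-resolved logs "Clock change detected. Flushing caches." A rough calculation of that apparent jump, using two adjacent timestamps copied from this log; the true offset applied by systemd-timesyncd is at most this value, since some wall time also elapsed between the two entries.

    from datetime import datetime

    # Two adjacent timestamps copied from the entries above (local journal time).
    before = datetime.fromisoformat("2025-10-27 08:30:54.639496")  # timesyncd: trying to establish connection
    after = datetime.fromisoformat("2025-10-27 08:30:56.315850")   # resolved: clock change detected

    # The apparent jump only bounds the applied offset from above.
    print(f"apparent jump: {(after - before).total_seconds():.3f} s")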
Oct 27 08:30:56.496067 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:30:56.506539 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:30:56.605323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:30:56.612286 kernel: kvm_amd: TSC scaling supported Oct 27 08:30:56.612427 kernel: kvm_amd: Nested Virtualization enabled Oct 27 08:30:56.612462 kernel: kvm_amd: Nested Paging enabled Oct 27 08:30:56.613033 kernel: kvm_amd: LBR virtualization supported Oct 27 08:30:56.613939 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 27 08:30:56.614912 kernel: kvm_amd: Virtual GIF supported Oct 27 08:30:56.642488 kernel: EDAC MC: Ver: 3.0.0 Oct 27 08:30:56.668074 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 27 08:30:56.734988 ldconfig[1467]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 27 08:30:56.827252 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 27 08:30:56.831078 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 27 08:30:56.858048 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 27 08:30:56.860563 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 08:30:56.862649 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 27 08:30:56.865071 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 27 08:30:56.867529 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 27 08:30:56.869934 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 27 08:30:56.872144 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 27 08:30:56.874578 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 27 08:30:56.876970 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 27 08:30:56.877014 systemd[1]: Reached target paths.target - Path Units. Oct 27 08:30:56.878741 systemd[1]: Reached target timers.target - Timer Units. Oct 27 08:30:56.881902 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 27 08:30:56.885456 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 27 08:30:56.889130 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 27 08:30:56.891369 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 27 08:30:56.893588 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 27 08:30:56.899745 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 27 08:30:56.901841 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 27 08:30:56.904317 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 27 08:30:56.906746 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 08:30:56.908300 systemd[1]: Reached target basic.target - Basic System. Oct 27 08:30:56.909845 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Oct 27 08:30:56.909871 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 27 08:30:56.910858 systemd[1]: Starting containerd.service - containerd container runtime... Oct 27 08:30:56.913617 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 27 08:30:56.916122 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 27 08:30:56.920025 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 27 08:30:56.923350 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 27 08:30:56.925062 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 27 08:30:56.926855 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 27 08:30:56.932876 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 27 08:30:56.938033 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 27 08:30:56.943434 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 27 08:30:56.946931 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 27 08:30:56.952393 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 27 08:30:56.955545 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 27 08:30:56.956083 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 27 08:30:56.957725 systemd[1]: Starting update-engine.service - Update Engine... Oct 27 08:30:56.962105 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 27 08:30:56.983900 jq[1579]: true Oct 27 08:30:56.984139 jq[1570]: false Oct 27 08:30:56.981647 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 27 08:30:56.985078 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 27 08:30:56.985341 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 27 08:30:56.990646 update_engine[1578]: I20251027 08:30:56.990569 1578 main.cc:92] Flatcar Update Engine starting Oct 27 08:30:56.991275 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Refreshing passwd entry cache Oct 27 08:30:56.994605 oslogin_cache_refresh[1572]: Refreshing passwd entry cache Oct 27 08:30:57.003189 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Failure getting users, quitting Oct 27 08:30:57.003180 oslogin_cache_refresh[1572]: Failure getting users, quitting Oct 27 08:30:57.003284 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 27 08:30:57.003284 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Refreshing group entry cache Oct 27 08:30:57.003202 oslogin_cache_refresh[1572]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 27 08:30:57.003272 oslogin_cache_refresh[1572]: Refreshing group entry cache Oct 27 08:30:57.006601 systemd[1]: motdgen.service: Deactivated successfully. Oct 27 08:30:57.006891 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Oct 27 08:30:57.010745 extend-filesystems[1571]: Found /dev/vda6 Oct 27 08:30:57.011406 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 27 08:30:57.012668 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 27 08:30:57.016434 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Failure getting groups, quitting Oct 27 08:30:57.016434 google_oslogin_nss_cache[1572]: oslogin_cache_refresh[1572]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 27 08:30:57.016505 jq[1587]: true Oct 27 08:30:57.015325 oslogin_cache_refresh[1572]: Failure getting groups, quitting Oct 27 08:30:57.015341 oslogin_cache_refresh[1572]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 27 08:30:57.019773 extend-filesystems[1571]: Found /dev/vda9 Oct 27 08:30:57.022840 (ntainerd)[1607]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 27 08:30:57.024329 extend-filesystems[1571]: Checking size of /dev/vda9 Oct 27 08:30:57.024064 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 27 08:30:57.025363 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 27 08:30:57.030452 tar[1583]: linux-amd64/LICENSE Oct 27 08:30:57.031236 tar[1583]: linux-amd64/helm Oct 27 08:30:57.085629 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 27 08:30:57.095513 extend-filesystems[1571]: Resized partition /dev/vda9 Oct 27 08:30:57.085169 dbus-daemon[1568]: [system] SELinux support is enabled Oct 27 08:30:57.090567 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 27 08:30:57.099036 update_engine[1578]: I20251027 08:30:57.096798 1578 update_check_scheduler.cc:74] Next update check in 6m8s Oct 27 08:30:57.090599 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 27 08:30:57.092877 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 27 08:30:57.092892 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 27 08:30:57.095811 systemd[1]: Started update-engine.service - Update Engine. Oct 27 08:30:57.102537 extend-filesystems[1634]: resize2fs 1.47.3 (8-Jul-2025) Oct 27 08:30:57.102808 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 27 08:30:57.114991 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 27 08:30:57.111566 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 27 08:30:57.115098 bash[1631]: Updated "/home/core/.ssh/authorized_keys" Oct 27 08:30:57.114849 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 27 08:30:57.156724 systemd-logind[1577]: Watching system buttons on /dev/input/event2 (Power Button) Oct 27 08:30:57.156754 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 27 08:30:57.158771 systemd-logind[1577]: New seat seat0. Oct 27 08:30:57.160322 systemd[1]: Started systemd-logind.service - User Login Management. 
Oct 27 08:30:57.313294 sshd_keygen[1605]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 27 08:30:57.338456 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 27 08:30:57.376437 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 27 08:30:57.385917 locksmithd[1635]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 27 08:30:57.402723 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 27 08:30:57.432751 systemd[1]: issuegen.service: Deactivated successfully. Oct 27 08:30:57.433020 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 27 08:30:57.441674 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 27 08:30:57.608278 extend-filesystems[1634]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 27 08:30:57.608278 extend-filesystems[1634]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 27 08:30:57.608278 extend-filesystems[1634]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 27 08:30:57.612371 extend-filesystems[1571]: Resized filesystem in /dev/vda9 Oct 27 08:30:57.609091 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 27 08:30:57.609882 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 27 08:30:57.614095 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 27 08:30:57.622462 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 27 08:30:57.628803 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 27 08:30:57.630008 systemd[1]: Reached target getty.target - Login Prompts. Oct 27 08:30:57.849587 containerd[1607]: time="2025-10-27T08:30:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 27 08:30:57.850668 containerd[1607]: time="2025-10-27T08:30:57.850636880Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 27 08:30:57.861817 containerd[1607]: time="2025-10-27T08:30:57.861751360Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="34.936µs" Oct 27 08:30:57.861817 containerd[1607]: time="2025-10-27T08:30:57.861800111Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 27 08:30:57.861817 containerd[1607]: time="2025-10-27T08:30:57.861823605Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 27 08:30:57.862100 containerd[1607]: time="2025-10-27T08:30:57.862074536Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 27 08:30:57.862100 containerd[1607]: time="2025-10-27T08:30:57.862096166Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 27 08:30:57.862152 containerd[1607]: time="2025-10-27T08:30:57.862130671Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 27 08:30:57.862358 containerd[1607]: time="2025-10-27T08:30:57.862205221Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 27 08:30:57.862358 containerd[1607]: time="2025-10-27T08:30:57.862215971Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 27 08:30:57.862541 containerd[1607]: time="2025-10-27T08:30:57.862515302Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 27 08:30:57.862541 containerd[1607]: time="2025-10-27T08:30:57.862532825Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 27 08:30:57.862594 containerd[1607]: time="2025-10-27T08:30:57.862542754Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 27 08:30:57.862594 containerd[1607]: time="2025-10-27T08:30:57.862559054Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 27 08:30:57.862688 containerd[1607]: time="2025-10-27T08:30:57.862665995Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 27 08:30:57.862965 containerd[1607]: time="2025-10-27T08:30:57.862931142Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 27 08:30:57.862989 containerd[1607]: time="2025-10-27T08:30:57.862968853Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 27 08:30:57.862989 containerd[1607]: time="2025-10-27T08:30:57.862979573Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 27 08:30:57.863026 containerd[1607]: time="2025-10-27T08:30:57.863013546Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 27 08:30:57.863262 containerd[1607]: time="2025-10-27T08:30:57.863232247Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 27 08:30:57.863326 containerd[1607]: time="2025-10-27T08:30:57.863307818Z" level=info msg="metadata content store policy set" policy=shared Oct 27 08:30:57.870725 containerd[1607]: time="2025-10-27T08:30:57.870649424Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 27 08:30:57.870843 containerd[1607]: time="2025-10-27T08:30:57.870741767Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 27 08:30:57.870843 containerd[1607]: time="2025-10-27T08:30:57.870771192Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 27 08:30:57.870843 containerd[1607]: time="2025-10-27T08:30:57.870784798Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 27 08:30:57.870843 containerd[1607]: time="2025-10-27T08:30:57.870797351Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 27 08:30:57.870843 containerd[1607]: time="2025-10-27T08:30:57.870809314Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 27 08:30:57.870843 containerd[1607]: time="2025-10-27T08:30:57.870826636Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 27 08:30:57.870843 containerd[1607]: time="2025-10-27T08:30:57.870846573Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 27 08:30:57.870992 containerd[1607]: time="2025-10-27T08:30:57.870859057Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 27 08:30:57.870992 containerd[1607]: time="2025-10-27T08:30:57.870874005Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 27 08:30:57.870992 containerd[1607]: time="2025-10-27T08:30:57.870887781Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 27 08:30:57.870992 containerd[1607]: time="2025-10-27T08:30:57.870909501Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 27 08:30:57.871193 containerd[1607]: time="2025-10-27T08:30:57.871159540Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 27 08:30:57.871218 containerd[1607]: time="2025-10-27T08:30:57.871197902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 27 08:30:57.871238 containerd[1607]: time="2025-10-27T08:30:57.871225374Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 27 08:30:57.871259 containerd[1607]: time="2025-10-27T08:30:57.871238328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 27 08:30:57.871259 containerd[1607]: time="2025-10-27T08:30:57.871251132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 27 08:30:57.871295 containerd[1607]: time="2025-10-27T08:30:57.871261622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 27 08:30:57.871295 containerd[1607]: time="2025-10-27T08:30:57.871273975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 27 08:30:57.871295 containerd[1607]: time="2025-10-27T08:30:57.871286087Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 27 08:30:57.871362 containerd[1607]: time="2025-10-27T08:30:57.871296327Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 27 08:30:57.871362 containerd[1607]: time="2025-10-27T08:30:57.871307007Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 27 08:30:57.871362 containerd[1607]: time="2025-10-27T08:30:57.871320752Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 27 08:30:57.871491 containerd[1607]: time="2025-10-27T08:30:57.871457389Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 27 08:30:57.871517 containerd[1607]: time="2025-10-27T08:30:57.871489188Z" level=info msg="Start snapshots syncer" Oct 27 08:30:57.871583 containerd[1607]: time="2025-10-27T08:30:57.871545374Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 27 08:30:57.871948 containerd[1607]: time="2025-10-27T08:30:57.871866786Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 27 08:30:57.872320 containerd[1607]: time="2025-10-27T08:30:57.871953278Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 27 08:30:57.872320 containerd[1607]: time="2025-10-27T08:30:57.872165577Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 27 08:30:57.872320 containerd[1607]: time="2025-10-27T08:30:57.872280833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 27 08:30:57.872320 containerd[1607]: time="2025-10-27T08:30:57.872310328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 27 08:30:57.872320 containerd[1607]: time="2025-10-27T08:30:57.872322000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 27 08:30:57.872439 containerd[1607]: time="2025-10-27T08:30:57.872336978Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 27 08:30:57.872439 containerd[1607]: time="2025-10-27T08:30:57.872351185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 27 08:30:57.872439 containerd[1607]: time="2025-10-27T08:30:57.872361444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 27 08:30:57.872439 containerd[1607]: time="2025-10-27T08:30:57.872371443Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 27 08:30:57.872439 containerd[1607]: time="2025-10-27T08:30:57.872392492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 27 08:30:57.872439 containerd[1607]: 
time="2025-10-27T08:30:57.872423200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 27 08:30:57.872439 containerd[1607]: time="2025-10-27T08:30:57.872434090Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 27 08:30:57.872605 containerd[1607]: time="2025-10-27T08:30:57.872479165Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 08:30:57.872605 containerd[1607]: time="2025-10-27T08:30:57.872494824Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 08:30:57.872605 containerd[1607]: time="2025-10-27T08:30:57.872503901Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 08:30:57.872605 containerd[1607]: time="2025-10-27T08:30:57.872513138Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 08:30:57.872605 containerd[1607]: time="2025-10-27T08:30:57.872520452Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 27 08:30:57.872605 containerd[1607]: time="2025-10-27T08:30:57.872529399Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 27 08:30:57.872605 containerd[1607]: time="2025-10-27T08:30:57.872542634Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 27 08:30:57.872605 containerd[1607]: time="2025-10-27T08:30:57.872577138Z" level=info msg="runtime interface created" Oct 27 08:30:57.872605 containerd[1607]: time="2025-10-27T08:30:57.872584652Z" level=info msg="created NRI interface" Oct 27 08:30:57.872605 containerd[1607]: time="2025-10-27T08:30:57.872593188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 27 08:30:57.872605 containerd[1607]: time="2025-10-27T08:30:57.872605341Z" level=info msg="Connect containerd service" Oct 27 08:30:57.872817 containerd[1607]: time="2025-10-27T08:30:57.872630999Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 27 08:30:57.874727 containerd[1607]: time="2025-10-27T08:30:57.874655114Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 27 08:30:57.901823 tar[1583]: linux-amd64/README.md Oct 27 08:30:57.930617 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Oct 27 08:30:58.132387 containerd[1607]: time="2025-10-27T08:30:58.132247129Z" level=info msg="Start subscribing containerd event" Oct 27 08:30:58.132387 containerd[1607]: time="2025-10-27T08:30:58.132324444Z" level=info msg="Start recovering state" Oct 27 08:30:58.132536 containerd[1607]: time="2025-10-27T08:30:58.132503039Z" level=info msg="Start event monitor" Oct 27 08:30:58.132536 containerd[1607]: time="2025-10-27T08:30:58.132531032Z" level=info msg="Start cni network conf syncer for default" Oct 27 08:30:58.132608 containerd[1607]: time="2025-10-27T08:30:58.132552412Z" level=info msg="Start streaming server" Oct 27 08:30:58.132608 containerd[1607]: time="2025-10-27T08:30:58.132587988Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 27 08:30:58.132608 containerd[1607]: time="2025-10-27T08:30:58.132596004Z" level=info msg="runtime interface starting up..." Oct 27 08:30:58.132608 containerd[1607]: time="2025-10-27T08:30:58.132602335Z" level=info msg="starting plugins..." Oct 27 08:30:58.132680 containerd[1607]: time="2025-10-27T08:30:58.132621551Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 27 08:30:58.134333 containerd[1607]: time="2025-10-27T08:30:58.133396324Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 27 08:30:58.134333 containerd[1607]: time="2025-10-27T08:30:58.133488948Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 27 08:30:58.134333 containerd[1607]: time="2025-10-27T08:30:58.133600577Z" level=info msg="containerd successfully booted in 0.284905s" Oct 27 08:30:58.133965 systemd[1]: Started containerd.service - containerd container runtime. Oct 27 08:30:58.242683 systemd-networkd[1504]: eth0: Gained IPv6LL Oct 27 08:30:58.246166 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 27 08:30:58.248713 systemd[1]: Reached target network-online.target - Network is Online. Oct 27 08:30:58.251910 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 27 08:30:58.255149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:30:58.267784 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 27 08:30:58.286070 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 27 08:30:58.286510 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 27 08:30:58.290265 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 27 08:30:58.293447 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 27 08:30:59.219069 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 27 08:30:59.222494 systemd[1]: Started sshd@0-10.0.0.134:22-10.0.0.1:55762.service - OpenSSH per-connection server daemon (10.0.0.1:55762). Oct 27 08:30:59.383714 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 55762 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:30:59.385451 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:30:59.392037 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 27 08:30:59.394980 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 27 08:30:59.404085 systemd-logind[1577]: New session 1 of user core. Oct 27 08:30:59.426804 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Oct 27 08:30:59.432496 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 27 08:30:59.459121 (systemd)[1710]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 27 08:30:59.461774 systemd-logind[1577]: New session c1 of user core. Oct 27 08:30:59.672831 systemd[1710]: Queued start job for default target default.target. Oct 27 08:30:59.765101 systemd[1710]: Created slice app.slice - User Application Slice. Oct 27 08:30:59.765134 systemd[1710]: Reached target paths.target - Paths. Oct 27 08:30:59.765200 systemd[1710]: Reached target timers.target - Timers. Oct 27 08:30:59.766853 systemd[1710]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 27 08:30:59.779448 systemd[1710]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 27 08:30:59.779581 systemd[1710]: Reached target sockets.target - Sockets. Oct 27 08:30:59.779622 systemd[1710]: Reached target basic.target - Basic System. Oct 27 08:30:59.779675 systemd[1710]: Reached target default.target - Main User Target. Oct 27 08:30:59.779707 systemd[1710]: Startup finished in 308ms. Oct 27 08:30:59.780263 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 27 08:30:59.784371 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 27 08:30:59.856519 systemd[1]: Started sshd@1-10.0.0.134:22-10.0.0.1:55770.service - OpenSSH per-connection server daemon (10.0.0.1:55770). Oct 27 08:30:59.889657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:30:59.893401 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 27 08:30:59.895798 systemd[1]: Startup finished in 3.216s (kernel) + 6.183s (initrd) + 6.992s (userspace) = 16.392s. Oct 27 08:30:59.901825 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 08:30:59.981551 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 55770 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:30:59.983168 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:30:59.988602 systemd-logind[1577]: New session 2 of user core. Oct 27 08:30:59.998554 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 27 08:31:00.082524 sshd[1730]: Connection closed by 10.0.0.1 port 55770 Oct 27 08:31:00.083159 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Oct 27 08:31:00.093996 systemd[1]: sshd@1-10.0.0.134:22-10.0.0.1:55770.service: Deactivated successfully. Oct 27 08:31:00.096001 systemd[1]: session-2.scope: Deactivated successfully. Oct 27 08:31:00.096774 systemd-logind[1577]: Session 2 logged out. Waiting for processes to exit. Oct 27 08:31:00.100067 systemd[1]: Started sshd@2-10.0.0.134:22-10.0.0.1:55774.service - OpenSSH per-connection server daemon (10.0.0.1:55774). Oct 27 08:31:00.100819 systemd-logind[1577]: Removed session 2. Oct 27 08:31:00.199611 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 55774 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:31:00.201352 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:31:00.206020 systemd-logind[1577]: New session 3 of user core. Oct 27 08:31:00.213548 systemd[1]: Started session-3.scope - Session 3 of User core. 
Oct 27 08:31:00.271362 sshd[1744]: Connection closed by 10.0.0.1 port 55774 Oct 27 08:31:00.272164 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Oct 27 08:31:00.285093 systemd[1]: sshd@2-10.0.0.134:22-10.0.0.1:55774.service: Deactivated successfully. Oct 27 08:31:00.287964 systemd[1]: session-3.scope: Deactivated successfully. Oct 27 08:31:00.289046 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit. Oct 27 08:31:00.291630 systemd-logind[1577]: Removed session 3. Oct 27 08:31:00.293505 systemd[1]: Started sshd@3-10.0.0.134:22-10.0.0.1:55784.service - OpenSSH per-connection server daemon (10.0.0.1:55784). Oct 27 08:31:00.361211 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 55784 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:31:00.363947 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:31:00.371183 systemd-logind[1577]: New session 4 of user core. Oct 27 08:31:00.378597 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 27 08:31:00.435751 sshd[1758]: Connection closed by 10.0.0.1 port 55784 Oct 27 08:31:00.436237 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Oct 27 08:31:00.443790 systemd[1]: sshd@3-10.0.0.134:22-10.0.0.1:55784.service: Deactivated successfully. Oct 27 08:31:00.445330 systemd[1]: session-4.scope: Deactivated successfully. Oct 27 08:31:00.446231 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit. Oct 27 08:31:00.449432 systemd[1]: Started sshd@4-10.0.0.134:22-10.0.0.1:55798.service - OpenSSH per-connection server daemon (10.0.0.1:55798). Oct 27 08:31:00.450077 systemd-logind[1577]: Removed session 4. Oct 27 08:31:00.557516 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 55798 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:31:00.559349 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:31:00.564011 systemd-logind[1577]: New session 5 of user core. Oct 27 08:31:00.572610 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 27 08:31:00.640760 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 27 08:31:00.641091 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:31:00.650228 kubelet[1728]: E1027 08:31:00.650041 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 08:31:00.654601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 08:31:00.654870 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 08:31:00.655446 systemd[1]: kubelet.service: Consumed 2.187s CPU time, 266.1M memory peak. Oct 27 08:31:00.659048 sudo[1768]: pam_unix(sudo:session): session closed for user root Oct 27 08:31:00.661338 sshd[1767]: Connection closed by 10.0.0.1 port 55798 Oct 27 08:31:00.661756 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Oct 27 08:31:00.673326 systemd[1]: sshd@4-10.0.0.134:22-10.0.0.1:55798.service: Deactivated successfully. Oct 27 08:31:00.675082 systemd[1]: session-5.scope: Deactivated successfully. 
Oct 27 08:31:00.675802 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit. Oct 27 08:31:00.678453 systemd[1]: Started sshd@5-10.0.0.134:22-10.0.0.1:55812.service - OpenSSH per-connection server daemon (10.0.0.1:55812). Oct 27 08:31:00.679117 systemd-logind[1577]: Removed session 5. Oct 27 08:31:00.736294 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 55812 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:31:00.737560 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:31:00.743070 systemd-logind[1577]: New session 6 of user core. Oct 27 08:31:00.752705 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 27 08:31:00.810805 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 27 08:31:00.811144 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:31:00.819045 sudo[1780]: pam_unix(sudo:session): session closed for user root Oct 27 08:31:00.828168 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 27 08:31:00.828542 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:31:00.839795 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 08:31:00.901487 augenrules[1802]: No rules Oct 27 08:31:00.903301 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 08:31:00.903593 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 08:31:00.904890 sudo[1779]: pam_unix(sudo:session): session closed for user root Oct 27 08:31:00.907096 sshd[1778]: Connection closed by 10.0.0.1 port 55812 Oct 27 08:31:00.907475 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Oct 27 08:31:00.917154 systemd[1]: sshd@5-10.0.0.134:22-10.0.0.1:55812.service: Deactivated successfully. Oct 27 08:31:00.919100 systemd[1]: session-6.scope: Deactivated successfully. Oct 27 08:31:00.920170 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit. Oct 27 08:31:00.922699 systemd[1]: Started sshd@6-10.0.0.134:22-10.0.0.1:55816.service - OpenSSH per-connection server daemon (10.0.0.1:55816). Oct 27 08:31:00.923491 systemd-logind[1577]: Removed session 6. Oct 27 08:31:00.983314 sshd[1811]: Accepted publickey for core from 10.0.0.1 port 55816 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:31:00.984522 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:31:00.989122 systemd-logind[1577]: New session 7 of user core. Oct 27 08:31:01.001574 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 27 08:31:01.056138 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 27 08:31:01.056465 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:31:01.936694 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Oct 27 08:31:01.961845 (dockerd)[1835]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 27 08:31:02.637892 dockerd[1835]: time="2025-10-27T08:31:02.637784260Z" level=info msg="Starting up" Oct 27 08:31:02.638727 dockerd[1835]: time="2025-10-27T08:31:02.638687374Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 27 08:31:02.657882 dockerd[1835]: time="2025-10-27T08:31:02.657829746Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 27 08:31:03.648097 dockerd[1835]: time="2025-10-27T08:31:03.648010254Z" level=info msg="Loading containers: start." Oct 27 08:31:03.659465 kernel: Initializing XFRM netlink socket Oct 27 08:31:03.946943 systemd-networkd[1504]: docker0: Link UP Oct 27 08:31:03.956115 dockerd[1835]: time="2025-10-27T08:31:03.956030762Z" level=info msg="Loading containers: done." Oct 27 08:31:03.972203 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1581550333-merged.mount: Deactivated successfully. Oct 27 08:31:03.974332 dockerd[1835]: time="2025-10-27T08:31:03.974272145Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 27 08:31:03.974449 dockerd[1835]: time="2025-10-27T08:31:03.974424741Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 27 08:31:03.974606 dockerd[1835]: time="2025-10-27T08:31:03.974577427Z" level=info msg="Initializing buildkit" Oct 27 08:31:04.010339 dockerd[1835]: time="2025-10-27T08:31:04.010265923Z" level=info msg="Completed buildkit initialization" Oct 27 08:31:04.016073 dockerd[1835]: time="2025-10-27T08:31:04.016026925Z" level=info msg="Daemon has completed initialization" Oct 27 08:31:04.016258 dockerd[1835]: time="2025-10-27T08:31:04.016164282Z" level=info msg="API listen on /run/docker.sock" Oct 27 08:31:04.016335 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 27 08:31:05.113461 containerd[1607]: time="2025-10-27T08:31:05.113367538Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 27 08:31:05.794589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1002592788.mount: Deactivated successfully. 
Oct 27 08:31:07.565467 containerd[1607]: time="2025-10-27T08:31:07.565371806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:07.566292 containerd[1607]: time="2025-10-27T08:31:07.566211119Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Oct 27 08:31:07.567561 containerd[1607]: time="2025-10-27T08:31:07.567509925Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:07.570697 containerd[1607]: time="2025-10-27T08:31:07.570636337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:07.572023 containerd[1607]: time="2025-10-27T08:31:07.571942636Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.458499737s" Oct 27 08:31:07.572023 containerd[1607]: time="2025-10-27T08:31:07.572008199Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Oct 27 08:31:07.572765 containerd[1607]: time="2025-10-27T08:31:07.572738889Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 27 08:31:10.267451 containerd[1607]: time="2025-10-27T08:31:10.267372256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:10.268287 containerd[1607]: time="2025-10-27T08:31:10.268233961Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Oct 27 08:31:10.269508 containerd[1607]: time="2025-10-27T08:31:10.269480960Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:10.272162 containerd[1607]: time="2025-10-27T08:31:10.272137370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:10.273012 containerd[1607]: time="2025-10-27T08:31:10.272983817Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 2.700216395s" Oct 27 08:31:10.273012 containerd[1607]: time="2025-10-27T08:31:10.273013223Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Oct 27 08:31:10.273607 
containerd[1607]: time="2025-10-27T08:31:10.273572161Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 27 08:31:10.905382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 27 08:31:10.907618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:31:11.194174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:31:11.333847 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 08:31:11.553155 kubelet[2124]: E1027 08:31:11.552986 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 08:31:11.559570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 08:31:11.559772 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 08:31:11.560176 systemd[1]: kubelet.service: Consumed 430ms CPU time, 110.8M memory peak. Oct 27 08:31:15.903857 containerd[1607]: time="2025-10-27T08:31:15.903772665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:15.904796 containerd[1607]: time="2025-10-27T08:31:15.904749156Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Oct 27 08:31:15.906867 containerd[1607]: time="2025-10-27T08:31:15.906840157Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:15.910545 containerd[1607]: time="2025-10-27T08:31:15.910468329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:15.911558 containerd[1607]: time="2025-10-27T08:31:15.911513740Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 5.637913156s" Oct 27 08:31:15.911558 containerd[1607]: time="2025-10-27T08:31:15.911549737Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Oct 27 08:31:15.912522 containerd[1607]: time="2025-10-27T08:31:15.912477356Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 27 08:31:16.957884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1039509849.mount: Deactivated successfully. 
Oct 27 08:31:17.234460 containerd[1607]: time="2025-10-27T08:31:17.234296888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:17.235195 containerd[1607]: time="2025-10-27T08:31:17.235156890Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Oct 27 08:31:17.236423 containerd[1607]: time="2025-10-27T08:31:17.236363823Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:17.238247 containerd[1607]: time="2025-10-27T08:31:17.238211578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:17.238745 containerd[1607]: time="2025-10-27T08:31:17.238720021Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.326200105s" Oct 27 08:31:17.238779 containerd[1607]: time="2025-10-27T08:31:17.238746120Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Oct 27 08:31:17.239158 containerd[1607]: time="2025-10-27T08:31:17.239139688Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 27 08:31:17.830237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2772380557.mount: Deactivated successfully. 
Oct 27 08:31:18.859733 containerd[1607]: time="2025-10-27T08:31:18.859659532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:18.860364 containerd[1607]: time="2025-10-27T08:31:18.860326533Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Oct 27 08:31:18.861602 containerd[1607]: time="2025-10-27T08:31:18.861567079Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:18.864008 containerd[1607]: time="2025-10-27T08:31:18.863978791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:18.865015 containerd[1607]: time="2025-10-27T08:31:18.864984607Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.625819181s" Oct 27 08:31:18.865054 containerd[1607]: time="2025-10-27T08:31:18.865014763Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Oct 27 08:31:18.865597 containerd[1607]: time="2025-10-27T08:31:18.865567780Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 27 08:31:19.428862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1386553653.mount: Deactivated successfully. 
Oct 27 08:31:19.434428 containerd[1607]: time="2025-10-27T08:31:19.434382147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 08:31:19.435107 containerd[1607]: time="2025-10-27T08:31:19.435059507Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 27 08:31:19.436233 containerd[1607]: time="2025-10-27T08:31:19.436205916Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 08:31:19.438103 containerd[1607]: time="2025-10-27T08:31:19.438066385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 08:31:19.438567 containerd[1607]: time="2025-10-27T08:31:19.438536987Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 572.934462ms" Oct 27 08:31:19.438601 containerd[1607]: time="2025-10-27T08:31:19.438566002Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 27 08:31:19.439184 containerd[1607]: time="2025-10-27T08:31:19.438992081Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 27 08:31:19.982558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1176685101.mount: Deactivated successfully. Oct 27 08:31:21.810249 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 27 08:31:21.811849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:31:22.025986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:31:22.030454 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 08:31:22.120273 kubelet[2264]: E1027 08:31:22.120128 2264 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 08:31:22.123960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 08:31:22.124162 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 08:31:22.124585 systemd[1]: kubelet.service: Consumed 261ms CPU time, 111.1M memory peak. 
Oct 27 08:31:22.147996 containerd[1607]: time="2025-10-27T08:31:22.147952561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:22.148750 containerd[1607]: time="2025-10-27T08:31:22.148722574Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Oct 27 08:31:22.150118 containerd[1607]: time="2025-10-27T08:31:22.150069450Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:22.152749 containerd[1607]: time="2025-10-27T08:31:22.152718647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:22.153686 containerd[1607]: time="2025-10-27T08:31:22.153647138Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.714626013s" Oct 27 08:31:22.153686 containerd[1607]: time="2025-10-27T08:31:22.153677816Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 27 08:31:24.536716 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:31:24.536918 systemd[1]: kubelet.service: Consumed 261ms CPU time, 111.1M memory peak. Oct 27 08:31:24.539238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:31:24.564873 systemd[1]: Reload requested from client PID 2300 ('systemctl') (unit session-7.scope)... Oct 27 08:31:24.564891 systemd[1]: Reloading... Oct 27 08:31:24.663083 zram_generator::config[2343]: No configuration found. Oct 27 08:31:24.983503 systemd[1]: Reloading finished in 418 ms. Oct 27 08:31:25.053081 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 27 08:31:25.053175 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 27 08:31:25.053471 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:31:25.053523 systemd[1]: kubelet.service: Consumed 170ms CPU time, 98.2M memory peak. Oct 27 08:31:25.055234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:31:25.230626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:31:25.234804 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 08:31:25.271674 kubelet[2391]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 08:31:25.271674 kubelet[2391]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 27 08:31:25.271674 kubelet[2391]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 08:31:25.271987 kubelet[2391]: I1027 08:31:25.271730 2391 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 08:31:25.664262 kubelet[2391]: I1027 08:31:25.664213 2391 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 27 08:31:25.664262 kubelet[2391]: I1027 08:31:25.664245 2391 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 08:31:25.664748 kubelet[2391]: I1027 08:31:25.664714 2391 server.go:954] "Client rotation is on, will bootstrap in background" Oct 27 08:31:25.684860 kubelet[2391]: E1027 08:31:25.684805 2391 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Oct 27 08:31:25.685735 kubelet[2391]: I1027 08:31:25.685671 2391 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 08:31:25.691763 kubelet[2391]: I1027 08:31:25.691733 2391 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 27 08:31:25.696900 kubelet[2391]: I1027 08:31:25.696864 2391 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 27 08:31:25.698177 kubelet[2391]: I1027 08:31:25.698117 2391 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 27 08:31:25.698357 kubelet[2391]: I1027 08:31:25.698162 2391 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 27 08:31:25.698494 kubelet[2391]: I1027 08:31:25.698366 
2391 topology_manager.go:138] "Creating topology manager with none policy" Oct 27 08:31:25.698494 kubelet[2391]: I1027 08:31:25.698376 2391 container_manager_linux.go:304] "Creating device plugin manager" Oct 27 08:31:25.698603 kubelet[2391]: I1027 08:31:25.698571 2391 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:31:25.701494 kubelet[2391]: I1027 08:31:25.701459 2391 kubelet.go:446] "Attempting to sync node with API server" Oct 27 08:31:25.701537 kubelet[2391]: I1027 08:31:25.701497 2391 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 27 08:31:25.701571 kubelet[2391]: I1027 08:31:25.701541 2391 kubelet.go:352] "Adding apiserver pod source" Oct 27 08:31:25.701571 kubelet[2391]: I1027 08:31:25.701564 2391 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 27 08:31:25.706362 kubelet[2391]: W1027 08:31:25.705439 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 27 08:31:25.706362 kubelet[2391]: E1027 08:31:25.705519 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Oct 27 08:31:25.706362 kubelet[2391]: I1027 08:31:25.705607 2391 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 27 08:31:25.706362 kubelet[2391]: W1027 08:31:25.705896 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 27 08:31:25.706362 kubelet[2391]: E1027 08:31:25.705971 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Oct 27 08:31:25.706362 kubelet[2391]: I1027 08:31:25.706189 2391 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 27 08:31:25.706995 kubelet[2391]: W1027 08:31:25.706964 2391 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
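
The reflector errors above (listing Services and Nodes against https://10.0.0.134:6443, connection refused) are expected at this point: this kubelet is the component that will start the kube-apiserver static pod, so nothing is listening on 6443 yet and the informers simply keep retrying. A minimal sketch of that probe-until-up pattern; the endpoint is taken from the log, while the attempt count and timings are illustrative only:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const endpoint = "10.0.0.134:6443" // API server address seen in the log
        for attempt := 1; attempt <= 5; attempt++ {
            conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
            if err != nil {
                fmt.Printf("attempt %d: %v\n", attempt, err) // e.g. "connect: connection refused"
                time.Sleep(time.Second)
                continue
            }
            conn.Close()
            fmt.Println("API server reachable")
            return
        }
        fmt.Println("giving up for now; a real client would keep backing off")
    }
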
Oct 27 08:31:25.710063 kubelet[2391]: I1027 08:31:25.710033 2391 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 27 08:31:25.710123 kubelet[2391]: I1027 08:31:25.710090 2391 server.go:1287] "Started kubelet" Oct 27 08:31:25.710246 kubelet[2391]: I1027 08:31:25.710211 2391 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 27 08:31:25.711358 kubelet[2391]: I1027 08:31:25.711320 2391 server.go:479] "Adding debug handlers to kubelet server" Oct 27 08:31:25.711531 kubelet[2391]: I1027 08:31:25.711352 2391 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 27 08:31:25.711785 kubelet[2391]: I1027 08:31:25.711756 2391 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 27 08:31:25.713861 kubelet[2391]: I1027 08:31:25.713829 2391 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 27 08:31:25.715280 kubelet[2391]: I1027 08:31:25.714345 2391 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 27 08:31:25.716801 kubelet[2391]: E1027 08:31:25.716764 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:31:25.716844 kubelet[2391]: I1027 08:31:25.716814 2391 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 27 08:31:25.717027 kubelet[2391]: I1027 08:31:25.716998 2391 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 27 08:31:25.717073 kubelet[2391]: I1027 08:31:25.717062 2391 reconciler.go:26] "Reconciler: start to sync state" Oct 27 08:31:25.717578 kubelet[2391]: W1027 08:31:25.717502 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 27 08:31:25.717645 kubelet[2391]: E1027 08:31:25.717575 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Oct 27 08:31:25.717645 kubelet[2391]: E1027 08:31:25.717499 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="200ms" Oct 27 08:31:25.718620 kubelet[2391]: E1027 08:31:25.718457 2391 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 27 08:31:25.718661 kubelet[2391]: I1027 08:31:25.718623 2391 factory.go:221] Registration of the containerd container factory successfully Oct 27 08:31:25.718661 kubelet[2391]: I1027 08:31:25.718636 2391 factory.go:221] Registration of the systemd container factory successfully Oct 27 08:31:25.718718 kubelet[2391]: I1027 08:31:25.718695 2391 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 27 08:31:25.718993 kubelet[2391]: E1027 08:31:25.717284 2391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18724bed0a75034e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-27 08:31:25.710058318 +0000 UTC m=+0.471515522,LastTimestamp:2025-10-27 08:31:25.710058318 +0000 UTC m=+0.471515522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 27 08:31:25.732447 kubelet[2391]: I1027 08:31:25.732391 2391 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 08:31:25.732550 kubelet[2391]: I1027 08:31:25.732488 2391 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 08:31:25.732550 kubelet[2391]: I1027 08:31:25.732513 2391 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:31:25.733207 kubelet[2391]: I1027 08:31:25.733162 2391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 27 08:31:25.734476 kubelet[2391]: I1027 08:31:25.734397 2391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 27 08:31:25.734476 kubelet[2391]: I1027 08:31:25.734472 2391 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 27 08:31:25.734586 kubelet[2391]: I1027 08:31:25.734511 2391 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
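
Among the cadvisor factory registrations above, the cri-o one fails only because /var/run/crio/crio.sock does not exist on this containerd-based node. An illustrative check (the containerd socket path below is the conventional default, not read from this log):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        for _, sock := range []string{
            "/run/containerd/containerd.sock", // assumed default; present on a containerd node
            "/var/run/crio/crio.sock",         // absent here, hence "no such file or directory"
        } {
            if _, err := os.Stat(sock); err != nil {
                fmt.Printf("%s: %v\n", sock, err)
            } else {
                fmt.Printf("%s: present\n", sock)
            }
        }
    }
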
Oct 27 08:31:25.734586 kubelet[2391]: I1027 08:31:25.734525 2391 kubelet.go:2382] "Starting kubelet main sync loop" Oct 27 08:31:25.734586 kubelet[2391]: E1027 08:31:25.734579 2391 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 08:31:25.738222 kubelet[2391]: W1027 08:31:25.738188 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 27 08:31:25.738557 kubelet[2391]: E1027 08:31:25.738224 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Oct 27 08:31:25.817542 kubelet[2391]: E1027 08:31:25.817476 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:31:25.835712 kubelet[2391]: E1027 08:31:25.835683 2391 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 27 08:31:25.918074 kubelet[2391]: E1027 08:31:25.917961 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:31:25.918311 kubelet[2391]: E1027 08:31:25.918289 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="400ms" Oct 27 08:31:25.987837 kubelet[2391]: E1027 08:31:25.987721 2391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18724bed0a75034e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-27 08:31:25.710058318 +0000 UTC m=+0.471515522,LastTimestamp:2025-10-27 08:31:25.710058318 +0000 UTC m=+0.471515522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 27 08:31:26.018980 kubelet[2391]: E1027 08:31:26.018935 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:31:26.036149 kubelet[2391]: E1027 08:31:26.036105 2391 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 27 08:31:26.119731 kubelet[2391]: E1027 08:31:26.119658 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:31:26.167344 kubelet[2391]: I1027 08:31:26.167232 2391 policy_none.go:49] "None policy: Start" Oct 27 08:31:26.167344 kubelet[2391]: I1027 08:31:26.167281 2391 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 27 08:31:26.167344 
kubelet[2391]: I1027 08:31:26.167312 2391 state_mem.go:35] "Initializing new in-memory state store" Oct 27 08:31:26.176337 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 27 08:31:26.196087 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 27 08:31:26.200274 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 27 08:31:26.211807 kubelet[2391]: I1027 08:31:26.211724 2391 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 27 08:31:26.212062 kubelet[2391]: I1027 08:31:26.212021 2391 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 08:31:26.212062 kubelet[2391]: I1027 08:31:26.212039 2391 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 08:31:26.213233 kubelet[2391]: I1027 08:31:26.213194 2391 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 08:31:26.213877 kubelet[2391]: E1027 08:31:26.213767 2391 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 27 08:31:26.213877 kubelet[2391]: E1027 08:31:26.213858 2391 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 27 08:31:26.313757 kubelet[2391]: I1027 08:31:26.313687 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 08:31:26.314297 kubelet[2391]: E1027 08:31:26.314136 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Oct 27 08:31:26.319897 kubelet[2391]: E1027 08:31:26.319849 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="800ms" Oct 27 08:31:26.447634 systemd[1]: Created slice kubepods-burstable-podef7bfe7e69accbc40ff8aed706444418.slice - libcontainer container kubepods-burstable-podef7bfe7e69accbc40ff8aed706444418.slice. Oct 27 08:31:26.459530 kubelet[2391]: E1027 08:31:26.459494 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:31:26.462487 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Oct 27 08:31:26.474629 kubelet[2391]: E1027 08:31:26.474595 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:31:26.477443 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. 
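
The "Failed to ensure lease exists, will retry" messages back off by doubling: interval=200ms, then 400ms above, then 800ms here. A toy sketch of that schedule; the starting value comes from the log, while any cap or jitter in the real controller is not modelled:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond // first retry interval reported by the lease controller
        for attempt := 1; attempt <= 4; attempt++ {
            fmt.Printf("attempt %d: retry in %v\n", attempt, interval)
            interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s
        }
    }
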
Oct 27 08:31:26.479369 kubelet[2391]: E1027 08:31:26.479339 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:31:26.515839 kubelet[2391]: I1027 08:31:26.515789 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 08:31:26.516210 kubelet[2391]: E1027 08:31:26.516184 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Oct 27 08:31:26.522588 kubelet[2391]: I1027 08:31:26.522544 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:31:26.522588 kubelet[2391]: I1027 08:31:26.522580 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:31:26.522588 kubelet[2391]: I1027 08:31:26.522617 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:31:26.522863 kubelet[2391]: I1027 08:31:26.522647 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 27 08:31:26.522863 kubelet[2391]: I1027 08:31:26.522670 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef7bfe7e69accbc40ff8aed706444418-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ef7bfe7e69accbc40ff8aed706444418\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:31:26.522863 kubelet[2391]: I1027 08:31:26.522691 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:31:26.522863 kubelet[2391]: I1027 08:31:26.522711 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:31:26.522863 kubelet[2391]: I1027 08:31:26.522730 2391 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef7bfe7e69accbc40ff8aed706444418-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ef7bfe7e69accbc40ff8aed706444418\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:31:26.523021 kubelet[2391]: I1027 08:31:26.522748 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef7bfe7e69accbc40ff8aed706444418-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ef7bfe7e69accbc40ff8aed706444418\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:31:26.541185 kubelet[2391]: W1027 08:31:26.541105 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 27 08:31:26.541185 kubelet[2391]: E1027 08:31:26.541185 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Oct 27 08:31:26.640791 kubelet[2391]: W1027 08:31:26.640699 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 27 08:31:26.640791 kubelet[2391]: E1027 08:31:26.640783 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Oct 27 08:31:26.751101 kubelet[2391]: W1027 08:31:26.750899 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 27 08:31:26.751101 kubelet[2391]: E1027 08:31:26.750989 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Oct 27 08:31:26.760573 kubelet[2391]: E1027 08:31:26.760535 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:26.761396 containerd[1607]: time="2025-10-27T08:31:26.761341394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ef7bfe7e69accbc40ff8aed706444418,Namespace:kube-system,Attempt:0,}" Oct 27 08:31:26.775683 kubelet[2391]: E1027 08:31:26.775626 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:26.776186 containerd[1607]: 
time="2025-10-27T08:31:26.776117639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 27 08:31:26.780759 kubelet[2391]: E1027 08:31:26.780469 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:26.781007 containerd[1607]: time="2025-10-27T08:31:26.780959828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 27 08:31:26.788361 containerd[1607]: time="2025-10-27T08:31:26.788292647Z" level=info msg="connecting to shim 6967d340a594aeaa34e21145194139777b5ad951de79a0af933f41aae7d485d5" address="unix:///run/containerd/s/d1da9488808d848c5af9cfd15b6c967a1690e6364255fc5ca00717f24f4088ff" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:31:26.806772 containerd[1607]: time="2025-10-27T08:31:26.806692807Z" level=info msg="connecting to shim 5e9ad061ea81bd3f8e2c627b25d28eb6b3621e275a684c1dacce362f85083a86" address="unix:///run/containerd/s/668ea4fc4aaa0d760950a7af81f4cd1531d8f567e19548c5c9bf2750a964f53e" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:31:26.821917 systemd[1]: Started cri-containerd-6967d340a594aeaa34e21145194139777b5ad951de79a0af933f41aae7d485d5.scope - libcontainer container 6967d340a594aeaa34e21145194139777b5ad951de79a0af933f41aae7d485d5. Oct 27 08:31:26.827300 containerd[1607]: time="2025-10-27T08:31:26.827133544Z" level=info msg="connecting to shim beb467583be100260d4f967640cde8bb0ed0f7dbe1d2c9622be65a4e10a65120" address="unix:///run/containerd/s/4a4c450231692ce320b0fd52233d9c1d20213e34dd0b1abe61539f74bef5846f" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:31:26.846816 systemd[1]: Started cri-containerd-5e9ad061ea81bd3f8e2c627b25d28eb6b3621e275a684c1dacce362f85083a86.scope - libcontainer container 5e9ad061ea81bd3f8e2c627b25d28eb6b3621e275a684c1dacce362f85083a86. Oct 27 08:31:26.852351 systemd[1]: Started cri-containerd-beb467583be100260d4f967640cde8bb0ed0f7dbe1d2c9622be65a4e10a65120.scope - libcontainer container beb467583be100260d4f967640cde8bb0ed0f7dbe1d2c9622be65a4e10a65120. 
Oct 27 08:31:26.873182 containerd[1607]: time="2025-10-27T08:31:26.873123685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ef7bfe7e69accbc40ff8aed706444418,Namespace:kube-system,Attempt:0,} returns sandbox id \"6967d340a594aeaa34e21145194139777b5ad951de79a0af933f41aae7d485d5\"" Oct 27 08:31:26.874639 kubelet[2391]: E1027 08:31:26.874608 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:26.877511 containerd[1607]: time="2025-10-27T08:31:26.877473391Z" level=info msg="CreateContainer within sandbox \"6967d340a594aeaa34e21145194139777b5ad951de79a0af933f41aae7d485d5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 27 08:31:26.894370 containerd[1607]: time="2025-10-27T08:31:26.894292276Z" level=info msg="Container 45b1f9055a04695db5d74927253dc0e2af9b892faa7b992582e5cfcf69d6ab0f: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:31:26.896968 containerd[1607]: time="2025-10-27T08:31:26.896926365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e9ad061ea81bd3f8e2c627b25d28eb6b3621e275a684c1dacce362f85083a86\"" Oct 27 08:31:26.897756 kubelet[2391]: E1027 08:31:26.897720 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:26.900993 containerd[1607]: time="2025-10-27T08:31:26.900964276Z" level=info msg="CreateContainer within sandbox \"5e9ad061ea81bd3f8e2c627b25d28eb6b3621e275a684c1dacce362f85083a86\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 27 08:31:26.902557 containerd[1607]: time="2025-10-27T08:31:26.902513431Z" level=info msg="CreateContainer within sandbox \"6967d340a594aeaa34e21145194139777b5ad951de79a0af933f41aae7d485d5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"45b1f9055a04695db5d74927253dc0e2af9b892faa7b992582e5cfcf69d6ab0f\"" Oct 27 08:31:26.903089 containerd[1607]: time="2025-10-27T08:31:26.903066859Z" level=info msg="StartContainer for \"45b1f9055a04695db5d74927253dc0e2af9b892faa7b992582e5cfcf69d6ab0f\"" Oct 27 08:31:26.904066 containerd[1607]: time="2025-10-27T08:31:26.904041065Z" level=info msg="connecting to shim 45b1f9055a04695db5d74927253dc0e2af9b892faa7b992582e5cfcf69d6ab0f" address="unix:///run/containerd/s/d1da9488808d848c5af9cfd15b6c967a1690e6364255fc5ca00717f24f4088ff" protocol=ttrpc version=3 Oct 27 08:31:26.908108 containerd[1607]: time="2025-10-27T08:31:26.908060642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"beb467583be100260d4f967640cde8bb0ed0f7dbe1d2c9622be65a4e10a65120\"" Oct 27 08:31:26.908788 kubelet[2391]: E1027 08:31:26.908754 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:26.910850 containerd[1607]: time="2025-10-27T08:31:26.910795560Z" level=info msg="CreateContainer within sandbox \"beb467583be100260d4f967640cde8bb0ed0f7dbe1d2c9622be65a4e10a65120\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 27 08:31:26.912221 
containerd[1607]: time="2025-10-27T08:31:26.912193221Z" level=info msg="Container 601fae52d7b75e9abe543c46cff480df7fedfac18ae6e69a8fc8fc285bdc1612: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:31:26.916952 containerd[1607]: time="2025-10-27T08:31:26.916909814Z" level=info msg="Container b4d5f7c7e02c28832de0549d91473cddd28554d0e1593c18b44393df5c463401: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:31:26.918629 kubelet[2391]: I1027 08:31:26.918575 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 08:31:26.919076 kubelet[2391]: E1027 08:31:26.919053 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Oct 27 08:31:26.926676 containerd[1607]: time="2025-10-27T08:31:26.926548036Z" level=info msg="CreateContainer within sandbox \"5e9ad061ea81bd3f8e2c627b25d28eb6b3621e275a684c1dacce362f85083a86\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"601fae52d7b75e9abe543c46cff480df7fedfac18ae6e69a8fc8fc285bdc1612\"" Oct 27 08:31:26.928166 containerd[1607]: time="2025-10-27T08:31:26.927147280Z" level=info msg="StartContainer for \"601fae52d7b75e9abe543c46cff480df7fedfac18ae6e69a8fc8fc285bdc1612\"" Oct 27 08:31:26.928166 containerd[1607]: time="2025-10-27T08:31:26.928108131Z" level=info msg="connecting to shim 601fae52d7b75e9abe543c46cff480df7fedfac18ae6e69a8fc8fc285bdc1612" address="unix:///run/containerd/s/668ea4fc4aaa0d760950a7af81f4cd1531d8f567e19548c5c9bf2750a964f53e" protocol=ttrpc version=3 Oct 27 08:31:26.927664 systemd[1]: Started cri-containerd-45b1f9055a04695db5d74927253dc0e2af9b892faa7b992582e5cfcf69d6ab0f.scope - libcontainer container 45b1f9055a04695db5d74927253dc0e2af9b892faa7b992582e5cfcf69d6ab0f. Oct 27 08:31:26.928932 containerd[1607]: time="2025-10-27T08:31:26.928912439Z" level=info msg="CreateContainer within sandbox \"beb467583be100260d4f967640cde8bb0ed0f7dbe1d2c9622be65a4e10a65120\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b4d5f7c7e02c28832de0549d91473cddd28554d0e1593c18b44393df5c463401\"" Oct 27 08:31:26.929473 containerd[1607]: time="2025-10-27T08:31:26.929446260Z" level=info msg="StartContainer for \"b4d5f7c7e02c28832de0549d91473cddd28554d0e1593c18b44393df5c463401\"" Oct 27 08:31:26.930375 containerd[1607]: time="2025-10-27T08:31:26.930302586Z" level=info msg="connecting to shim b4d5f7c7e02c28832de0549d91473cddd28554d0e1593c18b44393df5c463401" address="unix:///run/containerd/s/4a4c450231692ce320b0fd52233d9c1d20213e34dd0b1abe61539f74bef5846f" protocol=ttrpc version=3 Oct 27 08:31:26.964632 systemd[1]: Started cri-containerd-601fae52d7b75e9abe543c46cff480df7fedfac18ae6e69a8fc8fc285bdc1612.scope - libcontainer container 601fae52d7b75e9abe543c46cff480df7fedfac18ae6e69a8fc8fc285bdc1612. Oct 27 08:31:26.966504 systemd[1]: Started cri-containerd-b4d5f7c7e02c28832de0549d91473cddd28554d0e1593c18b44393df5c463401.scope - libcontainer container b4d5f7c7e02c28832de0549d91473cddd28554d0e1593c18b44393df5c463401. 
Oct 27 08:31:26.989182 containerd[1607]: time="2025-10-27T08:31:26.989098167Z" level=info msg="StartContainer for \"45b1f9055a04695db5d74927253dc0e2af9b892faa7b992582e5cfcf69d6ab0f\" returns successfully" Oct 27 08:31:27.014502 kubelet[2391]: W1027 08:31:27.014395 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 27 08:31:27.014502 kubelet[2391]: E1027 08:31:27.014492 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Oct 27 08:31:27.030791 containerd[1607]: time="2025-10-27T08:31:27.030688559Z" level=info msg="StartContainer for \"601fae52d7b75e9abe543c46cff480df7fedfac18ae6e69a8fc8fc285bdc1612\" returns successfully" Oct 27 08:31:27.036640 containerd[1607]: time="2025-10-27T08:31:27.036592179Z" level=info msg="StartContainer for \"b4d5f7c7e02c28832de0549d91473cddd28554d0e1593c18b44393df5c463401\" returns successfully" Oct 27 08:31:27.721725 kubelet[2391]: I1027 08:31:27.721671 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 08:31:27.748975 kubelet[2391]: E1027 08:31:27.748939 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:31:27.749109 kubelet[2391]: E1027 08:31:27.749072 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:27.750877 kubelet[2391]: E1027 08:31:27.750859 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:31:27.750955 kubelet[2391]: E1027 08:31:27.750938 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:27.755396 kubelet[2391]: E1027 08:31:27.755377 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:31:27.755530 kubelet[2391]: E1027 08:31:27.755514 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:28.108700 kubelet[2391]: E1027 08:31:28.108651 2391 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 27 08:31:28.298628 kubelet[2391]: I1027 08:31:28.298487 2391 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 27 08:31:28.298628 kubelet[2391]: E1027 08:31:28.298537 2391 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 27 08:31:28.313140 kubelet[2391]: E1027 08:31:28.313086 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 
08:31:28.413987 kubelet[2391]: E1027 08:31:28.413778 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:31:28.514363 kubelet[2391]: E1027 08:31:28.514313 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:31:28.614976 kubelet[2391]: E1027 08:31:28.614912 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:31:28.716065 kubelet[2391]: E1027 08:31:28.715919 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:31:28.756200 kubelet[2391]: I1027 08:31:28.756143 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 27 08:31:28.756687 kubelet[2391]: I1027 08:31:28.756592 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 08:31:28.756867 kubelet[2391]: I1027 08:31:28.756824 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 08:31:28.761125 kubelet[2391]: E1027 08:31:28.761083 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 27 08:31:28.761250 kubelet[2391]: E1027 08:31:28.761127 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 27 08:31:28.761306 kubelet[2391]: E1027 08:31:28.761280 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:28.761363 kubelet[2391]: E1027 08:31:28.761314 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 27 08:31:28.761363 kubelet[2391]: E1027 08:31:28.761281 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:28.761497 kubelet[2391]: E1027 08:31:28.761474 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:28.817300 kubelet[2391]: I1027 08:31:28.817245 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 27 08:31:28.819570 kubelet[2391]: E1027 08:31:28.819519 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 27 08:31:28.819570 kubelet[2391]: I1027 08:31:28.819551 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 08:31:28.820913 kubelet[2391]: E1027 08:31:28.820876 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 27 08:31:28.820913 kubelet[2391]: I1027 08:31:28.820896 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 08:31:28.822050 kubelet[2391]: E1027 08:31:28.822008 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 27 08:31:29.706990 kubelet[2391]: I1027 08:31:29.706931 2391 apiserver.go:52] "Watching apiserver" Oct 27 08:31:29.717269 kubelet[2391]: I1027 08:31:29.717238 2391 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 27 08:31:29.895764 kubelet[2391]: I1027 08:31:29.895708 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 08:31:29.900532 kubelet[2391]: E1027 08:31:29.900499 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:30.181484 systemd[1]: Reload requested from client PID 2669 ('systemctl') (unit session-7.scope)... Oct 27 08:31:30.181508 systemd[1]: Reloading... Oct 27 08:31:30.276454 zram_generator::config[2713]: No configuration found. Oct 27 08:31:30.760829 kubelet[2391]: E1027 08:31:30.760786 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:30.796489 systemd[1]: Reloading finished in 614 ms. Oct 27 08:31:30.840554 kubelet[2391]: I1027 08:31:30.840483 2391 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 08:31:30.840927 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:31:30.864484 systemd[1]: kubelet.service: Deactivated successfully. Oct 27 08:31:30.864997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:31:30.865072 systemd[1]: kubelet.service: Consumed 937ms CPU time, 129.8M memory peak. Oct 27 08:31:30.869548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:31:31.121269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:31:31.130789 (kubelet)[2758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 08:31:31.180885 kubelet[2758]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 08:31:31.180885 kubelet[2758]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 27 08:31:31.180885 kubelet[2758]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
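
The "no PriorityClass with name system-node-critical was found" failures above are transient: the static control-plane pods reference the built-in system-node-critical class, the API server creates the built-in classes shortly after it comes up, and later entries show the mirror pods being accepted ("already exists"). For reference, the upstream built-in classes and their priority values (Kubernetes defaults, not taken from this log):

    package main

    import "fmt"

    func main() {
        // Built-in PriorityClasses ensured by the API server at startup.
        builtins := map[string]int32{
            "system-node-critical":    2000001000, // referenced by the static control-plane pods
            "system-cluster-critical": 2000000000,
        }
        for name, value := range builtins {
            fmt.Printf("%s = %d\n", name, value)
        }
    }
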
Oct 27 08:31:31.181611 kubelet[2758]: I1027 08:31:31.180971 2758 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 08:31:31.189343 kubelet[2758]: I1027 08:31:31.189258 2758 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 27 08:31:31.189343 kubelet[2758]: I1027 08:31:31.189285 2758 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 08:31:31.190767 kubelet[2758]: I1027 08:31:31.190726 2758 server.go:954] "Client rotation is on, will bootstrap in background" Oct 27 08:31:31.197451 kubelet[2758]: I1027 08:31:31.196474 2758 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 27 08:31:31.199330 kubelet[2758]: I1027 08:31:31.199208 2758 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 08:31:31.205137 kubelet[2758]: I1027 08:31:31.205110 2758 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 27 08:31:31.211393 kubelet[2758]: I1027 08:31:31.211044 2758 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 27 08:31:31.211393 kubelet[2758]: I1027 08:31:31.211326 2758 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 27 08:31:31.211536 kubelet[2758]: I1027 08:31:31.211352 2758 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 27 08:31:31.211612 kubelet[2758]: I1027 08:31:31.211543 2758 topology_manager.go:138] "Creating topology manager with none policy" Oct 27 08:31:31.211612 kubelet[2758]: I1027 08:31:31.211552 2758 container_manager_linux.go:304] "Creating device plugin manager" Oct 27 08:31:31.211612 kubelet[2758]: I1027 08:31:31.211602 2758 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:31:31.211859 kubelet[2758]: I1027 
08:31:31.211833 2758 kubelet.go:446] "Attempting to sync node with API server" Oct 27 08:31:31.212446 kubelet[2758]: I1027 08:31:31.212404 2758 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 27 08:31:31.212500 kubelet[2758]: I1027 08:31:31.212453 2758 kubelet.go:352] "Adding apiserver pod source" Oct 27 08:31:31.212500 kubelet[2758]: I1027 08:31:31.212472 2758 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 27 08:31:31.214907 kubelet[2758]: I1027 08:31:31.214862 2758 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 27 08:31:31.217001 kubelet[2758]: I1027 08:31:31.216942 2758 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 27 08:31:31.220012 kubelet[2758]: I1027 08:31:31.219978 2758 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 27 08:31:31.220077 kubelet[2758]: I1027 08:31:31.220050 2758 server.go:1287] "Started kubelet" Oct 27 08:31:31.224052 kubelet[2758]: I1027 08:31:31.223767 2758 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 27 08:31:31.225569 kubelet[2758]: I1027 08:31:31.225540 2758 server.go:479] "Adding debug handlers to kubelet server" Oct 27 08:31:31.229758 kubelet[2758]: I1027 08:31:31.229708 2758 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 27 08:31:31.229923 kubelet[2758]: I1027 08:31:31.229897 2758 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 27 08:31:31.230321 kubelet[2758]: I1027 08:31:31.230272 2758 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 27 08:31:31.232091 kubelet[2758]: I1027 08:31:31.232017 2758 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 27 08:31:31.233607 kubelet[2758]: E1027 08:31:31.232482 2758 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:31:31.233607 kubelet[2758]: I1027 08:31:31.233457 2758 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 27 08:31:31.233704 kubelet[2758]: I1027 08:31:31.233686 2758 reconciler.go:26] "Reconciler: start to sync state" Oct 27 08:31:31.235274 kubelet[2758]: I1027 08:31:31.235226 2758 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 27 08:31:31.239824 kubelet[2758]: I1027 08:31:31.239799 2758 factory.go:221] Registration of the systemd container factory successfully Oct 27 08:31:31.240079 kubelet[2758]: I1027 08:31:31.240048 2758 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 27 08:31:31.241859 kubelet[2758]: I1027 08:31:31.241826 2758 factory.go:221] Registration of the containerd container factory successfully Oct 27 08:31:31.242076 kubelet[2758]: E1027 08:31:31.242039 2758 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 27 08:31:31.252043 kubelet[2758]: I1027 08:31:31.251959 2758 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 27 08:31:31.253340 kubelet[2758]: I1027 08:31:31.253305 2758 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 27 08:31:31.253429 kubelet[2758]: I1027 08:31:31.253345 2758 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 27 08:31:31.253429 kubelet[2758]: I1027 08:31:31.253377 2758 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 27 08:31:31.253429 kubelet[2758]: I1027 08:31:31.253387 2758 kubelet.go:2382] "Starting kubelet main sync loop" Oct 27 08:31:31.253519 kubelet[2758]: E1027 08:31:31.253473 2758 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 08:31:31.306506 kubelet[2758]: I1027 08:31:31.306236 2758 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 08:31:31.306506 kubelet[2758]: I1027 08:31:31.306506 2758 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 08:31:31.306702 kubelet[2758]: I1027 08:31:31.306543 2758 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:31:31.306790 kubelet[2758]: I1027 08:31:31.306763 2758 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 27 08:31:31.306827 kubelet[2758]: I1027 08:31:31.306785 2758 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 27 08:31:31.306827 kubelet[2758]: I1027 08:31:31.306814 2758 policy_none.go:49] "None policy: Start" Oct 27 08:31:31.306869 kubelet[2758]: I1027 08:31:31.306826 2758 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 27 08:31:31.306869 kubelet[2758]: I1027 08:31:31.306843 2758 state_mem.go:35] "Initializing new in-memory state store" Oct 27 08:31:31.307119 kubelet[2758]: I1027 08:31:31.307080 2758 state_mem.go:75] "Updated machine memory state" Oct 27 08:31:31.315516 kubelet[2758]: I1027 08:31:31.314844 2758 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 27 08:31:31.315516 kubelet[2758]: I1027 08:31:31.315099 2758 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 08:31:31.315516 kubelet[2758]: I1027 08:31:31.315113 2758 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 08:31:31.315516 kubelet[2758]: I1027 08:31:31.315371 2758 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 08:31:31.317255 kubelet[2758]: E1027 08:31:31.317233 2758 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 27 08:31:31.355238 kubelet[2758]: I1027 08:31:31.355178 2758 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 08:31:31.355488 kubelet[2758]: I1027 08:31:31.355186 2758 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 08:31:31.355488 kubelet[2758]: I1027 08:31:31.355185 2758 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 27 08:31:31.363996 kubelet[2758]: E1027 08:31:31.363957 2758 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 27 08:31:31.422311 kubelet[2758]: I1027 08:31:31.422198 2758 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 08:31:31.430106 kubelet[2758]: I1027 08:31:31.429777 2758 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 27 08:31:31.430106 kubelet[2758]: I1027 08:31:31.429849 2758 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 27 08:31:31.435776 kubelet[2758]: I1027 08:31:31.435659 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:31:31.435776 kubelet[2758]: I1027 08:31:31.435706 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 27 08:31:31.435776 kubelet[2758]: I1027 08:31:31.435728 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef7bfe7e69accbc40ff8aed706444418-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ef7bfe7e69accbc40ff8aed706444418\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:31:31.435776 kubelet[2758]: I1027 08:31:31.435786 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef7bfe7e69accbc40ff8aed706444418-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ef7bfe7e69accbc40ff8aed706444418\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:31:31.436145 kubelet[2758]: I1027 08:31:31.435826 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:31:31.436145 kubelet[2758]: I1027 08:31:31.435843 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 27 08:31:31.436145 kubelet[2758]: I1027 08:31:31.435865 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef7bfe7e69accbc40ff8aed706444418-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ef7bfe7e69accbc40ff8aed706444418\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:31:31.436145 kubelet[2758]: I1027 08:31:31.435900 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:31:31.436145 kubelet[2758]: I1027 08:31:31.435941 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:31:31.664768 kubelet[2758]: E1027 08:31:31.664619 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:31.666202 kubelet[2758]: E1027 08:31:31.666178 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:31.666259 kubelet[2758]: E1027 08:31:31.666181 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:32.214748 kubelet[2758]: I1027 08:31:32.214686 2758 apiserver.go:52] "Watching apiserver" Oct 27 08:31:32.234034 kubelet[2758]: I1027 08:31:32.234001 2758 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 27 08:31:32.283022 kubelet[2758]: I1027 08:31:32.282982 2758 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 27 08:31:32.283805 kubelet[2758]: I1027 08:31:32.283134 2758 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 08:31:32.283805 kubelet[2758]: I1027 08:31:32.283453 2758 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 08:31:32.366728 kubelet[2758]: E1027 08:31:32.366648 2758 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 27 08:31:32.366928 kubelet[2758]: E1027 08:31:32.366648 2758 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 27 08:31:32.367011 kubelet[2758]: E1027 08:31:32.366978 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:32.367760 kubelet[2758]: E1027 08:31:32.367059 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:32.367760 kubelet[2758]: E1027 08:31:32.367217 2758 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 27 08:31:32.367760 kubelet[2758]: I1027 08:31:32.367564 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.367541649 podStartE2EDuration="1.367541649s" podCreationTimestamp="2025-10-27 08:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:31:32.367137993 +0000 UTC m=+1.230851209" watchObservedRunningTime="2025-10-27 08:31:32.367541649 +0000 UTC m=+1.231254855" Oct 27 08:31:32.367961 kubelet[2758]: E1027 08:31:32.367888 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:32.377344 kubelet[2758]: I1027 08:31:32.377286 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.377253009 podStartE2EDuration="3.377253009s" podCreationTimestamp="2025-10-27 08:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:31:32.376562457 +0000 UTC m=+1.240275663" watchObservedRunningTime="2025-10-27 08:31:32.377253009 +0000 UTC m=+1.240966216" Oct 27 08:31:32.385200 kubelet[2758]: I1027 08:31:32.385141 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.385127002 podStartE2EDuration="1.385127002s" podCreationTimestamp="2025-10-27 08:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:31:32.384539014 +0000 UTC m=+1.248252220" watchObservedRunningTime="2025-10-27 08:31:32.385127002 +0000 UTC m=+1.248840208" Oct 27 08:31:33.284746 kubelet[2758]: E1027 08:31:33.284593 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:33.284746 kubelet[2758]: E1027 08:31:33.284662 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:33.284746 kubelet[2758]: E1027 08:31:33.284776 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:34.285950 kubelet[2758]: E1027 08:31:34.285900 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:35.722958 systemd[1]: Created slice kubepods-besteffort-pod6c7dcbc1_b77b_42ce_b5f3_65c17dbb6b71.slice - libcontainer container kubepods-besteffort-pod6c7dcbc1_b77b_42ce_b5f3_65c17dbb6b71.slice. 
Oct 27 08:31:35.755384 kubelet[2758]: I1027 08:31:35.755326 2758 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 27 08:31:35.755873 containerd[1607]: time="2025-10-27T08:31:35.755813186Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 27 08:31:35.756143 kubelet[2758]: I1027 08:31:35.756042 2758 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 27 08:31:35.761289 kubelet[2758]: I1027 08:31:35.761238 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71-lib-modules\") pod \"kube-proxy-pp5ff\" (UID: \"6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71\") " pod="kube-system/kube-proxy-pp5ff" Oct 27 08:31:35.761289 kubelet[2758]: I1027 08:31:35.761269 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcjgm\" (UniqueName: \"kubernetes.io/projected/6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71-kube-api-access-gcjgm\") pod \"kube-proxy-pp5ff\" (UID: \"6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71\") " pod="kube-system/kube-proxy-pp5ff" Oct 27 08:31:35.761289 kubelet[2758]: I1027 08:31:35.761286 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71-xtables-lock\") pod \"kube-proxy-pp5ff\" (UID: \"6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71\") " pod="kube-system/kube-proxy-pp5ff" Oct 27 08:31:35.761289 kubelet[2758]: I1027 08:31:35.761301 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71-kube-proxy\") pod \"kube-proxy-pp5ff\" (UID: \"6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71\") " pod="kube-system/kube-proxy-pp5ff" Oct 27 08:31:35.866210 kubelet[2758]: E1027 08:31:35.866161 2758 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 27 08:31:35.866210 kubelet[2758]: E1027 08:31:35.866207 2758 projected.go:194] Error preparing data for projected volume kube-api-access-gcjgm for pod kube-system/kube-proxy-pp5ff: configmap "kube-root-ca.crt" not found Oct 27 08:31:35.866426 kubelet[2758]: E1027 08:31:35.866264 2758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71-kube-api-access-gcjgm podName:6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71 nodeName:}" failed. No retries permitted until 2025-10-27 08:31:36.366239544 +0000 UTC m=+5.229952750 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gcjgm" (UniqueName: "kubernetes.io/projected/6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71-kube-api-access-gcjgm") pod "kube-proxy-pp5ff" (UID: "6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71") : configmap "kube-root-ca.crt" not found Oct 27 08:31:36.466147 kubelet[2758]: E1027 08:31:36.466083 2758 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 27 08:31:36.466147 kubelet[2758]: E1027 08:31:36.466122 2758 projected.go:194] Error preparing data for projected volume kube-api-access-gcjgm for pod kube-system/kube-proxy-pp5ff: configmap "kube-root-ca.crt" not found Oct 27 08:31:36.466365 kubelet[2758]: E1027 08:31:36.466204 2758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71-kube-api-access-gcjgm podName:6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71 nodeName:}" failed. No retries permitted until 2025-10-27 08:31:37.466164283 +0000 UTC m=+6.329877489 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gcjgm" (UniqueName: "kubernetes.io/projected/6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71-kube-api-access-gcjgm") pod "kube-proxy-pp5ff" (UID: "6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71") : configmap "kube-root-ca.crt" not found Oct 27 08:31:36.849404 systemd[1]: Created slice kubepods-besteffort-podd13ebce1_fbef_4439_867a_bdc6592d9715.slice - libcontainer container kubepods-besteffort-podd13ebce1_fbef_4439_867a_bdc6592d9715.slice. Oct 27 08:31:36.869247 kubelet[2758]: I1027 08:31:36.869201 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-964xf\" (UniqueName: \"kubernetes.io/projected/d13ebce1-fbef-4439-867a-bdc6592d9715-kube-api-access-964xf\") pod \"tigera-operator-7dcd859c48-tx74p\" (UID: \"d13ebce1-fbef-4439-867a-bdc6592d9715\") " pod="tigera-operator/tigera-operator-7dcd859c48-tx74p" Oct 27 08:31:36.869675 kubelet[2758]: I1027 08:31:36.869314 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d13ebce1-fbef-4439-867a-bdc6592d9715-var-lib-calico\") pod \"tigera-operator-7dcd859c48-tx74p\" (UID: \"d13ebce1-fbef-4439-867a-bdc6592d9715\") " pod="tigera-operator/tigera-operator-7dcd859c48-tx74p" Oct 27 08:31:37.154324 containerd[1607]: time="2025-10-27T08:31:37.154196341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-tx74p,Uid:d13ebce1-fbef-4439-867a-bdc6592d9715,Namespace:tigera-operator,Attempt:0,}" Oct 27 08:31:37.184298 containerd[1607]: time="2025-10-27T08:31:37.183621594Z" level=info msg="connecting to shim 8566139a11b562da895960ddbf71c0aeb2ee3fe6487bb29acbc7fd61ef9880d1" address="unix:///run/containerd/s/5e7544e62da3da204752b577f962eb1b8140a7fffb32335b0e11d20de3adad65" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:31:37.216562 systemd[1]: Started cri-containerd-8566139a11b562da895960ddbf71c0aeb2ee3fe6487bb29acbc7fd61ef9880d1.scope - libcontainer container 8566139a11b562da895960ddbf71c0aeb2ee3fe6487bb29acbc7fd61ef9880d1. 
Oct 27 08:31:37.265784 containerd[1607]: time="2025-10-27T08:31:37.265725626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-tx74p,Uid:d13ebce1-fbef-4439-867a-bdc6592d9715,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8566139a11b562da895960ddbf71c0aeb2ee3fe6487bb29acbc7fd61ef9880d1\"" Oct 27 08:31:37.267632 containerd[1607]: time="2025-10-27T08:31:37.267603485Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 27 08:31:37.534142 kubelet[2758]: E1027 08:31:37.534090 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:37.534548 containerd[1607]: time="2025-10-27T08:31:37.534501770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pp5ff,Uid:6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71,Namespace:kube-system,Attempt:0,}" Oct 27 08:31:37.713540 containerd[1607]: time="2025-10-27T08:31:37.713494069Z" level=info msg="connecting to shim 4cf2b52d0cb61e6153e8a6284f5b1971a5d77aa3dcef438c9583ac43cfde598b" address="unix:///run/containerd/s/37119e4ec122fbb54cf615a3a5f0982f0942957d51e4804eef00b09dd08011de" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:31:37.748595 systemd[1]: Started cri-containerd-4cf2b52d0cb61e6153e8a6284f5b1971a5d77aa3dcef438c9583ac43cfde598b.scope - libcontainer container 4cf2b52d0cb61e6153e8a6284f5b1971a5d77aa3dcef438c9583ac43cfde598b. Oct 27 08:31:37.775271 containerd[1607]: time="2025-10-27T08:31:37.775211094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pp5ff,Uid:6c7dcbc1-b77b-42ce-b5f3-65c17dbb6b71,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cf2b52d0cb61e6153e8a6284f5b1971a5d77aa3dcef438c9583ac43cfde598b\"" Oct 27 08:31:37.775949 kubelet[2758]: E1027 08:31:37.775924 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:37.778074 containerd[1607]: time="2025-10-27T08:31:37.778038902Z" level=info msg="CreateContainer within sandbox \"4cf2b52d0cb61e6153e8a6284f5b1971a5d77aa3dcef438c9583ac43cfde598b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 27 08:31:37.788928 containerd[1607]: time="2025-10-27T08:31:37.788811876Z" level=info msg="Container 605d7fe4cbb45670a21ce60280db07f9f01ec5f656fed999360c3b7e2530413b: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:31:37.797798 containerd[1607]: time="2025-10-27T08:31:37.797765296Z" level=info msg="CreateContainer within sandbox \"4cf2b52d0cb61e6153e8a6284f5b1971a5d77aa3dcef438c9583ac43cfde598b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"605d7fe4cbb45670a21ce60280db07f9f01ec5f656fed999360c3b7e2530413b\"" Oct 27 08:31:37.798446 containerd[1607]: time="2025-10-27T08:31:37.798245811Z" level=info msg="StartContainer for \"605d7fe4cbb45670a21ce60280db07f9f01ec5f656fed999360c3b7e2530413b\"" Oct 27 08:31:37.799977 containerd[1607]: time="2025-10-27T08:31:37.799946928Z" level=info msg="connecting to shim 605d7fe4cbb45670a21ce60280db07f9f01ec5f656fed999360c3b7e2530413b" address="unix:///run/containerd/s/37119e4ec122fbb54cf615a3a5f0982f0942957d51e4804eef00b09dd08011de" protocol=ttrpc version=3 Oct 27 08:31:37.823562 systemd[1]: Started cri-containerd-605d7fe4cbb45670a21ce60280db07f9f01ec5f656fed999360c3b7e2530413b.scope - libcontainer container 
605d7fe4cbb45670a21ce60280db07f9f01ec5f656fed999360c3b7e2530413b. Oct 27 08:31:37.868588 containerd[1607]: time="2025-10-27T08:31:37.868532778Z" level=info msg="StartContainer for \"605d7fe4cbb45670a21ce60280db07f9f01ec5f656fed999360c3b7e2530413b\" returns successfully" Oct 27 08:31:38.296352 kubelet[2758]: E1027 08:31:38.296312 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:38.307355 kubelet[2758]: I1027 08:31:38.307298 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pp5ff" podStartSLOduration=3.307272717 podStartE2EDuration="3.307272717s" podCreationTimestamp="2025-10-27 08:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:31:38.307113857 +0000 UTC m=+7.170827063" watchObservedRunningTime="2025-10-27 08:31:38.307272717 +0000 UTC m=+7.170985913" Oct 27 08:31:38.915089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount107773286.mount: Deactivated successfully. Oct 27 08:31:39.502517 containerd[1607]: time="2025-10-27T08:31:39.502468694Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:39.503513 containerd[1607]: time="2025-10-27T08:31:39.503479141Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 27 08:31:39.505428 containerd[1607]: time="2025-10-27T08:31:39.505360318Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:39.507569 containerd[1607]: time="2025-10-27T08:31:39.507525003Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:39.508104 containerd[1607]: time="2025-10-27T08:31:39.508065881Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.240424759s" Oct 27 08:31:39.508104 containerd[1607]: time="2025-10-27T08:31:39.508096808Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 27 08:31:39.510295 containerd[1607]: time="2025-10-27T08:31:39.510236397Z" level=info msg="CreateContainer within sandbox \"8566139a11b562da895960ddbf71c0aeb2ee3fe6487bb29acbc7fd61ef9880d1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 27 08:31:39.517788 containerd[1607]: time="2025-10-27T08:31:39.517732285Z" level=info msg="Container 2537e085a44abb19473969d3dd2711c2d1325756465cb5807948487471d87cdf: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:31:39.527943 containerd[1607]: time="2025-10-27T08:31:39.527875658Z" level=info msg="CreateContainer within sandbox \"8566139a11b562da895960ddbf71c0aeb2ee3fe6487bb29acbc7fd61ef9880d1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"2537e085a44abb19473969d3dd2711c2d1325756465cb5807948487471d87cdf\"" Oct 27 08:31:39.528420 containerd[1607]: time="2025-10-27T08:31:39.528375502Z" level=info msg="StartContainer for \"2537e085a44abb19473969d3dd2711c2d1325756465cb5807948487471d87cdf\"" Oct 27 08:31:39.529354 containerd[1607]: time="2025-10-27T08:31:39.529327762Z" level=info msg="connecting to shim 2537e085a44abb19473969d3dd2711c2d1325756465cb5807948487471d87cdf" address="unix:///run/containerd/s/5e7544e62da3da204752b577f962eb1b8140a7fffb32335b0e11d20de3adad65" protocol=ttrpc version=3 Oct 27 08:31:39.592594 systemd[1]: Started cri-containerd-2537e085a44abb19473969d3dd2711c2d1325756465cb5807948487471d87cdf.scope - libcontainer container 2537e085a44abb19473969d3dd2711c2d1325756465cb5807948487471d87cdf. Oct 27 08:31:39.630269 containerd[1607]: time="2025-10-27T08:31:39.630197233Z" level=info msg="StartContainer for \"2537e085a44abb19473969d3dd2711c2d1325756465cb5807948487471d87cdf\" returns successfully" Oct 27 08:31:40.309143 kubelet[2758]: I1027 08:31:40.309069 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-tx74p" podStartSLOduration=2.067253085 podStartE2EDuration="4.309045609s" podCreationTimestamp="2025-10-27 08:31:36 +0000 UTC" firstStartedPulling="2025-10-27 08:31:37.267154838 +0000 UTC m=+6.130868044" lastFinishedPulling="2025-10-27 08:31:39.508947372 +0000 UTC m=+8.372660568" observedRunningTime="2025-10-27 08:31:40.308918716 +0000 UTC m=+9.172631932" watchObservedRunningTime="2025-10-27 08:31:40.309045609 +0000 UTC m=+9.172758815" Oct 27 08:31:41.595941 kubelet[2758]: E1027 08:31:41.595894 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:42.208433 kubelet[2758]: E1027 08:31:42.208340 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:42.305908 kubelet[2758]: E1027 08:31:42.305865 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:42.506091 update_engine[1578]: I20251027 08:31:42.505133 1578 update_attempter.cc:509] Updating boot flags... Oct 27 08:31:42.948186 kubelet[2758]: E1027 08:31:42.948057 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:43.310701 kubelet[2758]: E1027 08:31:43.309119 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:46.241279 sudo[1815]: pam_unix(sudo:session): session closed for user root Oct 27 08:31:46.243309 sshd[1814]: Connection closed by 10.0.0.1 port 55816 Oct 27 08:31:46.246194 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Oct 27 08:31:46.250833 systemd[1]: sshd@6-10.0.0.134:22-10.0.0.1:55816.service: Deactivated successfully. Oct 27 08:31:46.256723 systemd[1]: session-7.scope: Deactivated successfully. Oct 27 08:31:46.258851 systemd[1]: session-7.scope: Consumed 5.944s CPU time, 218.5M memory peak. Oct 27 08:31:46.262394 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit. 
Oct 27 08:31:46.265803 systemd-logind[1577]: Removed session 7. Oct 27 08:31:52.488347 systemd[1]: Created slice kubepods-besteffort-podc4d81294_8eba_4ef1_ae16_c03d220e82c1.slice - libcontainer container kubepods-besteffort-podc4d81294_8eba_4ef1_ae16_c03d220e82c1.slice. Oct 27 08:31:52.566321 kubelet[2758]: I1027 08:31:52.566257 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4d81294-8eba-4ef1-ae16-c03d220e82c1-tigera-ca-bundle\") pod \"calico-typha-698644ff65-v6btr\" (UID: \"c4d81294-8eba-4ef1-ae16-c03d220e82c1\") " pod="calico-system/calico-typha-698644ff65-v6btr" Oct 27 08:31:52.566321 kubelet[2758]: I1027 08:31:52.566308 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nlsb\" (UniqueName: \"kubernetes.io/projected/c4d81294-8eba-4ef1-ae16-c03d220e82c1-kube-api-access-5nlsb\") pod \"calico-typha-698644ff65-v6btr\" (UID: \"c4d81294-8eba-4ef1-ae16-c03d220e82c1\") " pod="calico-system/calico-typha-698644ff65-v6btr" Oct 27 08:31:52.566321 kubelet[2758]: I1027 08:31:52.566333 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c4d81294-8eba-4ef1-ae16-c03d220e82c1-typha-certs\") pod \"calico-typha-698644ff65-v6btr\" (UID: \"c4d81294-8eba-4ef1-ae16-c03d220e82c1\") " pod="calico-system/calico-typha-698644ff65-v6btr" Oct 27 08:31:52.669550 systemd[1]: Created slice kubepods-besteffort-poddb5dd603_ee31_4ac1_b8bd_17a2da16e7c2.slice - libcontainer container kubepods-besteffort-poddb5dd603_ee31_4ac1_b8bd_17a2da16e7c2.slice. Oct 27 08:31:52.768865 kubelet[2758]: I1027 08:31:52.768824 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/db5dd603-ee31-4ac1-b8bd-17a2da16e7c2-var-lib-calico\") pod \"calico-node-76ckp\" (UID: \"db5dd603-ee31-4ac1-b8bd-17a2da16e7c2\") " pod="calico-system/calico-node-76ckp" Oct 27 08:31:52.768865 kubelet[2758]: I1027 08:31:52.768864 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/db5dd603-ee31-4ac1-b8bd-17a2da16e7c2-var-run-calico\") pod \"calico-node-76ckp\" (UID: \"db5dd603-ee31-4ac1-b8bd-17a2da16e7c2\") " pod="calico-system/calico-node-76ckp" Oct 27 08:31:52.769041 kubelet[2758]: I1027 08:31:52.768884 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db5dd603-ee31-4ac1-b8bd-17a2da16e7c2-lib-modules\") pod \"calico-node-76ckp\" (UID: \"db5dd603-ee31-4ac1-b8bd-17a2da16e7c2\") " pod="calico-system/calico-node-76ckp" Oct 27 08:31:52.769041 kubelet[2758]: I1027 08:31:52.768903 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/db5dd603-ee31-4ac1-b8bd-17a2da16e7c2-node-certs\") pod \"calico-node-76ckp\" (UID: \"db5dd603-ee31-4ac1-b8bd-17a2da16e7c2\") " pod="calico-system/calico-node-76ckp" Oct 27 08:31:52.769041 kubelet[2758]: I1027 08:31:52.768918 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/db5dd603-ee31-4ac1-b8bd-17a2da16e7c2-policysync\") pod \"calico-node-76ckp\" (UID: 
\"db5dd603-ee31-4ac1-b8bd-17a2da16e7c2\") " pod="calico-system/calico-node-76ckp" Oct 27 08:31:52.769041 kubelet[2758]: I1027 08:31:52.768952 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/db5dd603-ee31-4ac1-b8bd-17a2da16e7c2-cni-net-dir\") pod \"calico-node-76ckp\" (UID: \"db5dd603-ee31-4ac1-b8bd-17a2da16e7c2\") " pod="calico-system/calico-node-76ckp" Oct 27 08:31:52.769041 kubelet[2758]: I1027 08:31:52.768967 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/db5dd603-ee31-4ac1-b8bd-17a2da16e7c2-flexvol-driver-host\") pod \"calico-node-76ckp\" (UID: \"db5dd603-ee31-4ac1-b8bd-17a2da16e7c2\") " pod="calico-system/calico-node-76ckp" Oct 27 08:31:52.769200 kubelet[2758]: I1027 08:31:52.769001 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/db5dd603-ee31-4ac1-b8bd-17a2da16e7c2-cni-bin-dir\") pod \"calico-node-76ckp\" (UID: \"db5dd603-ee31-4ac1-b8bd-17a2da16e7c2\") " pod="calico-system/calico-node-76ckp" Oct 27 08:31:52.769200 kubelet[2758]: I1027 08:31:52.769021 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wxsq\" (UniqueName: \"kubernetes.io/projected/db5dd603-ee31-4ac1-b8bd-17a2da16e7c2-kube-api-access-5wxsq\") pod \"calico-node-76ckp\" (UID: \"db5dd603-ee31-4ac1-b8bd-17a2da16e7c2\") " pod="calico-system/calico-node-76ckp" Oct 27 08:31:52.769200 kubelet[2758]: I1027 08:31:52.769039 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/db5dd603-ee31-4ac1-b8bd-17a2da16e7c2-cni-log-dir\") pod \"calico-node-76ckp\" (UID: \"db5dd603-ee31-4ac1-b8bd-17a2da16e7c2\") " pod="calico-system/calico-node-76ckp" Oct 27 08:31:52.769200 kubelet[2758]: I1027 08:31:52.769108 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db5dd603-ee31-4ac1-b8bd-17a2da16e7c2-xtables-lock\") pod \"calico-node-76ckp\" (UID: \"db5dd603-ee31-4ac1-b8bd-17a2da16e7c2\") " pod="calico-system/calico-node-76ckp" Oct 27 08:31:52.769200 kubelet[2758]: I1027 08:31:52.769162 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db5dd603-ee31-4ac1-b8bd-17a2da16e7c2-tigera-ca-bundle\") pod \"calico-node-76ckp\" (UID: \"db5dd603-ee31-4ac1-b8bd-17a2da16e7c2\") " pod="calico-system/calico-node-76ckp" Oct 27 08:31:52.796016 kubelet[2758]: E1027 08:31:52.795969 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:52.796803 containerd[1607]: time="2025-10-27T08:31:52.796460911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-698644ff65-v6btr,Uid:c4d81294-8eba-4ef1-ae16-c03d220e82c1,Namespace:calico-system,Attempt:0,}" Oct 27 08:31:52.873705 kubelet[2758]: E1027 08:31:52.871714 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.873705 kubelet[2758]: W1027 08:31:52.871746 2758 driver-call.go:149] 
FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.873705 kubelet[2758]: E1027 08:31:52.871796 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.875018 kubelet[2758]: E1027 08:31:52.874997 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.875018 kubelet[2758]: W1027 08:31:52.875012 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.875124 kubelet[2758]: E1027 08:31:52.875027 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.889286 kubelet[2758]: E1027 08:31:52.889234 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.890433 kubelet[2758]: W1027 08:31:52.889528 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.890433 kubelet[2758]: E1027 08:31:52.889564 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.911961 kubelet[2758]: E1027 08:31:52.911237 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxbd2" podUID="7910370a-d2b9-4ee0-8c0a-b904aff5f65a" Oct 27 08:31:52.931077 containerd[1607]: time="2025-10-27T08:31:52.931009262Z" level=info msg="connecting to shim 7bfaf7f4e75618df5edd324a08341a59960236eb418cfe90771e5d3bac9afa38" address="unix:///run/containerd/s/02460e279e5eeae101ee0987683f1597cfda5f04a6bccb51e8bd7168ee0e8a2b" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:31:52.956864 kubelet[2758]: E1027 08:31:52.956812 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.956864 kubelet[2758]: W1027 08:31:52.956845 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.957049 kubelet[2758]: E1027 08:31:52.956872 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:52.957049 kubelet[2758]: E1027 08:31:52.957044 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.957097 kubelet[2758]: W1027 08:31:52.957054 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.957097 kubelet[2758]: E1027 08:31:52.957065 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.957568 kubelet[2758]: E1027 08:31:52.957228 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.957568 kubelet[2758]: W1027 08:31:52.957247 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.957568 kubelet[2758]: E1027 08:31:52.957258 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.957568 kubelet[2758]: E1027 08:31:52.957518 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.957568 kubelet[2758]: W1027 08:31:52.957528 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.957568 kubelet[2758]: E1027 08:31:52.957539 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.958588 kubelet[2758]: E1027 08:31:52.957748 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.958588 kubelet[2758]: W1027 08:31:52.957759 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.958588 kubelet[2758]: E1027 08:31:52.957770 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.958588 kubelet[2758]: E1027 08:31:52.957964 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.958588 kubelet[2758]: W1027 08:31:52.957975 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.958588 kubelet[2758]: E1027 08:31:52.957985 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:52.958588 kubelet[2758]: E1027 08:31:52.958160 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.958588 kubelet[2758]: W1027 08:31:52.958169 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.958588 kubelet[2758]: E1027 08:31:52.958179 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.958588 kubelet[2758]: E1027 08:31:52.958336 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.959457 kubelet[2758]: W1027 08:31:52.958346 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.959457 kubelet[2758]: E1027 08:31:52.958355 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.959457 kubelet[2758]: E1027 08:31:52.958569 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.959457 kubelet[2758]: W1027 08:31:52.958581 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.959457 kubelet[2758]: E1027 08:31:52.958593 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.959457 kubelet[2758]: E1027 08:31:52.958766 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.959457 kubelet[2758]: W1027 08:31:52.958776 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.959457 kubelet[2758]: E1027 08:31:52.958786 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.959457 kubelet[2758]: E1027 08:31:52.959117 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.959457 kubelet[2758]: W1027 08:31:52.959128 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.960543 kubelet[2758]: E1027 08:31:52.959140 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:52.960543 kubelet[2758]: E1027 08:31:52.959308 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.960543 kubelet[2758]: W1027 08:31:52.959317 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.960543 kubelet[2758]: E1027 08:31:52.959327 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.960543 kubelet[2758]: E1027 08:31:52.959545 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.960543 kubelet[2758]: W1027 08:31:52.959556 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.960543 kubelet[2758]: E1027 08:31:52.959567 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.960543 kubelet[2758]: E1027 08:31:52.959732 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.960543 kubelet[2758]: W1027 08:31:52.959740 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.960543 kubelet[2758]: E1027 08:31:52.959751 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.961387 kubelet[2758]: E1027 08:31:52.959957 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.961387 kubelet[2758]: W1027 08:31:52.959981 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.961387 kubelet[2758]: E1027 08:31:52.959992 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.961387 kubelet[2758]: E1027 08:31:52.960171 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.961387 kubelet[2758]: W1027 08:31:52.960182 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.961387 kubelet[2758]: E1027 08:31:52.960193 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:52.961387 kubelet[2758]: E1027 08:31:52.960430 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.961387 kubelet[2758]: W1027 08:31:52.960442 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.961387 kubelet[2758]: E1027 08:31:52.960453 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.961387 kubelet[2758]: E1027 08:31:52.960665 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.962247 kubelet[2758]: W1027 08:31:52.960675 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.962247 kubelet[2758]: E1027 08:31:52.960686 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.962247 kubelet[2758]: E1027 08:31:52.960858 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.962247 kubelet[2758]: W1027 08:31:52.960868 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.962247 kubelet[2758]: E1027 08:31:52.960878 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.962247 kubelet[2758]: E1027 08:31:52.961050 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.962247 kubelet[2758]: W1027 08:31:52.961059 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.962247 kubelet[2758]: E1027 08:31:52.961069 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.961606 systemd[1]: Started cri-containerd-7bfaf7f4e75618df5edd324a08341a59960236eb418cfe90771e5d3bac9afa38.scope - libcontainer container 7bfaf7f4e75618df5edd324a08341a59960236eb418cfe90771e5d3bac9afa38. 
Oct 27 08:31:52.971786 kubelet[2758]: E1027 08:31:52.971754 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.971951 kubelet[2758]: W1027 08:31:52.971928 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.972038 kubelet[2758]: E1027 08:31:52.972018 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.972130 kubelet[2758]: I1027 08:31:52.972117 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7910370a-d2b9-4ee0-8c0a-b904aff5f65a-kubelet-dir\") pod \"csi-node-driver-wxbd2\" (UID: \"7910370a-d2b9-4ee0-8c0a-b904aff5f65a\") " pod="calico-system/csi-node-driver-wxbd2" Oct 27 08:31:52.972780 kubelet[2758]: E1027 08:31:52.972678 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.973535 kubelet[2758]: W1027 08:31:52.973490 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.973583 kubelet[2758]: E1027 08:31:52.973555 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.973953 kubelet[2758]: E1027 08:31:52.973864 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.973953 kubelet[2758]: W1027 08:31:52.973879 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.973953 kubelet[2758]: E1027 08:31:52.973932 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.974183 kubelet[2758]: E1027 08:31:52.974152 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:52.974867 kubelet[2758]: E1027 08:31:52.974826 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.974867 kubelet[2758]: W1027 08:31:52.974854 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.974867 kubelet[2758]: E1027 08:31:52.974865 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:52.974981 kubelet[2758]: I1027 08:31:52.974897 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7910370a-d2b9-4ee0-8c0a-b904aff5f65a-socket-dir\") pod \"csi-node-driver-wxbd2\" (UID: \"7910370a-d2b9-4ee0-8c0a-b904aff5f65a\") " pod="calico-system/csi-node-driver-wxbd2" Oct 27 08:31:52.975204 kubelet[2758]: E1027 08:31:52.975177 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.975204 kubelet[2758]: W1027 08:31:52.975194 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.975263 kubelet[2758]: E1027 08:31:52.975213 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.975263 kubelet[2758]: I1027 08:31:52.975228 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7910370a-d2b9-4ee0-8c0a-b904aff5f65a-registration-dir\") pod \"csi-node-driver-wxbd2\" (UID: \"7910370a-d2b9-4ee0-8c0a-b904aff5f65a\") " pod="calico-system/csi-node-driver-wxbd2" Oct 27 08:31:52.975576 kubelet[2758]: E1027 08:31:52.975491 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.975576 kubelet[2758]: W1027 08:31:52.975519 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.975576 kubelet[2758]: E1027 08:31:52.975544 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.975576 kubelet[2758]: I1027 08:31:52.975563 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbxkp\" (UniqueName: \"kubernetes.io/projected/7910370a-d2b9-4ee0-8c0a-b904aff5f65a-kube-api-access-zbxkp\") pod \"csi-node-driver-wxbd2\" (UID: \"7910370a-d2b9-4ee0-8c0a-b904aff5f65a\") " pod="calico-system/csi-node-driver-wxbd2" Oct 27 08:31:52.975935 kubelet[2758]: E1027 08:31:52.975814 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.975935 kubelet[2758]: W1027 08:31:52.975824 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.975935 kubelet[2758]: E1027 08:31:52.975844 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:52.975935 kubelet[2758]: I1027 08:31:52.975862 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7910370a-d2b9-4ee0-8c0a-b904aff5f65a-varrun\") pod \"csi-node-driver-wxbd2\" (UID: \"7910370a-d2b9-4ee0-8c0a-b904aff5f65a\") " pod="calico-system/csi-node-driver-wxbd2" Oct 27 08:31:52.976464 containerd[1607]: time="2025-10-27T08:31:52.975745278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-76ckp,Uid:db5dd603-ee31-4ac1-b8bd-17a2da16e7c2,Namespace:calico-system,Attempt:0,}" Oct 27 08:31:52.976530 kubelet[2758]: E1027 08:31:52.976103 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.976530 kubelet[2758]: W1027 08:31:52.976112 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.976530 kubelet[2758]: E1027 08:31:52.976138 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.976530 kubelet[2758]: E1027 08:31:52.976335 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.976530 kubelet[2758]: W1027 08:31:52.976342 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.976530 kubelet[2758]: E1027 08:31:52.976362 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.976692 kubelet[2758]: E1027 08:31:52.976660 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.976692 kubelet[2758]: W1027 08:31:52.976671 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.976760 kubelet[2758]: E1027 08:31:52.976698 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.977606 kubelet[2758]: E1027 08:31:52.976955 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.977606 kubelet[2758]: W1027 08:31:52.976966 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.977606 kubelet[2758]: E1027 08:31:52.977026 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:52.977606 kubelet[2758]: E1027 08:31:52.977200 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.977606 kubelet[2758]: W1027 08:31:52.977209 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.977606 kubelet[2758]: E1027 08:31:52.977341 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.977606 kubelet[2758]: E1027 08:31:52.977582 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.977606 kubelet[2758]: W1027 08:31:52.977591 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.977606 kubelet[2758]: E1027 08:31:52.977604 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.977821 kubelet[2758]: E1027 08:31:52.977785 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.977821 kubelet[2758]: W1027 08:31:52.977794 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.977821 kubelet[2758]: E1027 08:31:52.977804 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:52.978071 kubelet[2758]: E1027 08:31:52.978045 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:52.978071 kubelet[2758]: W1027 08:31:52.978063 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:52.978071 kubelet[2758]: E1027 08:31:52.978074 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.005772 containerd[1607]: time="2025-10-27T08:31:53.004653699Z" level=info msg="connecting to shim 60236c25ccfe0dc482eda4ce080e6a66618b49dd6e28fc0a93706885b7dd12a4" address="unix:///run/containerd/s/c14d9f2ccbfeae64f8bf4e16564fc5e7af9f8f08c2ba0f8e4035fb1a5715a2a8" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:31:53.029604 systemd[1]: Started cri-containerd-60236c25ccfe0dc482eda4ce080e6a66618b49dd6e28fc0a93706885b7dd12a4.scope - libcontainer container 60236c25ccfe0dc482eda4ce080e6a66618b49dd6e28fc0a93706885b7dd12a4. 
Oct 27 08:31:53.034943 containerd[1607]: time="2025-10-27T08:31:53.034827100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-698644ff65-v6btr,Uid:c4d81294-8eba-4ef1-ae16-c03d220e82c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"7bfaf7f4e75618df5edd324a08341a59960236eb418cfe90771e5d3bac9afa38\"" Oct 27 08:31:53.040434 kubelet[2758]: E1027 08:31:53.038661 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:53.040541 containerd[1607]: time="2025-10-27T08:31:53.039832649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 27 08:31:53.060260 containerd[1607]: time="2025-10-27T08:31:53.060207437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-76ckp,Uid:db5dd603-ee31-4ac1-b8bd-17a2da16e7c2,Namespace:calico-system,Attempt:0,} returns sandbox id \"60236c25ccfe0dc482eda4ce080e6a66618b49dd6e28fc0a93706885b7dd12a4\"" Oct 27 08:31:53.060991 kubelet[2758]: E1027 08:31:53.060965 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:53.078029 kubelet[2758]: E1027 08:31:53.077987 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.078029 kubelet[2758]: W1027 08:31:53.078013 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.078029 kubelet[2758]: E1027 08:31:53.078033 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.078297 kubelet[2758]: E1027 08:31:53.078252 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.078297 kubelet[2758]: W1027 08:31:53.078280 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.078297 kubelet[2758]: E1027 08:31:53.078297 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.078522 kubelet[2758]: E1027 08:31:53.078502 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.078522 kubelet[2758]: W1027 08:31:53.078518 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.078596 kubelet[2758]: E1027 08:31:53.078534 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:53.078798 kubelet[2758]: E1027 08:31:53.078763 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.078798 kubelet[2758]: W1027 08:31:53.078778 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.078798 kubelet[2758]: E1027 08:31:53.078793 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.079055 kubelet[2758]: E1027 08:31:53.079029 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.079085 kubelet[2758]: W1027 08:31:53.079054 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.079085 kubelet[2758]: E1027 08:31:53.079072 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.079390 kubelet[2758]: E1027 08:31:53.079348 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.079390 kubelet[2758]: W1027 08:31:53.079368 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.079390 kubelet[2758]: E1027 08:31:53.079392 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.079639 kubelet[2758]: E1027 08:31:53.079610 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.079639 kubelet[2758]: W1027 08:31:53.079623 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.079752 kubelet[2758]: E1027 08:31:53.079727 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.079905 kubelet[2758]: E1027 08:31:53.079879 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.079905 kubelet[2758]: W1027 08:31:53.079895 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.080030 kubelet[2758]: E1027 08:31:53.079994 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:53.080107 kubelet[2758]: E1027 08:31:53.080090 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.080107 kubelet[2758]: W1027 08:31:53.080100 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.080155 kubelet[2758]: E1027 08:31:53.080146 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.080322 kubelet[2758]: E1027 08:31:53.080305 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.080322 kubelet[2758]: W1027 08:31:53.080316 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.080380 kubelet[2758]: E1027 08:31:53.080346 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.080783 kubelet[2758]: E1027 08:31:53.080765 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.080783 kubelet[2758]: W1027 08:31:53.080778 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.080875 kubelet[2758]: E1027 08:31:53.080851 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.080983 kubelet[2758]: E1027 08:31:53.080967 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.080983 kubelet[2758]: W1027 08:31:53.080977 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.081048 kubelet[2758]: E1027 08:31:53.081001 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.081127 kubelet[2758]: E1027 08:31:53.081112 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.081127 kubelet[2758]: W1027 08:31:53.081122 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.081179 kubelet[2758]: E1027 08:31:53.081132 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:53.081307 kubelet[2758]: E1027 08:31:53.081288 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.081307 kubelet[2758]: W1027 08:31:53.081300 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.081379 kubelet[2758]: E1027 08:31:53.081324 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.081513 kubelet[2758]: E1027 08:31:53.081487 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.081513 kubelet[2758]: W1027 08:31:53.081502 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.081577 kubelet[2758]: E1027 08:31:53.081527 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.081686 kubelet[2758]: E1027 08:31:53.081663 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.081686 kubelet[2758]: W1027 08:31:53.081673 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.081757 kubelet[2758]: E1027 08:31:53.081696 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.081838 kubelet[2758]: E1027 08:31:53.081817 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.081838 kubelet[2758]: W1027 08:31:53.081827 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.081905 kubelet[2758]: E1027 08:31:53.081839 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.082068 kubelet[2758]: E1027 08:31:53.082040 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.082068 kubelet[2758]: W1027 08:31:53.082052 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.082137 kubelet[2758]: E1027 08:31:53.082093 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:53.082450 kubelet[2758]: E1027 08:31:53.082432 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.082450 kubelet[2758]: W1027 08:31:53.082443 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.082514 kubelet[2758]: E1027 08:31:53.082464 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.082703 kubelet[2758]: E1027 08:31:53.082683 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.082703 kubelet[2758]: W1027 08:31:53.082694 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.082753 kubelet[2758]: E1027 08:31:53.082707 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.083034 kubelet[2758]: E1027 08:31:53.083017 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.083034 kubelet[2758]: W1027 08:31:53.083029 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.083096 kubelet[2758]: E1027 08:31:53.083042 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.083357 kubelet[2758]: E1027 08:31:53.083340 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.083357 kubelet[2758]: W1027 08:31:53.083351 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.083428 kubelet[2758]: E1027 08:31:53.083392 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.083579 kubelet[2758]: E1027 08:31:53.083562 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.083579 kubelet[2758]: W1027 08:31:53.083574 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.083679 kubelet[2758]: E1027 08:31:53.083656 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:53.083791 kubelet[2758]: E1027 08:31:53.083773 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.083791 kubelet[2758]: W1027 08:31:53.083785 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.083854 kubelet[2758]: E1027 08:31:53.083803 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.084004 kubelet[2758]: E1027 08:31:53.083983 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.084042 kubelet[2758]: W1027 08:31:53.084014 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.084042 kubelet[2758]: E1027 08:31:53.084028 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:53.091315 kubelet[2758]: E1027 08:31:53.091292 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:53.091315 kubelet[2758]: W1027 08:31:53.091306 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:53.091315 kubelet[2758]: E1027 08:31:53.091318 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:55.043497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount759293556.mount: Deactivated successfully. 
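Alongside the FlexVolume noise, the log also shows recurring dns.go:153 "Nameserver limits exceeded" entries: the kubelet applies at most three nameservers from the node's resolv.conf and drops any beyond that, which is why only "1.1.1.1 1.0.0.1 8.8.8.8" is reported as the applied line. The snippet below is a small, hypothetical illustration of that truncation behaviour (resolvconf-limit.go is an invented name; the code is not taken from the kubelet).

// resolvconf-limit.go: hypothetical illustration of the three-nameserver cap
// behind the "Nameserver limits exceeded" warnings in this log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // the limit the kubelet enforces on resolv.conf entries

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded; applying only: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Printf("applied nameservers: %s\n", strings.Join(servers, " "))
}

Run against a resolv.conf that lists four or more nameserver lines, it keeps only the first three, mirroring the warning seen in the surrounding kubelet entries.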
Oct 27 08:31:55.254073 kubelet[2758]: E1027 08:31:55.253973 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxbd2" podUID="7910370a-d2b9-4ee0-8c0a-b904aff5f65a" Oct 27 08:31:55.642577 containerd[1607]: time="2025-10-27T08:31:55.642523436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:55.643233 containerd[1607]: time="2025-10-27T08:31:55.643207757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 27 08:31:55.644297 containerd[1607]: time="2025-10-27T08:31:55.644271313Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:55.646231 containerd[1607]: time="2025-10-27T08:31:55.646206308Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:55.646771 containerd[1607]: time="2025-10-27T08:31:55.646743266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.606885621s" Oct 27 08:31:55.646810 containerd[1607]: time="2025-10-27T08:31:55.646772380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 27 08:31:55.647575 containerd[1607]: time="2025-10-27T08:31:55.647525409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 27 08:31:55.655530 containerd[1607]: time="2025-10-27T08:31:55.655380694Z" level=info msg="CreateContainer within sandbox \"7bfaf7f4e75618df5edd324a08341a59960236eb418cfe90771e5d3bac9afa38\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 27 08:31:55.662417 containerd[1607]: time="2025-10-27T08:31:55.662365351Z" level=info msg="Container 33d1881ad3ce14c7bd593550b7ab0be580a171b863ffa166ae354e6b99f07336: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:31:55.669886 containerd[1607]: time="2025-10-27T08:31:55.669841913Z" level=info msg="CreateContainer within sandbox \"7bfaf7f4e75618df5edd324a08341a59960236eb418cfe90771e5d3bac9afa38\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"33d1881ad3ce14c7bd593550b7ab0be580a171b863ffa166ae354e6b99f07336\"" Oct 27 08:31:55.670270 containerd[1607]: time="2025-10-27T08:31:55.670209386Z" level=info msg="StartContainer for \"33d1881ad3ce14c7bd593550b7ab0be580a171b863ffa166ae354e6b99f07336\"" Oct 27 08:31:55.671247 containerd[1607]: time="2025-10-27T08:31:55.671217739Z" level=info msg="connecting to shim 33d1881ad3ce14c7bd593550b7ab0be580a171b863ffa166ae354e6b99f07336" address="unix:///run/containerd/s/02460e279e5eeae101ee0987683f1597cfda5f04a6bccb51e8bd7168ee0e8a2b" protocol=ttrpc version=3 Oct 27 08:31:55.692536 systemd[1]: Started 
cri-containerd-33d1881ad3ce14c7bd593550b7ab0be580a171b863ffa166ae354e6b99f07336.scope - libcontainer container 33d1881ad3ce14c7bd593550b7ab0be580a171b863ffa166ae354e6b99f07336. Oct 27 08:31:55.747594 containerd[1607]: time="2025-10-27T08:31:55.747557297Z" level=info msg="StartContainer for \"33d1881ad3ce14c7bd593550b7ab0be580a171b863ffa166ae354e6b99f07336\" returns successfully" Oct 27 08:31:56.341933 kubelet[2758]: E1027 08:31:56.341900 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:56.352617 kubelet[2758]: I1027 08:31:56.352553 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-698644ff65-v6btr" podStartSLOduration=1.744749398 podStartE2EDuration="4.35253478s" podCreationTimestamp="2025-10-27 08:31:52 +0000 UTC" firstStartedPulling="2025-10-27 08:31:53.039646573 +0000 UTC m=+21.903359779" lastFinishedPulling="2025-10-27 08:31:55.647431965 +0000 UTC m=+24.511145161" observedRunningTime="2025-10-27 08:31:56.352514883 +0000 UTC m=+25.216228109" watchObservedRunningTime="2025-10-27 08:31:56.35253478 +0000 UTC m=+25.216247986" Oct 27 08:31:56.380687 kubelet[2758]: E1027 08:31:56.380643 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.380687 kubelet[2758]: W1027 08:31:56.380669 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.380687 kubelet[2758]: E1027 08:31:56.380708 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.380948 kubelet[2758]: E1027 08:31:56.380930 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.381000 kubelet[2758]: W1027 08:31:56.380940 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.381000 kubelet[2758]: E1027 08:31:56.380960 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.381136 kubelet[2758]: E1027 08:31:56.381113 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.381136 kubelet[2758]: W1027 08:31:56.381128 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.381136 kubelet[2758]: E1027 08:31:56.381136 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:56.381390 kubelet[2758]: E1027 08:31:56.381356 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.381390 kubelet[2758]: W1027 08:31:56.381371 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.381390 kubelet[2758]: E1027 08:31:56.381381 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.381604 kubelet[2758]: E1027 08:31:56.381576 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.381604 kubelet[2758]: W1027 08:31:56.381588 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.381604 kubelet[2758]: E1027 08:31:56.381597 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.381831 kubelet[2758]: E1027 08:31:56.381773 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.381831 kubelet[2758]: W1027 08:31:56.381780 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.381831 kubelet[2758]: E1027 08:31:56.381787 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.381960 kubelet[2758]: E1027 08:31:56.381948 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.381960 kubelet[2758]: W1027 08:31:56.381956 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.382006 kubelet[2758]: E1027 08:31:56.381963 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.382152 kubelet[2758]: E1027 08:31:56.382133 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.382152 kubelet[2758]: W1027 08:31:56.382141 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.382152 kubelet[2758]: E1027 08:31:56.382149 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:56.382565 kubelet[2758]: E1027 08:31:56.382539 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.382565 kubelet[2758]: W1027 08:31:56.382563 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.382640 kubelet[2758]: E1027 08:31:56.382586 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.382803 kubelet[2758]: E1027 08:31:56.382785 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.382803 kubelet[2758]: W1027 08:31:56.382799 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.382860 kubelet[2758]: E1027 08:31:56.382809 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.383028 kubelet[2758]: E1027 08:31:56.383010 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.383028 kubelet[2758]: W1027 08:31:56.383023 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.383087 kubelet[2758]: E1027 08:31:56.383034 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.383247 kubelet[2758]: E1027 08:31:56.383230 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.383247 kubelet[2758]: W1027 08:31:56.383242 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.383247 kubelet[2758]: E1027 08:31:56.383252 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.383482 kubelet[2758]: E1027 08:31:56.383466 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.383482 kubelet[2758]: W1027 08:31:56.383478 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.383534 kubelet[2758]: E1027 08:31:56.383489 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:56.383703 kubelet[2758]: E1027 08:31:56.383675 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.383703 kubelet[2758]: W1027 08:31:56.383689 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.383784 kubelet[2758]: E1027 08:31:56.383709 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.383925 kubelet[2758]: E1027 08:31:56.383908 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.383925 kubelet[2758]: W1027 08:31:56.383921 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.383974 kubelet[2758]: E1027 08:31:56.383931 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.399427 kubelet[2758]: E1027 08:31:56.399372 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.399427 kubelet[2758]: W1027 08:31:56.399393 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.399427 kubelet[2758]: E1027 08:31:56.399435 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.399689 kubelet[2758]: E1027 08:31:56.399662 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.399689 kubelet[2758]: W1027 08:31:56.399676 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.399766 kubelet[2758]: E1027 08:31:56.399714 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.399940 kubelet[2758]: E1027 08:31:56.399929 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.399940 kubelet[2758]: W1027 08:31:56.399938 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.400003 kubelet[2758]: E1027 08:31:56.399951 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:56.400176 kubelet[2758]: E1027 08:31:56.400158 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.400176 kubelet[2758]: W1027 08:31:56.400167 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.400227 kubelet[2758]: E1027 08:31:56.400183 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.400378 kubelet[2758]: E1027 08:31:56.400367 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.400378 kubelet[2758]: W1027 08:31:56.400375 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.400452 kubelet[2758]: E1027 08:31:56.400388 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.400601 kubelet[2758]: E1027 08:31:56.400587 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.400601 kubelet[2758]: W1027 08:31:56.400596 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.400651 kubelet[2758]: E1027 08:31:56.400610 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.400830 kubelet[2758]: E1027 08:31:56.400818 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.400830 kubelet[2758]: W1027 08:31:56.400827 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.400891 kubelet[2758]: E1027 08:31:56.400867 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.401019 kubelet[2758]: E1027 08:31:56.401007 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.401019 kubelet[2758]: W1027 08:31:56.401016 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.401081 kubelet[2758]: E1027 08:31:56.401055 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:56.401230 kubelet[2758]: E1027 08:31:56.401206 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.401230 kubelet[2758]: W1027 08:31:56.401220 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.401230 kubelet[2758]: E1027 08:31:56.401236 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.401433 kubelet[2758]: E1027 08:31:56.401397 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.401433 kubelet[2758]: W1027 08:31:56.401426 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.401507 kubelet[2758]: E1027 08:31:56.401439 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.401593 kubelet[2758]: E1027 08:31:56.401577 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.401593 kubelet[2758]: W1027 08:31:56.401586 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.401660 kubelet[2758]: E1027 08:31:56.401599 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.401789 kubelet[2758]: E1027 08:31:56.401772 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.401789 kubelet[2758]: W1027 08:31:56.401783 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.401853 kubelet[2758]: E1027 08:31:56.401797 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.402104 kubelet[2758]: E1027 08:31:56.402075 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.402104 kubelet[2758]: W1027 08:31:56.402089 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.402201 kubelet[2758]: E1027 08:31:56.402111 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:56.402299 kubelet[2758]: E1027 08:31:56.402280 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.402299 kubelet[2758]: W1027 08:31:56.402292 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.402358 kubelet[2758]: E1027 08:31:56.402305 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.402483 kubelet[2758]: E1027 08:31:56.402466 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.402483 kubelet[2758]: W1027 08:31:56.402476 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.402559 kubelet[2758]: E1027 08:31:56.402489 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.402645 kubelet[2758]: E1027 08:31:56.402629 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.402645 kubelet[2758]: W1027 08:31:56.402639 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.402718 kubelet[2758]: E1027 08:31:56.402649 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.402867 kubelet[2758]: E1027 08:31:56.402850 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.402867 kubelet[2758]: W1027 08:31:56.402860 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.402867 kubelet[2758]: E1027 08:31:56.402868 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:56.403146 kubelet[2758]: E1027 08:31:56.403130 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:56.403146 kubelet[2758]: W1027 08:31:56.403141 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:56.403194 kubelet[2758]: E1027 08:31:56.403149 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:57.257853 kubelet[2758]: E1027 08:31:57.257793 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxbd2" podUID="7910370a-d2b9-4ee0-8c0a-b904aff5f65a" Oct 27 08:31:57.343594 kubelet[2758]: I1027 08:31:57.343546 2758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 08:31:57.344092 kubelet[2758]: E1027 08:31:57.343885 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:57.388853 kubelet[2758]: E1027 08:31:57.388806 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.388853 kubelet[2758]: W1027 08:31:57.388831 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.388853 kubelet[2758]: E1027 08:31:57.388856 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.389063 kubelet[2758]: E1027 08:31:57.389048 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.389063 kubelet[2758]: W1027 08:31:57.389059 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.389120 kubelet[2758]: E1027 08:31:57.389068 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.389240 kubelet[2758]: E1027 08:31:57.389219 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.389240 kubelet[2758]: W1027 08:31:57.389231 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.389286 kubelet[2758]: E1027 08:31:57.389241 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.389450 kubelet[2758]: E1027 08:31:57.389429 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.389450 kubelet[2758]: W1027 08:31:57.389443 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.389502 kubelet[2758]: E1027 08:31:57.389453 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:57.389628 kubelet[2758]: E1027 08:31:57.389613 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.389628 kubelet[2758]: W1027 08:31:57.389623 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.389683 kubelet[2758]: E1027 08:31:57.389632 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.389815 kubelet[2758]: E1027 08:31:57.389800 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.389815 kubelet[2758]: W1027 08:31:57.389811 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.389878 kubelet[2758]: E1027 08:31:57.389822 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.390111 kubelet[2758]: E1027 08:31:57.390089 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.390111 kubelet[2758]: W1027 08:31:57.390101 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.390156 kubelet[2758]: E1027 08:31:57.390110 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.390284 kubelet[2758]: E1027 08:31:57.390270 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.390284 kubelet[2758]: W1027 08:31:57.390280 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.390339 kubelet[2758]: E1027 08:31:57.390290 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.390491 kubelet[2758]: E1027 08:31:57.390477 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.390491 kubelet[2758]: W1027 08:31:57.390487 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.390540 kubelet[2758]: E1027 08:31:57.390496 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:57.390689 kubelet[2758]: E1027 08:31:57.390675 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.390689 kubelet[2758]: W1027 08:31:57.390685 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.390748 kubelet[2758]: E1027 08:31:57.390694 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.390886 kubelet[2758]: E1027 08:31:57.390868 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.390886 kubelet[2758]: W1027 08:31:57.390880 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.390928 kubelet[2758]: E1027 08:31:57.390890 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.391087 kubelet[2758]: E1027 08:31:57.391073 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.391087 kubelet[2758]: W1027 08:31:57.391084 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.391135 kubelet[2758]: E1027 08:31:57.391093 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.391291 kubelet[2758]: E1027 08:31:57.391277 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.391291 kubelet[2758]: W1027 08:31:57.391287 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.391344 kubelet[2758]: E1027 08:31:57.391296 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.391505 kubelet[2758]: E1027 08:31:57.391491 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.391505 kubelet[2758]: W1027 08:31:57.391500 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.391627 kubelet[2758]: E1027 08:31:57.391510 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:57.391690 kubelet[2758]: E1027 08:31:57.391676 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.391690 kubelet[2758]: W1027 08:31:57.391688 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.391740 kubelet[2758]: E1027 08:31:57.391698 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.406321 kubelet[2758]: E1027 08:31:57.406287 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.406372 kubelet[2758]: W1027 08:31:57.406320 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.406372 kubelet[2758]: E1027 08:31:57.406349 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.406654 kubelet[2758]: E1027 08:31:57.406634 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.406654 kubelet[2758]: W1027 08:31:57.406649 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.406705 kubelet[2758]: E1027 08:31:57.406668 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.407037 kubelet[2758]: E1027 08:31:57.406937 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.407037 kubelet[2758]: W1027 08:31:57.406951 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.407037 kubelet[2758]: E1027 08:31:57.406969 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.407348 kubelet[2758]: E1027 08:31:57.407325 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.407348 kubelet[2758]: W1027 08:31:57.407338 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.407425 kubelet[2758]: E1027 08:31:57.407350 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:57.407579 kubelet[2758]: E1027 08:31:57.407568 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.407579 kubelet[2758]: W1027 08:31:57.407577 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.407638 kubelet[2758]: E1027 08:31:57.407590 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.407780 kubelet[2758]: E1027 08:31:57.407769 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.407780 kubelet[2758]: W1027 08:31:57.407777 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.407826 kubelet[2758]: E1027 08:31:57.407788 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.407950 kubelet[2758]: E1027 08:31:57.407937 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.407974 kubelet[2758]: W1027 08:31:57.407948 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.407974 kubelet[2758]: E1027 08:31:57.407963 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.408149 kubelet[2758]: E1027 08:31:57.408139 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.408149 kubelet[2758]: W1027 08:31:57.408147 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.408188 kubelet[2758]: E1027 08:31:57.408158 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.408315 kubelet[2758]: E1027 08:31:57.408303 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.408315 kubelet[2758]: W1027 08:31:57.408312 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.408374 kubelet[2758]: E1027 08:31:57.408332 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:57.408473 kubelet[2758]: E1027 08:31:57.408461 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.408473 kubelet[2758]: W1027 08:31:57.408470 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.408555 kubelet[2758]: E1027 08:31:57.408519 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.408638 kubelet[2758]: E1027 08:31:57.408628 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.408673 kubelet[2758]: W1027 08:31:57.408662 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.408710 kubelet[2758]: E1027 08:31:57.408678 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.409053 kubelet[2758]: E1027 08:31:57.409031 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.409053 kubelet[2758]: W1027 08:31:57.409049 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.409106 kubelet[2758]: E1027 08:31:57.409072 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.409270 kubelet[2758]: E1027 08:31:57.409257 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.409270 kubelet[2758]: W1027 08:31:57.409268 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.409333 kubelet[2758]: E1027 08:31:57.409281 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.409452 kubelet[2758]: E1027 08:31:57.409440 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.409452 kubelet[2758]: W1027 08:31:57.409450 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.409506 kubelet[2758]: E1027 08:31:57.409464 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:57.409617 kubelet[2758]: E1027 08:31:57.409606 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.409617 kubelet[2758]: W1027 08:31:57.409614 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.409675 kubelet[2758]: E1027 08:31:57.409626 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.409794 kubelet[2758]: E1027 08:31:57.409783 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.409794 kubelet[2758]: W1027 08:31:57.409791 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.409837 kubelet[2758]: E1027 08:31:57.409805 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.410124 kubelet[2758]: E1027 08:31:57.410095 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.410124 kubelet[2758]: W1027 08:31:57.410112 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.410175 kubelet[2758]: E1027 08:31:57.410130 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:31:57.410326 kubelet[2758]: E1027 08:31:57.410308 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:31:57.410326 kubelet[2758]: W1027 08:31:57.410319 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:31:57.410369 kubelet[2758]: E1027 08:31:57.410327 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:31:57.752385 containerd[1607]: time="2025-10-27T08:31:57.752235634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:57.753369 containerd[1607]: time="2025-10-27T08:31:57.753341011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 27 08:31:57.755034 containerd[1607]: time="2025-10-27T08:31:57.754926119Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:57.757364 containerd[1607]: time="2025-10-27T08:31:57.757299022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:31:57.757998 containerd[1607]: time="2025-10-27T08:31:57.757951406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.110400399s" Oct 27 08:31:57.757998 containerd[1607]: time="2025-10-27T08:31:57.757983546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 27 08:31:57.760018 containerd[1607]: time="2025-10-27T08:31:57.759965862Z" level=info msg="CreateContainer within sandbox \"60236c25ccfe0dc482eda4ce080e6a66618b49dd6e28fc0a93706885b7dd12a4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 27 08:31:57.769510 containerd[1607]: time="2025-10-27T08:31:57.769456416Z" level=info msg="Container d300ad89014473fd89b66fed511cf9d7a7aa4e8c12ab16c4265d8ccff18b8b9f: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:31:57.780735 containerd[1607]: time="2025-10-27T08:31:57.780656007Z" level=info msg="CreateContainer within sandbox \"60236c25ccfe0dc482eda4ce080e6a66618b49dd6e28fc0a93706885b7dd12a4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d300ad89014473fd89b66fed511cf9d7a7aa4e8c12ab16c4265d8ccff18b8b9f\"" Oct 27 08:31:57.782070 containerd[1607]: time="2025-10-27T08:31:57.782027178Z" level=info msg="StartContainer for \"d300ad89014473fd89b66fed511cf9d7a7aa4e8c12ab16c4265d8ccff18b8b9f\"" Oct 27 08:31:57.784419 containerd[1607]: time="2025-10-27T08:31:57.784381528Z" level=info msg="connecting to shim d300ad89014473fd89b66fed511cf9d7a7aa4e8c12ab16c4265d8ccff18b8b9f" address="unix:///run/containerd/s/c14d9f2ccbfeae64f8bf4e16564fc5e7af9f8f08c2ba0f8e4035fb1a5715a2a8" protocol=ttrpc version=3 Oct 27 08:31:57.806596 systemd[1]: Started cri-containerd-d300ad89014473fd89b66fed511cf9d7a7aa4e8c12ab16c4265d8ccff18b8b9f.scope - libcontainer container d300ad89014473fd89b66fed511cf9d7a7aa4e8c12ab16c4265d8ccff18b8b9f. 
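The repeated driver-call failures above come from the kubelet probing the FlexVolume plugin directory before the flexvol-driver init container (the pod2daemon-flexvol image pulled here) has installed the uds binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: the probe finds no executable, receives empty output, and the JSON decode fails with "unexpected end of JSON input". Below is a minimal, illustrative Go sketch of the init handshake a FlexVolume driver is expected to answer on stdout; it is not the real uds driver, only the standard FlexVolume JSON contract.

// Minimal sketch of a FlexVolume driver's "init" handshake, for illustration only.
// The kubelet runs the driver binary with "init" and expects a JSON status on stdout;
// an empty reply is exactly what produces the "unexpected end of JSON input" errors above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any operation this sketch does not implement is reported as unsupported.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}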
Oct 27 08:31:57.850095 containerd[1607]: time="2025-10-27T08:31:57.850045736Z" level=info msg="StartContainer for \"d300ad89014473fd89b66fed511cf9d7a7aa4e8c12ab16c4265d8ccff18b8b9f\" returns successfully" Oct 27 08:31:57.859742 systemd[1]: cri-containerd-d300ad89014473fd89b66fed511cf9d7a7aa4e8c12ab16c4265d8ccff18b8b9f.scope: Deactivated successfully. Oct 27 08:31:57.862118 containerd[1607]: time="2025-10-27T08:31:57.862075552Z" level=info msg="received exit event container_id:\"d300ad89014473fd89b66fed511cf9d7a7aa4e8c12ab16c4265d8ccff18b8b9f\" id:\"d300ad89014473fd89b66fed511cf9d7a7aa4e8c12ab16c4265d8ccff18b8b9f\" pid:3496 exited_at:{seconds:1761553917 nanos:861638048}" Oct 27 08:31:57.862216 containerd[1607]: time="2025-10-27T08:31:57.862076103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d300ad89014473fd89b66fed511cf9d7a7aa4e8c12ab16c4265d8ccff18b8b9f\" id:\"d300ad89014473fd89b66fed511cf9d7a7aa4e8c12ab16c4265d8ccff18b8b9f\" pid:3496 exited_at:{seconds:1761553917 nanos:861638048}" Oct 27 08:31:57.890978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d300ad89014473fd89b66fed511cf9d7a7aa4e8c12ab16c4265d8ccff18b8b9f-rootfs.mount: Deactivated successfully. Oct 27 08:31:58.466443 kubelet[2758]: E1027 08:31:58.348321 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:59.254438 kubelet[2758]: E1027 08:31:59.254373 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxbd2" podUID="7910370a-d2b9-4ee0-8c0a-b904aff5f65a" Oct 27 08:31:59.351755 kubelet[2758]: E1027 08:31:59.351716 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:31:59.352752 containerd[1607]: time="2025-10-27T08:31:59.352615536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 27 08:32:01.254630 kubelet[2758]: E1027 08:32:01.254577 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxbd2" podUID="7910370a-d2b9-4ee0-8c0a-b904aff5f65a" Oct 27 08:32:03.255956 kubelet[2758]: E1027 08:32:03.255552 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxbd2" podUID="7910370a-d2b9-4ee0-8c0a-b904aff5f65a" Oct 27 08:32:03.736831 containerd[1607]: time="2025-10-27T08:32:03.736766032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:32:03.737533 containerd[1607]: time="2025-10-27T08:32:03.737503687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 27 08:32:03.738663 containerd[1607]: time="2025-10-27T08:32:03.738632222Z" level=info msg="ImageCreate event 
name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:32:03.740774 containerd[1607]: time="2025-10-27T08:32:03.740710017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:32:03.741464 containerd[1607]: time="2025-10-27T08:32:03.741387631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.388716651s" Oct 27 08:32:03.741464 containerd[1607]: time="2025-10-27T08:32:03.741463041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 27 08:32:03.743980 containerd[1607]: time="2025-10-27T08:32:03.743910465Z" level=info msg="CreateContainer within sandbox \"60236c25ccfe0dc482eda4ce080e6a66618b49dd6e28fc0a93706885b7dd12a4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 27 08:32:03.763404 containerd[1607]: time="2025-10-27T08:32:03.763316795Z" level=info msg="Container f4a376af1335219821502d864cacecfac14d6d2c486531076ce8d92edfe1ad66: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:32:03.773736 containerd[1607]: time="2025-10-27T08:32:03.773661918Z" level=info msg="CreateContainer within sandbox \"60236c25ccfe0dc482eda4ce080e6a66618b49dd6e28fc0a93706885b7dd12a4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f4a376af1335219821502d864cacecfac14d6d2c486531076ce8d92edfe1ad66\"" Oct 27 08:32:03.774350 containerd[1607]: time="2025-10-27T08:32:03.774297222Z" level=info msg="StartContainer for \"f4a376af1335219821502d864cacecfac14d6d2c486531076ce8d92edfe1ad66\"" Oct 27 08:32:03.779455 containerd[1607]: time="2025-10-27T08:32:03.777611193Z" level=info msg="connecting to shim f4a376af1335219821502d864cacecfac14d6d2c486531076ce8d92edfe1ad66" address="unix:///run/containerd/s/c14d9f2ccbfeae64f8bf4e16564fc5e7af9f8f08c2ba0f8e4035fb1a5715a2a8" protocol=ttrpc version=3 Oct 27 08:32:03.802568 systemd[1]: Started cri-containerd-f4a376af1335219821502d864cacecfac14d6d2c486531076ce8d92edfe1ad66.scope - libcontainer container f4a376af1335219821502d864cacecfac14d6d2c486531076ce8d92edfe1ad66. 
Oct 27 08:32:03.856973 containerd[1607]: time="2025-10-27T08:32:03.856915302Z" level=info msg="StartContainer for \"f4a376af1335219821502d864cacecfac14d6d2c486531076ce8d92edfe1ad66\" returns successfully" Oct 27 08:32:04.362871 kubelet[2758]: E1027 08:32:04.362822 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:05.254544 kubelet[2758]: E1027 08:32:05.254476 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxbd2" podUID="7910370a-d2b9-4ee0-8c0a-b904aff5f65a" Oct 27 08:32:05.364270 kubelet[2758]: E1027 08:32:05.364229 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:05.631053 systemd[1]: cri-containerd-f4a376af1335219821502d864cacecfac14d6d2c486531076ce8d92edfe1ad66.scope: Deactivated successfully. Oct 27 08:32:05.631650 systemd[1]: cri-containerd-f4a376af1335219821502d864cacecfac14d6d2c486531076ce8d92edfe1ad66.scope: Consumed 623ms CPU time, 179M memory peak, 3.5M read from disk, 171.3M written to disk. Oct 27 08:32:05.662120 containerd[1607]: time="2025-10-27T08:32:05.662042947Z" level=info msg="received exit event container_id:\"f4a376af1335219821502d864cacecfac14d6d2c486531076ce8d92edfe1ad66\" id:\"f4a376af1335219821502d864cacecfac14d6d2c486531076ce8d92edfe1ad66\" pid:3556 exited_at:{seconds:1761553925 nanos:631307773}" Oct 27 08:32:05.689253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4a376af1335219821502d864cacecfac14d6d2c486531076ce8d92edfe1ad66-rootfs.mount: Deactivated successfully. Oct 27 08:32:05.717350 containerd[1607]: time="2025-10-27T08:32:05.717283711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4a376af1335219821502d864cacecfac14d6d2c486531076ce8d92edfe1ad66\" id:\"f4a376af1335219821502d864cacecfac14d6d2c486531076ce8d92edfe1ad66\" pid:3556 exited_at:{seconds:1761553925 nanos:631307773}" Oct 27 08:32:05.743806 kubelet[2758]: I1027 08:32:05.743657 2758 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 27 08:32:06.145735 systemd[1]: Created slice kubepods-besteffort-pod155c9352_5d7b_4c12_ac76_f7ec57d8dc42.slice - libcontainer container kubepods-besteffort-pod155c9352_5d7b_4c12_ac76_f7ec57d8dc42.slice. Oct 27 08:32:06.153204 systemd[1]: Created slice kubepods-besteffort-pod29f6d368_bb0a_4633_8401_1b96e3d04052.slice - libcontainer container kubepods-besteffort-pod29f6d368_bb0a_4633_8401_1b96e3d04052.slice. Oct 27 08:32:06.160574 systemd[1]: Created slice kubepods-burstable-pod0ea1d490_08d4_4317_a16f_2f9d40de2cf9.slice - libcontainer container kubepods-burstable-pod0ea1d490_08d4_4317_a16f_2f9d40de2cf9.slice. Oct 27 08:32:06.167475 systemd[1]: Created slice kubepods-burstable-pod6a9ec93a_0de7_44da_a4da_50f958026c65.slice - libcontainer container kubepods-burstable-pod6a9ec93a_0de7_44da_a4da_50f958026c65.slice. Oct 27 08:32:06.172609 systemd[1]: Created slice kubepods-besteffort-poddbb3c469_8dbc_4871_a711_e7befd27ac29.slice - libcontainer container kubepods-besteffort-poddbb3c469_8dbc_4871_a711_e7befd27ac29.slice. 
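The recurring "Nameserver limits exceeded" messages reflect the kubelet's cap of three nameservers in a pod's resolv.conf: the node's resolver list has more entries than that, so only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are applied and the rest are dropped with a warning. A hypothetical Go illustration of that truncation follows; the fourth address below is an invented placeholder, since the omitted entries are not shown in the log.

// Hypothetical illustration (not kubelet code) of the truncation behind the
// "Nameserver limits exceeded" warnings above.
package main

import "fmt"

// applyNameserverLimit keeps at most three nameservers, mirroring the behaviour
// reported in the log, and drops the rest.
func applyNameserverLimit(nameservers []string) []string {
	const maxDNSNameservers = 3
	if len(nameservers) > maxDNSNameservers {
		return nameservers[:maxDNSNameservers]
	}
	return nameservers
}

func main() {
	// "192.0.2.1" is a documentation address standing in for whatever extra entry was dropped.
	fmt.Println(applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.1"}))
	// Output: [1.1.1.1 1.0.0.1 8.8.8.8] — the applied nameserver line from the log.
}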
Oct 27 08:32:06.179882 systemd[1]: Created slice kubepods-besteffort-pod4ba4a847_9e38_4bf2_a22e_46c61e29a54b.slice - libcontainer container kubepods-besteffort-pod4ba4a847_9e38_4bf2_a22e_46c61e29a54b.slice. Oct 27 08:32:06.184818 systemd[1]: Created slice kubepods-besteffort-pod2968bd6d_b552_4563_80fc_dc58528087c7.slice - libcontainer container kubepods-besteffort-pod2968bd6d_b552_4563_80fc_dc58528087c7.slice. Oct 27 08:32:06.221432 kubelet[2758]: I1027 08:32:06.221354 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ea1d490-08d4-4317-a16f-2f9d40de2cf9-config-volume\") pod \"coredns-668d6bf9bc-qxbwc\" (UID: \"0ea1d490-08d4-4317-a16f-2f9d40de2cf9\") " pod="kube-system/coredns-668d6bf9bc-qxbwc" Oct 27 08:32:06.221432 kubelet[2758]: I1027 08:32:06.221397 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-274n8\" (UniqueName: \"kubernetes.io/projected/0ea1d490-08d4-4317-a16f-2f9d40de2cf9-kube-api-access-274n8\") pod \"coredns-668d6bf9bc-qxbwc\" (UID: \"0ea1d490-08d4-4317-a16f-2f9d40de2cf9\") " pod="kube-system/coredns-668d6bf9bc-qxbwc" Oct 27 08:32:06.221432 kubelet[2758]: I1027 08:32:06.221441 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4ba4a847-9e38-4bf2-a22e-46c61e29a54b-goldmane-key-pair\") pod \"goldmane-666569f655-pds2z\" (UID: \"4ba4a847-9e38-4bf2-a22e-46c61e29a54b\") " pod="calico-system/goldmane-666569f655-pds2z" Oct 27 08:32:06.221679 kubelet[2758]: I1027 08:32:06.221464 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r84d\" (UniqueName: \"kubernetes.io/projected/4ba4a847-9e38-4bf2-a22e-46c61e29a54b-kube-api-access-9r84d\") pod \"goldmane-666569f655-pds2z\" (UID: \"4ba4a847-9e38-4bf2-a22e-46c61e29a54b\") " pod="calico-system/goldmane-666569f655-pds2z" Oct 27 08:32:06.221679 kubelet[2758]: I1027 08:32:06.221578 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ba4a847-9e38-4bf2-a22e-46c61e29a54b-config\") pod \"goldmane-666569f655-pds2z\" (UID: \"4ba4a847-9e38-4bf2-a22e-46c61e29a54b\") " pod="calico-system/goldmane-666569f655-pds2z" Oct 27 08:32:06.221679 kubelet[2758]: I1027 08:32:06.221656 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/155c9352-5d7b-4c12-ac76-f7ec57d8dc42-calico-apiserver-certs\") pod \"calico-apiserver-7948764c44-vhjsc\" (UID: \"155c9352-5d7b-4c12-ac76-f7ec57d8dc42\") " pod="calico-apiserver/calico-apiserver-7948764c44-vhjsc" Oct 27 08:32:06.221784 kubelet[2758]: I1027 08:32:06.221686 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ba4a847-9e38-4bf2-a22e-46c61e29a54b-goldmane-ca-bundle\") pod \"goldmane-666569f655-pds2z\" (UID: \"4ba4a847-9e38-4bf2-a22e-46c61e29a54b\") " pod="calico-system/goldmane-666569f655-pds2z" Oct 27 08:32:06.221784 kubelet[2758]: I1027 08:32:06.221702 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/2968bd6d-b552-4563-80fc-dc58528087c7-whisker-ca-bundle\") pod \"whisker-c8ff4b4c7-5jv54\" (UID: \"2968bd6d-b552-4563-80fc-dc58528087c7\") " pod="calico-system/whisker-c8ff4b4c7-5jv54" Oct 27 08:32:06.221784 kubelet[2758]: I1027 08:32:06.221721 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dbb3c469-8dbc-4871-a711-e7befd27ac29-calico-apiserver-certs\") pod \"calico-apiserver-7948764c44-6chns\" (UID: \"dbb3c469-8dbc-4871-a711-e7befd27ac29\") " pod="calico-apiserver/calico-apiserver-7948764c44-6chns" Oct 27 08:32:06.221784 kubelet[2758]: I1027 08:32:06.221736 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfhw9\" (UniqueName: \"kubernetes.io/projected/dbb3c469-8dbc-4871-a711-e7befd27ac29-kube-api-access-qfhw9\") pod \"calico-apiserver-7948764c44-6chns\" (UID: \"dbb3c469-8dbc-4871-a711-e7befd27ac29\") " pod="calico-apiserver/calico-apiserver-7948764c44-6chns" Oct 27 08:32:06.221784 kubelet[2758]: I1027 08:32:06.221755 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2968bd6d-b552-4563-80fc-dc58528087c7-whisker-backend-key-pair\") pod \"whisker-c8ff4b4c7-5jv54\" (UID: \"2968bd6d-b552-4563-80fc-dc58528087c7\") " pod="calico-system/whisker-c8ff4b4c7-5jv54" Oct 27 08:32:06.221957 kubelet[2758]: I1027 08:32:06.221774 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29f6d368-bb0a-4633-8401-1b96e3d04052-tigera-ca-bundle\") pod \"calico-kube-controllers-85866cf9fb-2xx5v\" (UID: \"29f6d368-bb0a-4633-8401-1b96e3d04052\") " pod="calico-system/calico-kube-controllers-85866cf9fb-2xx5v" Oct 27 08:32:06.221957 kubelet[2758]: I1027 08:32:06.221798 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7njj\" (UniqueName: \"kubernetes.io/projected/155c9352-5d7b-4c12-ac76-f7ec57d8dc42-kube-api-access-k7njj\") pod \"calico-apiserver-7948764c44-vhjsc\" (UID: \"155c9352-5d7b-4c12-ac76-f7ec57d8dc42\") " pod="calico-apiserver/calico-apiserver-7948764c44-vhjsc" Oct 27 08:32:06.221957 kubelet[2758]: I1027 08:32:06.221813 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vzb5\" (UniqueName: \"kubernetes.io/projected/29f6d368-bb0a-4633-8401-1b96e3d04052-kube-api-access-8vzb5\") pod \"calico-kube-controllers-85866cf9fb-2xx5v\" (UID: \"29f6d368-bb0a-4633-8401-1b96e3d04052\") " pod="calico-system/calico-kube-controllers-85866cf9fb-2xx5v" Oct 27 08:32:06.221957 kubelet[2758]: I1027 08:32:06.221828 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mgd8\" (UniqueName: \"kubernetes.io/projected/2968bd6d-b552-4563-80fc-dc58528087c7-kube-api-access-5mgd8\") pod \"whisker-c8ff4b4c7-5jv54\" (UID: \"2968bd6d-b552-4563-80fc-dc58528087c7\") " pod="calico-system/whisker-c8ff4b4c7-5jv54" Oct 27 08:32:06.324454 kubelet[2758]: I1027 08:32:06.322977 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7px9\" (UniqueName: \"kubernetes.io/projected/6a9ec93a-0de7-44da-a4da-50f958026c65-kube-api-access-q7px9\") pod \"coredns-668d6bf9bc-2pgd2\" 
(UID: \"6a9ec93a-0de7-44da-a4da-50f958026c65\") " pod="kube-system/coredns-668d6bf9bc-2pgd2" Oct 27 08:32:06.324454 kubelet[2758]: I1027 08:32:06.323062 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a9ec93a-0de7-44da-a4da-50f958026c65-config-volume\") pod \"coredns-668d6bf9bc-2pgd2\" (UID: \"6a9ec93a-0de7-44da-a4da-50f958026c65\") " pod="kube-system/coredns-668d6bf9bc-2pgd2" Oct 27 08:32:06.369504 kubelet[2758]: E1027 08:32:06.369465 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:06.370915 containerd[1607]: time="2025-10-27T08:32:06.370346291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 27 08:32:06.449851 containerd[1607]: time="2025-10-27T08:32:06.449705546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7948764c44-vhjsc,Uid:155c9352-5d7b-4c12-ac76-f7ec57d8dc42,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:32:06.456517 containerd[1607]: time="2025-10-27T08:32:06.456471405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85866cf9fb-2xx5v,Uid:29f6d368-bb0a-4633-8401-1b96e3d04052,Namespace:calico-system,Attempt:0,}" Oct 27 08:32:06.464908 kubelet[2758]: E1027 08:32:06.464848 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:06.465662 containerd[1607]: time="2025-10-27T08:32:06.465629931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qxbwc,Uid:0ea1d490-08d4-4317-a16f-2f9d40de2cf9,Namespace:kube-system,Attempt:0,}" Oct 27 08:32:06.471158 kubelet[2758]: E1027 08:32:06.470966 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:06.475684 containerd[1607]: time="2025-10-27T08:32:06.475626511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2pgd2,Uid:6a9ec93a-0de7-44da-a4da-50f958026c65,Namespace:kube-system,Attempt:0,}" Oct 27 08:32:06.477654 containerd[1607]: time="2025-10-27T08:32:06.477623869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7948764c44-6chns,Uid:dbb3c469-8dbc-4871-a711-e7befd27ac29,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:32:06.484166 containerd[1607]: time="2025-10-27T08:32:06.484116910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pds2z,Uid:4ba4a847-9e38-4bf2-a22e-46c61e29a54b,Namespace:calico-system,Attempt:0,}" Oct 27 08:32:06.495786 containerd[1607]: time="2025-10-27T08:32:06.495712293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c8ff4b4c7-5jv54,Uid:2968bd6d-b552-4563-80fc-dc58528087c7,Namespace:calico-system,Attempt:0,}" Oct 27 08:32:06.593157 containerd[1607]: time="2025-10-27T08:32:06.593079302Z" level=error msg="Failed to destroy network for sandbox \"7236c2de5fe1bb791b7421ce46586c89f3061b527758553a88bbf24ccaf35150\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.597098 containerd[1607]: time="2025-10-27T08:32:06.596918999Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2pgd2,Uid:6a9ec93a-0de7-44da-a4da-50f958026c65,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7236c2de5fe1bb791b7421ce46586c89f3061b527758553a88bbf24ccaf35150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.625237 kubelet[2758]: E1027 08:32:06.625166 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7236c2de5fe1bb791b7421ce46586c89f3061b527758553a88bbf24ccaf35150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.625377 kubelet[2758]: E1027 08:32:06.625273 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7236c2de5fe1bb791b7421ce46586c89f3061b527758553a88bbf24ccaf35150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2pgd2" Oct 27 08:32:06.625377 kubelet[2758]: E1027 08:32:06.625300 2758 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7236c2de5fe1bb791b7421ce46586c89f3061b527758553a88bbf24ccaf35150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2pgd2" Oct 27 08:32:06.625377 kubelet[2758]: E1027 08:32:06.625342 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2pgd2_kube-system(6a9ec93a-0de7-44da-a4da-50f958026c65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2pgd2_kube-system(6a9ec93a-0de7-44da-a4da-50f958026c65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7236c2de5fe1bb791b7421ce46586c89f3061b527758553a88bbf24ccaf35150\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2pgd2" podUID="6a9ec93a-0de7-44da-a4da-50f958026c65" Oct 27 08:32:06.632647 containerd[1607]: time="2025-10-27T08:32:06.632482103Z" level=error msg="Failed to destroy network for sandbox \"90d1600580c5ebe4bd5664d61cea9d75f27b117a68f74021c92cd0ca1698d267\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.634453 containerd[1607]: time="2025-10-27T08:32:06.634425410Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7948764c44-vhjsc,Uid:155c9352-5d7b-4c12-ac76-f7ec57d8dc42,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90d1600580c5ebe4bd5664d61cea9d75f27b117a68f74021c92cd0ca1698d267\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.635043 kubelet[2758]: E1027 08:32:06.634978 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90d1600580c5ebe4bd5664d61cea9d75f27b117a68f74021c92cd0ca1698d267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.635100 kubelet[2758]: E1027 08:32:06.635058 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90d1600580c5ebe4bd5664d61cea9d75f27b117a68f74021c92cd0ca1698d267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7948764c44-vhjsc" Oct 27 08:32:06.635100 kubelet[2758]: E1027 08:32:06.635082 2758 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90d1600580c5ebe4bd5664d61cea9d75f27b117a68f74021c92cd0ca1698d267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7948764c44-vhjsc" Oct 27 08:32:06.635158 kubelet[2758]: E1027 08:32:06.635136 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7948764c44-vhjsc_calico-apiserver(155c9352-5d7b-4c12-ac76-f7ec57d8dc42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7948764c44-vhjsc_calico-apiserver(155c9352-5d7b-4c12-ac76-f7ec57d8dc42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90d1600580c5ebe4bd5664d61cea9d75f27b117a68f74021c92cd0ca1698d267\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7948764c44-vhjsc" podUID="155c9352-5d7b-4c12-ac76-f7ec57d8dc42" Oct 27 08:32:06.636734 containerd[1607]: time="2025-10-27T08:32:06.636567128Z" level=error msg="Failed to destroy network for sandbox \"8211e34b273fc8a8b5fe67f0b6272c397ad95b49e79dfa3e01c2844f87984788\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.640443 containerd[1607]: time="2025-10-27T08:32:06.640364797Z" level=error msg="Failed to destroy network for sandbox \"3efac1f6bb66a556776bc715c9908f3840e7c9a5e52bd4e93c46e9b3258dbe54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.640976 containerd[1607]: time="2025-10-27T08:32:06.640900587Z" level=error msg="Failed to destroy network for sandbox \"395bacc1e339c4db6b0c9062f823f9945ad8349f561050739459662675d25289\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Oct 27 08:32:06.641451 containerd[1607]: time="2025-10-27T08:32:06.641384571Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c8ff4b4c7-5jv54,Uid:2968bd6d-b552-4563-80fc-dc58528087c7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8211e34b273fc8a8b5fe67f0b6272c397ad95b49e79dfa3e01c2844f87984788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.641705 kubelet[2758]: E1027 08:32:06.641656 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8211e34b273fc8a8b5fe67f0b6272c397ad95b49e79dfa3e01c2844f87984788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.641796 kubelet[2758]: E1027 08:32:06.641728 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8211e34b273fc8a8b5fe67f0b6272c397ad95b49e79dfa3e01c2844f87984788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c8ff4b4c7-5jv54" Oct 27 08:32:06.641796 kubelet[2758]: E1027 08:32:06.641751 2758 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8211e34b273fc8a8b5fe67f0b6272c397ad95b49e79dfa3e01c2844f87984788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c8ff4b4c7-5jv54" Oct 27 08:32:06.641848 kubelet[2758]: E1027 08:32:06.641802 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c8ff4b4c7-5jv54_calico-system(2968bd6d-b552-4563-80fc-dc58528087c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c8ff4b4c7-5jv54_calico-system(2968bd6d-b552-4563-80fc-dc58528087c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8211e34b273fc8a8b5fe67f0b6272c397ad95b49e79dfa3e01c2844f87984788\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c8ff4b4c7-5jv54" podUID="2968bd6d-b552-4563-80fc-dc58528087c7" Oct 27 08:32:06.643282 containerd[1607]: time="2025-10-27T08:32:06.643232790Z" level=error msg="Failed to destroy network for sandbox \"966e47d479e21bcbdda33a5ccb3a50bc6ec8d82b7d73afe593280c50ce7d564e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.643657 containerd[1607]: time="2025-10-27T08:32:06.643570451Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pds2z,Uid:4ba4a847-9e38-4bf2-a22e-46c61e29a54b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"3efac1f6bb66a556776bc715c9908f3840e7c9a5e52bd4e93c46e9b3258dbe54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.643930 kubelet[2758]: E1027 08:32:06.643880 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3efac1f6bb66a556776bc715c9908f3840e7c9a5e52bd4e93c46e9b3258dbe54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.643993 kubelet[2758]: E1027 08:32:06.643956 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3efac1f6bb66a556776bc715c9908f3840e7c9a5e52bd4e93c46e9b3258dbe54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pds2z" Oct 27 08:32:06.643993 kubelet[2758]: E1027 08:32:06.643980 2758 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3efac1f6bb66a556776bc715c9908f3840e7c9a5e52bd4e93c46e9b3258dbe54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pds2z" Oct 27 08:32:06.644156 kubelet[2758]: E1027 08:32:06.644023 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-pds2z_calico-system(4ba4a847-9e38-4bf2-a22e-46c61e29a54b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-pds2z_calico-system(4ba4a847-9e38-4bf2-a22e-46c61e29a54b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3efac1f6bb66a556776bc715c9908f3840e7c9a5e52bd4e93c46e9b3258dbe54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-pds2z" podUID="4ba4a847-9e38-4bf2-a22e-46c61e29a54b" Oct 27 08:32:06.644614 containerd[1607]: time="2025-10-27T08:32:06.644564386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85866cf9fb-2xx5v,Uid:29f6d368-bb0a-4633-8401-1b96e3d04052,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"966e47d479e21bcbdda33a5ccb3a50bc6ec8d82b7d73afe593280c50ce7d564e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.644791 kubelet[2758]: E1027 08:32:06.644758 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"966e47d479e21bcbdda33a5ccb3a50bc6ec8d82b7d73afe593280c50ce7d564e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 
08:32:06.644832 kubelet[2758]: E1027 08:32:06.644794 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"966e47d479e21bcbdda33a5ccb3a50bc6ec8d82b7d73afe593280c50ce7d564e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85866cf9fb-2xx5v" Oct 27 08:32:06.644832 kubelet[2758]: E1027 08:32:06.644809 2758 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"966e47d479e21bcbdda33a5ccb3a50bc6ec8d82b7d73afe593280c50ce7d564e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85866cf9fb-2xx5v" Oct 27 08:32:06.644983 kubelet[2758]: E1027 08:32:06.644839 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85866cf9fb-2xx5v_calico-system(29f6d368-bb0a-4633-8401-1b96e3d04052)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85866cf9fb-2xx5v_calico-system(29f6d368-bb0a-4633-8401-1b96e3d04052)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"966e47d479e21bcbdda33a5ccb3a50bc6ec8d82b7d73afe593280c50ce7d564e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85866cf9fb-2xx5v" podUID="29f6d368-bb0a-4633-8401-1b96e3d04052" Oct 27 08:32:06.645739 containerd[1607]: time="2025-10-27T08:32:06.645631658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qxbwc,Uid:0ea1d490-08d4-4317-a16f-2f9d40de2cf9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"395bacc1e339c4db6b0c9062f823f9945ad8349f561050739459662675d25289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.645977 kubelet[2758]: E1027 08:32:06.645834 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"395bacc1e339c4db6b0c9062f823f9945ad8349f561050739459662675d25289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.645977 kubelet[2758]: E1027 08:32:06.645899 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"395bacc1e339c4db6b0c9062f823f9945ad8349f561050739459662675d25289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qxbwc" Oct 27 08:32:06.645977 kubelet[2758]: E1027 08:32:06.645925 2758 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"395bacc1e339c4db6b0c9062f823f9945ad8349f561050739459662675d25289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qxbwc" Oct 27 08:32:06.646067 kubelet[2758]: E1027 08:32:06.645988 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qxbwc_kube-system(0ea1d490-08d4-4317-a16f-2f9d40de2cf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qxbwc_kube-system(0ea1d490-08d4-4317-a16f-2f9d40de2cf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"395bacc1e339c4db6b0c9062f823f9945ad8349f561050739459662675d25289\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qxbwc" podUID="0ea1d490-08d4-4317-a16f-2f9d40de2cf9" Oct 27 08:32:06.655199 containerd[1607]: time="2025-10-27T08:32:06.654531411Z" level=error msg="Failed to destroy network for sandbox \"f3a2523d244fa3e61987b8fd1fec8a1c6c52c2cbb50b857a4142a97e69786cd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.661637 containerd[1607]: time="2025-10-27T08:32:06.661566213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7948764c44-6chns,Uid:dbb3c469-8dbc-4871-a711-e7befd27ac29,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a2523d244fa3e61987b8fd1fec8a1c6c52c2cbb50b857a4142a97e69786cd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.661882 kubelet[2758]: E1027 08:32:06.661828 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a2523d244fa3e61987b8fd1fec8a1c6c52c2cbb50b857a4142a97e69786cd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:06.661936 kubelet[2758]: E1027 08:32:06.661894 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a2523d244fa3e61987b8fd1fec8a1c6c52c2cbb50b857a4142a97e69786cd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7948764c44-6chns" Oct 27 08:32:06.661936 kubelet[2758]: E1027 08:32:06.661915 2758 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a2523d244fa3e61987b8fd1fec8a1c6c52c2cbb50b857a4142a97e69786cd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7948764c44-6chns" Oct 27 
08:32:06.662008 kubelet[2758]: E1027 08:32:06.661969 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7948764c44-6chns_calico-apiserver(dbb3c469-8dbc-4871-a711-e7befd27ac29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7948764c44-6chns_calico-apiserver(dbb3c469-8dbc-4871-a711-e7befd27ac29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3a2523d244fa3e61987b8fd1fec8a1c6c52c2cbb50b857a4142a97e69786cd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7948764c44-6chns" podUID="dbb3c469-8dbc-4871-a711-e7befd27ac29" Oct 27 08:32:07.261201 systemd[1]: Created slice kubepods-besteffort-pod7910370a_d2b9_4ee0_8c0a_b904aff5f65a.slice - libcontainer container kubepods-besteffort-pod7910370a_d2b9_4ee0_8c0a_b904aff5f65a.slice. Oct 27 08:32:07.263417 containerd[1607]: time="2025-10-27T08:32:07.263365569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxbd2,Uid:7910370a-d2b9-4ee0-8c0a-b904aff5f65a,Namespace:calico-system,Attempt:0,}" Oct 27 08:32:07.317128 containerd[1607]: time="2025-10-27T08:32:07.317056758Z" level=error msg="Failed to destroy network for sandbox \"114c0b3fe5faeeeabfd108e48a382e2c69a83aeff23b1ff1b69c241fdd67e8d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:07.318471 containerd[1607]: time="2025-10-27T08:32:07.318428169Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxbd2,Uid:7910370a-d2b9-4ee0-8c0a-b904aff5f65a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"114c0b3fe5faeeeabfd108e48a382e2c69a83aeff23b1ff1b69c241fdd67e8d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:07.318782 kubelet[2758]: E1027 08:32:07.318719 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"114c0b3fe5faeeeabfd108e48a382e2c69a83aeff23b1ff1b69c241fdd67e8d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:32:07.318834 kubelet[2758]: E1027 08:32:07.318799 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"114c0b3fe5faeeeabfd108e48a382e2c69a83aeff23b1ff1b69c241fdd67e8d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wxbd2" Oct 27 08:32:07.318859 kubelet[2758]: E1027 08:32:07.318832 2758 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"114c0b3fe5faeeeabfd108e48a382e2c69a83aeff23b1ff1b69c241fdd67e8d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wxbd2" Oct 27 08:32:07.318922 kubelet[2758]: E1027 08:32:07.318896 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wxbd2_calico-system(7910370a-d2b9-4ee0-8c0a-b904aff5f65a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wxbd2_calico-system(7910370a-d2b9-4ee0-8c0a-b904aff5f65a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"114c0b3fe5faeeeabfd108e48a382e2c69a83aeff23b1ff1b69c241fdd67e8d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wxbd2" podUID="7910370a-d2b9-4ee0-8c0a-b904aff5f65a" Oct 27 08:32:07.319880 systemd[1]: run-netns-cni\x2d05902b5c\x2d1a19\x2da57c\x2de1e2\x2d7d8aa4e855f7.mount: Deactivated successfully. Oct 27 08:32:12.338574 systemd[1]: Started sshd@7-10.0.0.134:22-10.0.0.1:53482.service - OpenSSH per-connection server daemon (10.0.0.1:53482). Oct 27 08:32:12.425912 sshd[3869]: Accepted publickey for core from 10.0.0.1 port 53482 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:32:12.428368 sshd-session[3869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:12.435647 systemd-logind[1577]: New session 8 of user core. Oct 27 08:32:12.446760 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 27 08:32:12.675586 sshd[3873]: Connection closed by 10.0.0.1 port 53482 Oct 27 08:32:12.675864 sshd-session[3869]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:12.682395 systemd[1]: sshd@7-10.0.0.134:22-10.0.0.1:53482.service: Deactivated successfully. Oct 27 08:32:12.684501 systemd[1]: session-8.scope: Deactivated successfully. Oct 27 08:32:12.686331 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit. Oct 27 08:32:12.688251 systemd-logind[1577]: Removed session 8. Oct 27 08:32:12.874437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2896147602.mount: Deactivated successfully. 
Oct 27 08:32:14.542764 containerd[1607]: time="2025-10-27T08:32:14.542682870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:32:14.543814 containerd[1607]: time="2025-10-27T08:32:14.543782475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 27 08:32:14.545961 containerd[1607]: time="2025-10-27T08:32:14.545927467Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:32:14.549357 containerd[1607]: time="2025-10-27T08:32:14.549143881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:32:14.549805 containerd[1607]: time="2025-10-27T08:32:14.549760925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.179372645s" Oct 27 08:32:14.549805 containerd[1607]: time="2025-10-27T08:32:14.549798936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 27 08:32:14.579403 containerd[1607]: time="2025-10-27T08:32:14.579347382Z" level=info msg="CreateContainer within sandbox \"60236c25ccfe0dc482eda4ce080e6a66618b49dd6e28fc0a93706885b7dd12a4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 27 08:32:15.033105 containerd[1607]: time="2025-10-27T08:32:15.032884079Z" level=info msg="Container 386840f37713bcbc7dffc61f8593ec36ea759e2185bbca2faf04e5febe4d307d: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:32:15.192200 containerd[1607]: time="2025-10-27T08:32:15.192129108Z" level=info msg="CreateContainer within sandbox \"60236c25ccfe0dc482eda4ce080e6a66618b49dd6e28fc0a93706885b7dd12a4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"386840f37713bcbc7dffc61f8593ec36ea759e2185bbca2faf04e5febe4d307d\"" Oct 27 08:32:15.196592 containerd[1607]: time="2025-10-27T08:32:15.196523247Z" level=info msg="StartContainer for \"386840f37713bcbc7dffc61f8593ec36ea759e2185bbca2faf04e5febe4d307d\"" Oct 27 08:32:15.198935 containerd[1607]: time="2025-10-27T08:32:15.198888792Z" level=info msg="connecting to shim 386840f37713bcbc7dffc61f8593ec36ea759e2185bbca2faf04e5febe4d307d" address="unix:///run/containerd/s/c14d9f2ccbfeae64f8bf4e16564fc5e7af9f8f08c2ba0f8e4035fb1a5715a2a8" protocol=ttrpc version=3 Oct 27 08:32:15.264585 systemd[1]: Started cri-containerd-386840f37713bcbc7dffc61f8593ec36ea759e2185bbca2faf04e5febe4d307d.scope - libcontainer container 386840f37713bcbc7dffc61f8593ec36ea759e2185bbca2faf04e5febe4d307d. Oct 27 08:32:15.713658 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 27 08:32:15.714015 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Oct 27 08:32:15.735375 containerd[1607]: time="2025-10-27T08:32:15.735337112Z" level=info msg="StartContainer for \"386840f37713bcbc7dffc61f8593ec36ea759e2185bbca2faf04e5febe4d307d\" returns successfully" Oct 27 08:32:15.775908 kubelet[2758]: I1027 08:32:15.775755 2758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 08:32:15.780549 kubelet[2758]: E1027 08:32:15.780521 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:15.892303 kubelet[2758]: I1027 08:32:15.892249 2758 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2968bd6d-b552-4563-80fc-dc58528087c7-whisker-ca-bundle\") pod \"2968bd6d-b552-4563-80fc-dc58528087c7\" (UID: \"2968bd6d-b552-4563-80fc-dc58528087c7\") " Oct 27 08:32:15.892828 kubelet[2758]: I1027 08:32:15.892797 2758 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2968bd6d-b552-4563-80fc-dc58528087c7-whisker-backend-key-pair\") pod \"2968bd6d-b552-4563-80fc-dc58528087c7\" (UID: \"2968bd6d-b552-4563-80fc-dc58528087c7\") " Oct 27 08:32:15.892828 kubelet[2758]: I1027 08:32:15.892828 2758 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mgd8\" (UniqueName: \"kubernetes.io/projected/2968bd6d-b552-4563-80fc-dc58528087c7-kube-api-access-5mgd8\") pod \"2968bd6d-b552-4563-80fc-dc58528087c7\" (UID: \"2968bd6d-b552-4563-80fc-dc58528087c7\") " Oct 27 08:32:15.893209 kubelet[2758]: I1027 08:32:15.892745 2758 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2968bd6d-b552-4563-80fc-dc58528087c7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2968bd6d-b552-4563-80fc-dc58528087c7" (UID: "2968bd6d-b552-4563-80fc-dc58528087c7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 27 08:32:15.899048 systemd[1]: var-lib-kubelet-pods-2968bd6d\x2db552\x2d4563\x2d80fc\x2ddc58528087c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5mgd8.mount: Deactivated successfully. Oct 27 08:32:15.899198 systemd[1]: var-lib-kubelet-pods-2968bd6d\x2db552\x2d4563\x2d80fc\x2ddc58528087c7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 27 08:32:15.900846 kubelet[2758]: I1027 08:32:15.900803 2758 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2968bd6d-b552-4563-80fc-dc58528087c7-kube-api-access-5mgd8" (OuterVolumeSpecName: "kube-api-access-5mgd8") pod "2968bd6d-b552-4563-80fc-dc58528087c7" (UID: "2968bd6d-b552-4563-80fc-dc58528087c7"). InnerVolumeSpecName "kube-api-access-5mgd8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 27 08:32:15.900973 kubelet[2758]: I1027 08:32:15.900915 2758 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2968bd6d-b552-4563-80fc-dc58528087c7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2968bd6d-b552-4563-80fc-dc58528087c7" (UID: "2968bd6d-b552-4563-80fc-dc58528087c7"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 27 08:32:15.993487 kubelet[2758]: I1027 08:32:15.993286 2758 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2968bd6d-b552-4563-80fc-dc58528087c7-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 27 08:32:15.993487 kubelet[2758]: I1027 08:32:15.993318 2758 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2968bd6d-b552-4563-80fc-dc58528087c7-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 27 08:32:15.993487 kubelet[2758]: I1027 08:32:15.993326 2758 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5mgd8\" (UniqueName: \"kubernetes.io/projected/2968bd6d-b552-4563-80fc-dc58528087c7-kube-api-access-5mgd8\") on node \"localhost\" DevicePath \"\"" Oct 27 08:32:16.743364 kubelet[2758]: E1027 08:32:16.743303 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:16.743364 kubelet[2758]: E1027 08:32:16.743341 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:16.751025 systemd[1]: Removed slice kubepods-besteffort-pod2968bd6d_b552_4563_80fc_dc58528087c7.slice - libcontainer container kubepods-besteffort-pod2968bd6d_b552_4563_80fc_dc58528087c7.slice. Oct 27 08:32:16.777767 kubelet[2758]: I1027 08:32:16.775635 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-76ckp" podStartSLOduration=3.280044785 podStartE2EDuration="24.775611815s" podCreationTimestamp="2025-10-27 08:31:52 +0000 UTC" firstStartedPulling="2025-10-27 08:31:53.061344126 +0000 UTC m=+21.925057333" lastFinishedPulling="2025-10-27 08:32:14.556911167 +0000 UTC m=+43.420624363" observedRunningTime="2025-10-27 08:32:16.774586908 +0000 UTC m=+45.638300124" watchObservedRunningTime="2025-10-27 08:32:16.775611815 +0000 UTC m=+45.639325021" Oct 27 08:32:16.842058 systemd[1]: Created slice kubepods-besteffort-podf2ff340b_16ee_4ed6_afb9_848c0501ad98.slice - libcontainer container kubepods-besteffort-podf2ff340b_16ee_4ed6_afb9_848c0501ad98.slice. 
Oct 27 08:32:16.899973 kubelet[2758]: I1027 08:32:16.899919 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg9l5\" (UniqueName: \"kubernetes.io/projected/f2ff340b-16ee-4ed6-afb9-848c0501ad98-kube-api-access-xg9l5\") pod \"whisker-849fdbdcd5-csmld\" (UID: \"f2ff340b-16ee-4ed6-afb9-848c0501ad98\") " pod="calico-system/whisker-849fdbdcd5-csmld" Oct 27 08:32:16.900255 kubelet[2758]: I1027 08:32:16.900223 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2ff340b-16ee-4ed6-afb9-848c0501ad98-whisker-backend-key-pair\") pod \"whisker-849fdbdcd5-csmld\" (UID: \"f2ff340b-16ee-4ed6-afb9-848c0501ad98\") " pod="calico-system/whisker-849fdbdcd5-csmld" Oct 27 08:32:16.900354 kubelet[2758]: I1027 08:32:16.900338 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2ff340b-16ee-4ed6-afb9-848c0501ad98-whisker-ca-bundle\") pod \"whisker-849fdbdcd5-csmld\" (UID: \"f2ff340b-16ee-4ed6-afb9-848c0501ad98\") " pod="calico-system/whisker-849fdbdcd5-csmld" Oct 27 08:32:17.002448 containerd[1607]: time="2025-10-27T08:32:17.000796037Z" level=info msg="TaskExit event in podsandbox handler container_id:\"386840f37713bcbc7dffc61f8593ec36ea759e2185bbca2faf04e5febe4d307d\" id:\"dc0ff5610b996ca329285b7bc0e0ab0db320a66cf64d23f9ced9c48a56bc794f\" pid:3968 exit_status:1 exited_at:{seconds:1761553936 nanos:999675051}" Oct 27 08:32:17.149260 containerd[1607]: time="2025-10-27T08:32:17.149207428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-849fdbdcd5-csmld,Uid:f2ff340b-16ee-4ed6-afb9-848c0501ad98,Namespace:calico-system,Attempt:0,}" Oct 27 08:32:17.259822 kubelet[2758]: I1027 08:32:17.259780 2758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2968bd6d-b552-4563-80fc-dc58528087c7" path="/var/lib/kubelet/pods/2968bd6d-b552-4563-80fc-dc58528087c7/volumes" Oct 27 08:32:17.260318 kubelet[2758]: E1027 08:32:17.260294 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:17.260801 containerd[1607]: time="2025-10-27T08:32:17.260706741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2pgd2,Uid:6a9ec93a-0de7-44da-a4da-50f958026c65,Namespace:kube-system,Attempt:0,}" Oct 27 08:32:17.460511 systemd-networkd[1504]: cali8f8a4e5992d: Link UP Oct 27 08:32:17.460740 systemd-networkd[1504]: cali8f8a4e5992d: Gained carrier Oct 27 08:32:17.473201 containerd[1607]: 2025-10-27 08:32:17.313 [INFO][4103] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 08:32:17.473201 containerd[1607]: 2025-10-27 08:32:17.327 [INFO][4103] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--849fdbdcd5--csmld-eth0 whisker-849fdbdcd5- calico-system f2ff340b-16ee-4ed6-afb9-848c0501ad98 956 0 2025-10-27 08:32:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:849fdbdcd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-849fdbdcd5-csmld eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8f8a4e5992d [] [] }} 
ContainerID="b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" Namespace="calico-system" Pod="whisker-849fdbdcd5-csmld" WorkloadEndpoint="localhost-k8s-whisker--849fdbdcd5--csmld-" Oct 27 08:32:17.473201 containerd[1607]: 2025-10-27 08:32:17.327 [INFO][4103] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" Namespace="calico-system" Pod="whisker-849fdbdcd5-csmld" WorkloadEndpoint="localhost-k8s-whisker--849fdbdcd5--csmld-eth0" Oct 27 08:32:17.473201 containerd[1607]: 2025-10-27 08:32:17.414 [INFO][4141] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" HandleID="k8s-pod-network.b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" Workload="localhost-k8s-whisker--849fdbdcd5--csmld-eth0" Oct 27 08:32:17.473642 containerd[1607]: 2025-10-27 08:32:17.414 [INFO][4141] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" HandleID="k8s-pod-network.b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" Workload="localhost-k8s-whisker--849fdbdcd5--csmld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035efb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-849fdbdcd5-csmld", "timestamp":"2025-10-27 08:32:17.414019381 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:32:17.473642 containerd[1607]: 2025-10-27 08:32:17.414 [INFO][4141] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:32:17.473642 containerd[1607]: 2025-10-27 08:32:17.415 [INFO][4141] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:32:17.473642 containerd[1607]: 2025-10-27 08:32:17.415 [INFO][4141] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:32:17.473642 containerd[1607]: 2025-10-27 08:32:17.423 [INFO][4141] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" host="localhost" Oct 27 08:32:17.473642 containerd[1607]: 2025-10-27 08:32:17.430 [INFO][4141] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:32:17.473642 containerd[1607]: 2025-10-27 08:32:17.434 [INFO][4141] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:32:17.473642 containerd[1607]: 2025-10-27 08:32:17.436 [INFO][4141] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:32:17.473642 containerd[1607]: 2025-10-27 08:32:17.437 [INFO][4141] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:32:17.473642 containerd[1607]: 2025-10-27 08:32:17.437 [INFO][4141] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" host="localhost" Oct 27 08:32:17.473867 containerd[1607]: 2025-10-27 08:32:17.439 [INFO][4141] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861 Oct 27 08:32:17.473867 containerd[1607]: 2025-10-27 08:32:17.442 [INFO][4141] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" host="localhost" Oct 27 08:32:17.473867 containerd[1607]: 2025-10-27 08:32:17.446 [INFO][4141] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" host="localhost" Oct 27 08:32:17.473867 containerd[1607]: 2025-10-27 08:32:17.446 [INFO][4141] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" host="localhost" Oct 27 08:32:17.473867 containerd[1607]: 2025-10-27 08:32:17.447 [INFO][4141] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:32:17.473867 containerd[1607]: 2025-10-27 08:32:17.447 [INFO][4141] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" HandleID="k8s-pod-network.b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" Workload="localhost-k8s-whisker--849fdbdcd5--csmld-eth0" Oct 27 08:32:17.473986 containerd[1607]: 2025-10-27 08:32:17.452 [INFO][4103] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" Namespace="calico-system" Pod="whisker-849fdbdcd5-csmld" WorkloadEndpoint="localhost-k8s-whisker--849fdbdcd5--csmld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--849fdbdcd5--csmld-eth0", GenerateName:"whisker-849fdbdcd5-", Namespace:"calico-system", SelfLink:"", UID:"f2ff340b-16ee-4ed6-afb9-848c0501ad98", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"849fdbdcd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-849fdbdcd5-csmld", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8f8a4e5992d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:32:17.473986 containerd[1607]: 2025-10-27 08:32:17.452 [INFO][4103] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" Namespace="calico-system" Pod="whisker-849fdbdcd5-csmld" WorkloadEndpoint="localhost-k8s-whisker--849fdbdcd5--csmld-eth0" Oct 27 08:32:17.474056 containerd[1607]: 2025-10-27 08:32:17.452 [INFO][4103] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f8a4e5992d ContainerID="b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" Namespace="calico-system" Pod="whisker-849fdbdcd5-csmld" WorkloadEndpoint="localhost-k8s-whisker--849fdbdcd5--csmld-eth0" Oct 27 08:32:17.474056 containerd[1607]: 2025-10-27 08:32:17.460 [INFO][4103] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" Namespace="calico-system" Pod="whisker-849fdbdcd5-csmld" WorkloadEndpoint="localhost-k8s-whisker--849fdbdcd5--csmld-eth0" Oct 27 08:32:17.474105 containerd[1607]: 2025-10-27 08:32:17.460 [INFO][4103] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" Namespace="calico-system" Pod="whisker-849fdbdcd5-csmld" WorkloadEndpoint="localhost-k8s-whisker--849fdbdcd5--csmld-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--849fdbdcd5--csmld-eth0", GenerateName:"whisker-849fdbdcd5-", Namespace:"calico-system", SelfLink:"", UID:"f2ff340b-16ee-4ed6-afb9-848c0501ad98", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"849fdbdcd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861", Pod:"whisker-849fdbdcd5-csmld", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8f8a4e5992d", MAC:"5a:3a:db:db:33:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:32:17.474154 containerd[1607]: 2025-10-27 08:32:17.469 [INFO][4103] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" Namespace="calico-system" Pod="whisker-849fdbdcd5-csmld" WorkloadEndpoint="localhost-k8s-whisker--849fdbdcd5--csmld-eth0" Oct 27 08:32:17.540430 containerd[1607]: time="2025-10-27T08:32:17.539529497Z" level=info msg="connecting to shim b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861" address="unix:///run/containerd/s/53b496f8bf6a52bdcb45eeab21c6ecd7bee53d677d4b6c4982b1090ff8c15ea8" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:32:17.562230 systemd-networkd[1504]: cali413ef38c447: Link UP Oct 27 08:32:17.562762 systemd-networkd[1504]: cali413ef38c447: Gained carrier Oct 27 08:32:17.579554 systemd[1]: Started cri-containerd-b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861.scope - libcontainer container b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861. 
Oct 27 08:32:17.583597 containerd[1607]: 2025-10-27 08:32:17.303 [INFO][4110] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 08:32:17.583597 containerd[1607]: 2025-10-27 08:32:17.327 [INFO][4110] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--2pgd2-eth0 coredns-668d6bf9bc- kube-system 6a9ec93a-0de7-44da-a4da-50f958026c65 829 0 2025-10-27 08:31:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-2pgd2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali413ef38c447 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2pgd2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2pgd2-" Oct 27 08:32:17.583597 containerd[1607]: 2025-10-27 08:32:17.327 [INFO][4110] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2pgd2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2pgd2-eth0" Oct 27 08:32:17.583597 containerd[1607]: 2025-10-27 08:32:17.415 [INFO][4139] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" HandleID="k8s-pod-network.6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" Workload="localhost-k8s-coredns--668d6bf9bc--2pgd2-eth0" Oct 27 08:32:17.583830 containerd[1607]: 2025-10-27 08:32:17.415 [INFO][4139] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" HandleID="k8s-pod-network.6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" Workload="localhost-k8s-coredns--668d6bf9bc--2pgd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039c320), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-2pgd2", "timestamp":"2025-10-27 08:32:17.415811855 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:32:17.583830 containerd[1607]: 2025-10-27 08:32:17.416 [INFO][4139] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:32:17.583830 containerd[1607]: 2025-10-27 08:32:17.446 [INFO][4139] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:32:17.583830 containerd[1607]: 2025-10-27 08:32:17.446 [INFO][4139] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:32:17.583830 containerd[1607]: 2025-10-27 08:32:17.524 [INFO][4139] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" host="localhost" Oct 27 08:32:17.583830 containerd[1607]: 2025-10-27 08:32:17.530 [INFO][4139] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:32:17.583830 containerd[1607]: 2025-10-27 08:32:17.537 [INFO][4139] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:32:17.583830 containerd[1607]: 2025-10-27 08:32:17.539 [INFO][4139] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:32:17.583830 containerd[1607]: 2025-10-27 08:32:17.543 [INFO][4139] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:32:17.583830 containerd[1607]: 2025-10-27 08:32:17.543 [INFO][4139] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" host="localhost" Oct 27 08:32:17.584130 containerd[1607]: 2025-10-27 08:32:17.544 [INFO][4139] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b Oct 27 08:32:17.584130 containerd[1607]: 2025-10-27 08:32:17.548 [INFO][4139] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" host="localhost" Oct 27 08:32:17.584130 containerd[1607]: 2025-10-27 08:32:17.556 [INFO][4139] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" host="localhost" Oct 27 08:32:17.584130 containerd[1607]: 2025-10-27 08:32:17.557 [INFO][4139] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" host="localhost" Oct 27 08:32:17.584130 containerd[1607]: 2025-10-27 08:32:17.557 [INFO][4139] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:32:17.584130 containerd[1607]: 2025-10-27 08:32:17.557 [INFO][4139] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" HandleID="k8s-pod-network.6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" Workload="localhost-k8s-coredns--668d6bf9bc--2pgd2-eth0" Oct 27 08:32:17.584351 containerd[1607]: 2025-10-27 08:32:17.560 [INFO][4110] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2pgd2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2pgd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2pgd2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a9ec93a-0de7-44da-a4da-50f958026c65", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-2pgd2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali413ef38c447", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:32:17.584433 containerd[1607]: 2025-10-27 08:32:17.560 [INFO][4110] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2pgd2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2pgd2-eth0" Oct 27 08:32:17.584433 containerd[1607]: 2025-10-27 08:32:17.560 [INFO][4110] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali413ef38c447 ContainerID="6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2pgd2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2pgd2-eth0" Oct 27 08:32:17.584433 containerd[1607]: 2025-10-27 08:32:17.562 [INFO][4110] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2pgd2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2pgd2-eth0" Oct 27 08:32:17.584509 
containerd[1607]: 2025-10-27 08:32:17.563 [INFO][4110] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2pgd2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2pgd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2pgd2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a9ec93a-0de7-44da-a4da-50f958026c65", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b", Pod:"coredns-668d6bf9bc-2pgd2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali413ef38c447", MAC:"96:fc:25:71:04:b9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:32:17.584509 containerd[1607]: 2025-10-27 08:32:17.576 [INFO][4110] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2pgd2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2pgd2-eth0" Oct 27 08:32:17.597023 systemd-resolved[1445]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:32:17.609675 containerd[1607]: time="2025-10-27T08:32:17.609489144Z" level=info msg="connecting to shim 6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b" address="unix:///run/containerd/s/2cf5e993490a1ca9837bf4029cc0dabb86b62974e52e19165a353e22d8ed9fb1" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:32:17.637757 systemd[1]: Started cri-containerd-6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b.scope - libcontainer container 6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b. 
Oct 27 08:32:17.641095 containerd[1607]: time="2025-10-27T08:32:17.641051233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-849fdbdcd5-csmld,Uid:f2ff340b-16ee-4ed6-afb9-848c0501ad98,Namespace:calico-system,Attempt:0,} returns sandbox id \"b6abe2cbc6405133b084a28c3beda539cedd3c0b8a5658d9e34af4d7c0281861\"" Oct 27 08:32:17.643647 containerd[1607]: time="2025-10-27T08:32:17.643612434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:32:17.657236 systemd-resolved[1445]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:32:17.689035 systemd-networkd[1504]: vxlan.calico: Link UP Oct 27 08:32:17.689050 systemd-networkd[1504]: vxlan.calico: Gained carrier Oct 27 08:32:17.692479 systemd[1]: Started sshd@8-10.0.0.134:22-10.0.0.1:60976.service - OpenSSH per-connection server daemon (10.0.0.1:60976). Oct 27 08:32:17.719047 containerd[1607]: time="2025-10-27T08:32:17.718856387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2pgd2,Uid:6a9ec93a-0de7-44da-a4da-50f958026c65,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b\"" Oct 27 08:32:17.721025 kubelet[2758]: E1027 08:32:17.720979 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:17.734455 containerd[1607]: time="2025-10-27T08:32:17.734380601Z" level=info msg="CreateContainer within sandbox \"6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 08:32:17.752936 kubelet[2758]: E1027 08:32:17.752214 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:17.793518 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 60976 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:32:17.794890 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:17.802035 systemd-logind[1577]: New session 9 of user core. Oct 27 08:32:17.810584 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 27 08:32:17.846338 containerd[1607]: time="2025-10-27T08:32:17.846284370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"386840f37713bcbc7dffc61f8593ec36ea759e2185bbca2faf04e5febe4d307d\" id:\"a410863745786ce1c78413e0efd2284064e4997f8a32e4ef8f7385c37c09850b\" pid:4315 exit_status:1 exited_at:{seconds:1761553937 nanos:845923725}" Oct 27 08:32:17.975682 containerd[1607]: time="2025-10-27T08:32:17.975597500Z" level=info msg="Container 5f50f4418da037d2ea87142b466ea873ec356743da634b9b948fe7a64790249a: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:32:17.983617 sshd[4326]: Connection closed by 10.0.0.1 port 60976 Oct 27 08:32:17.984625 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:17.986095 containerd[1607]: time="2025-10-27T08:32:17.985956130Z" level=info msg="CreateContainer within sandbox \"6e75fd866179dec83c14c36e0800850103dfb5f262806813e4318e023792e96b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f50f4418da037d2ea87142b466ea873ec356743da634b9b948fe7a64790249a\"" Oct 27 08:32:17.986775 containerd[1607]: time="2025-10-27T08:32:17.986753171Z" level=info msg="StartContainer for \"5f50f4418da037d2ea87142b466ea873ec356743da634b9b948fe7a64790249a\"" Oct 27 08:32:17.988472 containerd[1607]: time="2025-10-27T08:32:17.988180392Z" level=info msg="connecting to shim 5f50f4418da037d2ea87142b466ea873ec356743da634b9b948fe7a64790249a" address="unix:///run/containerd/s/2cf5e993490a1ca9837bf4029cc0dabb86b62974e52e19165a353e22d8ed9fb1" protocol=ttrpc version=3 Oct 27 08:32:17.990790 systemd[1]: sshd@8-10.0.0.134:22-10.0.0.1:60976.service: Deactivated successfully. Oct 27 08:32:17.993207 systemd[1]: session-9.scope: Deactivated successfully. Oct 27 08:32:17.995788 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. Oct 27 08:32:18.000700 systemd-logind[1577]: Removed session 9. Oct 27 08:32:18.013945 systemd[1]: Started cri-containerd-5f50f4418da037d2ea87142b466ea873ec356743da634b9b948fe7a64790249a.scope - libcontainer container 5f50f4418da037d2ea87142b466ea873ec356743da634b9b948fe7a64790249a. 
Oct 27 08:32:18.025519 containerd[1607]: time="2025-10-27T08:32:18.025467268Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:18.026653 containerd[1607]: time="2025-10-27T08:32:18.026601289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:32:18.031268 containerd[1607]: time="2025-10-27T08:32:18.031216685Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:32:18.031648 kubelet[2758]: E1027 08:32:18.031606 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:32:18.031937 kubelet[2758]: E1027 08:32:18.031665 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:32:18.034675 kubelet[2758]: E1027 08:32:18.034619 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:80a7b8633959402db64dbfcc329ddd47,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xg9l5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-849fdbdcd5-csmld_calico-system(f2ff340b-16ee-4ed6-afb9-848c0501ad98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:18.037015 containerd[1607]: 
time="2025-10-27T08:32:18.036982023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 08:32:18.054511 containerd[1607]: time="2025-10-27T08:32:18.054350031Z" level=info msg="StartContainer for \"5f50f4418da037d2ea87142b466ea873ec356743da634b9b948fe7a64790249a\" returns successfully" Oct 27 08:32:18.254873 containerd[1607]: time="2025-10-27T08:32:18.254824267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pds2z,Uid:4ba4a847-9e38-4bf2-a22e-46c61e29a54b,Namespace:calico-system,Attempt:0,}" Oct 27 08:32:18.342940 systemd-networkd[1504]: cali02a5fd344d9: Link UP Oct 27 08:32:18.344487 systemd-networkd[1504]: cali02a5fd344d9: Gained carrier Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.288 [INFO][4411] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--pds2z-eth0 goldmane-666569f655- calico-system 4ba4a847-9e38-4bf2-a22e-46c61e29a54b 831 0 2025-10-27 08:31:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-pds2z eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali02a5fd344d9 [] [] }} ContainerID="a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" Namespace="calico-system" Pod="goldmane-666569f655-pds2z" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pds2z-" Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.288 [INFO][4411] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" Namespace="calico-system" Pod="goldmane-666569f655-pds2z" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pds2z-eth0" Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.311 [INFO][4425] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" HandleID="k8s-pod-network.a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" Workload="localhost-k8s-goldmane--666569f655--pds2z-eth0" Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.311 [INFO][4425] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" HandleID="k8s-pod-network.a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" Workload="localhost-k8s-goldmane--666569f655--pds2z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-pds2z", "timestamp":"2025-10-27 08:32:18.311825945 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.312 [INFO][4425] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.312 [INFO][4425] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.312 [INFO][4425] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.318 [INFO][4425] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" host="localhost" Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.321 [INFO][4425] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.324 [INFO][4425] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.326 [INFO][4425] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.327 [INFO][4425] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.327 [INFO][4425] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" host="localhost" Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.329 [INFO][4425] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50 Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.332 [INFO][4425] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" host="localhost" Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.337 [INFO][4425] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" host="localhost" Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.337 [INFO][4425] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" host="localhost" Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.337 [INFO][4425] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:32:18.357567 containerd[1607]: 2025-10-27 08:32:18.337 [INFO][4425] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" HandleID="k8s-pod-network.a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" Workload="localhost-k8s-goldmane--666569f655--pds2z-eth0" Oct 27 08:32:18.358339 containerd[1607]: 2025-10-27 08:32:18.340 [INFO][4411] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" Namespace="calico-system" Pod="goldmane-666569f655-pds2z" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pds2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--pds2z-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4ba4a847-9e38-4bf2-a22e-46c61e29a54b", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-pds2z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali02a5fd344d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:32:18.358339 containerd[1607]: 2025-10-27 08:32:18.341 [INFO][4411] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" Namespace="calico-system" Pod="goldmane-666569f655-pds2z" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pds2z-eth0" Oct 27 08:32:18.358339 containerd[1607]: 2025-10-27 08:32:18.341 [INFO][4411] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02a5fd344d9 ContainerID="a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" Namespace="calico-system" Pod="goldmane-666569f655-pds2z" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pds2z-eth0" Oct 27 08:32:18.358339 containerd[1607]: 2025-10-27 08:32:18.343 [INFO][4411] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" Namespace="calico-system" Pod="goldmane-666569f655-pds2z" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pds2z-eth0" Oct 27 08:32:18.358339 containerd[1607]: 2025-10-27 08:32:18.343 [INFO][4411] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" Namespace="calico-system" Pod="goldmane-666569f655-pds2z" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pds2z-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--pds2z-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4ba4a847-9e38-4bf2-a22e-46c61e29a54b", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50", Pod:"goldmane-666569f655-pds2z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali02a5fd344d9", MAC:"16:28:3d:41:86:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:32:18.358339 containerd[1607]: 2025-10-27 08:32:18.353 [INFO][4411] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" Namespace="calico-system" Pod="goldmane-666569f655-pds2z" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pds2z-eth0" Oct 27 08:32:18.388569 containerd[1607]: time="2025-10-27T08:32:18.388471410Z" level=info msg="connecting to shim a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50" address="unix:///run/containerd/s/86307165f120fef31d000ab4084e7363192e1d8756eaa909807092a56b11a963" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:32:18.393427 containerd[1607]: time="2025-10-27T08:32:18.393370016Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:18.396481 containerd[1607]: time="2025-10-27T08:32:18.395113628Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:32:18.396481 containerd[1607]: time="2025-10-27T08:32:18.395166608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:32:18.396599 kubelet[2758]: E1027 08:32:18.395343 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:32:18.396599 kubelet[2758]: E1027 08:32:18.395399 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:32:18.396669 kubelet[2758]: E1027 08:32:18.395570 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xg9l5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-849fdbdcd5-csmld_calico-system(f2ff340b-16ee-4ed6-afb9-848c0501ad98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:18.397611 kubelet[2758]: E1027 08:32:18.396880 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-849fdbdcd5-csmld" podUID="f2ff340b-16ee-4ed6-afb9-848c0501ad98" Oct 27 08:32:18.423861 systemd[1]: Started 
cri-containerd-a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50.scope - libcontainer container a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50. Oct 27 08:32:18.441739 systemd-resolved[1445]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:32:18.475166 containerd[1607]: time="2025-10-27T08:32:18.475121885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pds2z,Uid:4ba4a847-9e38-4bf2-a22e-46c61e29a54b,Namespace:calico-system,Attempt:0,} returns sandbox id \"a82e18745cf41103d2fc9ff830dd03fd098fc2e07342f7de068db6624720dd50\"" Oct 27 08:32:18.476615 containerd[1607]: time="2025-10-27T08:32:18.476587948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:32:18.756007 kubelet[2758]: E1027 08:32:18.755868 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:18.757332 kubelet[2758]: E1027 08:32:18.757272 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-849fdbdcd5-csmld" podUID="f2ff340b-16ee-4ed6-afb9-848c0501ad98" Oct 27 08:32:18.806247 containerd[1607]: time="2025-10-27T08:32:18.806186344Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:18.889848 containerd[1607]: time="2025-10-27T08:32:18.889738212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:32:18.889848 containerd[1607]: time="2025-10-27T08:32:18.889828361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:32:18.890082 kubelet[2758]: E1027 08:32:18.890041 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:32:18.890182 kubelet[2758]: E1027 08:32:18.890098 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:32:18.890336 kubelet[2758]: E1027 08:32:18.890261 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9r84d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pds2z_calico-system(4ba4a847-9e38-4bf2-a22e-46c61e29a54b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:18.891513 kubelet[2758]: E1027 08:32:18.891460 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pds2z" podUID="4ba4a847-9e38-4bf2-a22e-46c61e29a54b" Oct 27 08:32:19.011211 kubelet[2758]: I1027 08:32:19.011144 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2pgd2" podStartSLOduration=43.01112187 podStartE2EDuration="43.01112187s" podCreationTimestamp="2025-10-27 08:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:32:19.010791772 +0000 UTC m=+47.874505008" watchObservedRunningTime="2025-10-27 08:32:19.01112187 +0000 UTC m=+47.874835076" Oct 27 08:32:19.138621 systemd-networkd[1504]: vxlan.calico: Gained IPv6LL Oct 27 08:32:19.255164 containerd[1607]: time="2025-10-27T08:32:19.255105847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxbd2,Uid:7910370a-d2b9-4ee0-8c0a-b904aff5f65a,Namespace:calico-system,Attempt:0,}" Oct 27 08:32:19.255589 containerd[1607]: time="2025-10-27T08:32:19.255237705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85866cf9fb-2xx5v,Uid:29f6d368-bb0a-4633-8401-1b96e3d04052,Namespace:calico-system,Attempt:0,}" Oct 27 08:32:19.255589 containerd[1607]: time="2025-10-27T08:32:19.255105687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7948764c44-6chns,Uid:dbb3c469-8dbc-4871-a711-e7befd27ac29,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:32:19.522767 systemd-networkd[1504]: cali8f8a4e5992d: Gained IPv6LL Oct 27 08:32:19.586579 systemd-networkd[1504]: cali413ef38c447: Gained IPv6LL Oct 27 08:32:19.649906 systemd-networkd[1504]: cali8ad123c058e: Link UP Oct 27 08:32:19.650835 systemd-networkd[1504]: cali8ad123c058e: Gained carrier Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.564 [INFO][4501] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7948764c44--6chns-eth0 calico-apiserver-7948764c44- calico-apiserver dbb3c469-8dbc-4871-a711-e7befd27ac29 834 0 2025-10-27 08:31:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7948764c44 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7948764c44-6chns eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8ad123c058e [] [] }} ContainerID="bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" Namespace="calico-apiserver" Pod="calico-apiserver-7948764c44-6chns" WorkloadEndpoint="localhost-k8s-calico--apiserver--7948764c44--6chns-" Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.565 [INFO][4501] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" Namespace="calico-apiserver" Pod="calico-apiserver-7948764c44-6chns" WorkloadEndpoint="localhost-k8s-calico--apiserver--7948764c44--6chns-eth0" Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.601 [INFO][4543] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" 
HandleID="k8s-pod-network.bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" Workload="localhost-k8s-calico--apiserver--7948764c44--6chns-eth0" Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.601 [INFO][4543] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" HandleID="k8s-pod-network.bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" Workload="localhost-k8s-calico--apiserver--7948764c44--6chns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001384f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7948764c44-6chns", "timestamp":"2025-10-27 08:32:19.601308385 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.601 [INFO][4543] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.601 [INFO][4543] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.601 [INFO][4543] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.610 [INFO][4543] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" host="localhost" Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.615 [INFO][4543] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.619 [INFO][4543] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.621 [INFO][4543] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.622 [INFO][4543] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.622 [INFO][4543] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" host="localhost" Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.624 [INFO][4543] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.634 [INFO][4543] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" host="localhost" Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.643 [INFO][4543] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" host="localhost" Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.643 [INFO][4543] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] 
handle="k8s-pod-network.bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" host="localhost" Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.643 [INFO][4543] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:32:19.665076 containerd[1607]: 2025-10-27 08:32:19.643 [INFO][4543] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" HandleID="k8s-pod-network.bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" Workload="localhost-k8s-calico--apiserver--7948764c44--6chns-eth0" Oct 27 08:32:19.665709 containerd[1607]: 2025-10-27 08:32:19.646 [INFO][4501] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" Namespace="calico-apiserver" Pod="calico-apiserver-7948764c44-6chns" WorkloadEndpoint="localhost-k8s-calico--apiserver--7948764c44--6chns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7948764c44--6chns-eth0", GenerateName:"calico-apiserver-7948764c44-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbb3c469-8dbc-4871-a711-e7befd27ac29", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 31, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7948764c44", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7948764c44-6chns", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ad123c058e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:32:19.665709 containerd[1607]: 2025-10-27 08:32:19.646 [INFO][4501] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" Namespace="calico-apiserver" Pod="calico-apiserver-7948764c44-6chns" WorkloadEndpoint="localhost-k8s-calico--apiserver--7948764c44--6chns-eth0" Oct 27 08:32:19.665709 containerd[1607]: 2025-10-27 08:32:19.646 [INFO][4501] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ad123c058e ContainerID="bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" Namespace="calico-apiserver" Pod="calico-apiserver-7948764c44-6chns" WorkloadEndpoint="localhost-k8s-calico--apiserver--7948764c44--6chns-eth0" Oct 27 08:32:19.665709 containerd[1607]: 2025-10-27 08:32:19.650 [INFO][4501] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" Namespace="calico-apiserver" Pod="calico-apiserver-7948764c44-6chns" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7948764c44--6chns-eth0" Oct 27 08:32:19.665709 containerd[1607]: 2025-10-27 08:32:19.651 [INFO][4501] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" Namespace="calico-apiserver" Pod="calico-apiserver-7948764c44-6chns" WorkloadEndpoint="localhost-k8s-calico--apiserver--7948764c44--6chns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7948764c44--6chns-eth0", GenerateName:"calico-apiserver-7948764c44-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbb3c469-8dbc-4871-a711-e7befd27ac29", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 31, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7948764c44", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd", Pod:"calico-apiserver-7948764c44-6chns", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ad123c058e", MAC:"e6:76:8c:3b:e0:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:32:19.665709 containerd[1607]: 2025-10-27 08:32:19.661 [INFO][4501] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" Namespace="calico-apiserver" Pod="calico-apiserver-7948764c44-6chns" WorkloadEndpoint="localhost-k8s-calico--apiserver--7948764c44--6chns-eth0" Oct 27 08:32:19.696243 containerd[1607]: time="2025-10-27T08:32:19.696183503Z" level=info msg="connecting to shim bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd" address="unix:///run/containerd/s/d4f15f870a118fc7df5bb13cf873eab6ff70358e1b04ac285d8feae16facf576" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:32:19.726466 systemd[1]: Started cri-containerd-bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd.scope - libcontainer container bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd. 
Oct 27 08:32:19.748100 systemd-resolved[1445]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:32:19.753757 systemd-networkd[1504]: caliba6a4a45eb0: Link UP Oct 27 08:32:19.754953 systemd-networkd[1504]: caliba6a4a45eb0: Gained carrier Oct 27 08:32:19.760630 kubelet[2758]: E1027 08:32:19.760292 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:19.762300 kubelet[2758]: E1027 08:32:19.762252 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pds2z" podUID="4ba4a847-9e38-4bf2-a22e-46c61e29a54b" Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.563 [INFO][4499] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wxbd2-eth0 csi-node-driver- calico-system 7910370a-d2b9-4ee0-8c0a-b904aff5f65a 709 0 2025-10-27 08:31:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wxbd2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliba6a4a45eb0 [] [] }} ContainerID="c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" Namespace="calico-system" Pod="csi-node-driver-wxbd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxbd2-" Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.564 [INFO][4499] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" Namespace="calico-system" Pod="csi-node-driver-wxbd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxbd2-eth0" Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.605 [INFO][4536] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" HandleID="k8s-pod-network.c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" Workload="localhost-k8s-csi--node--driver--wxbd2-eth0" Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.605 [INFO][4536] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" HandleID="k8s-pod-network.c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" Workload="localhost-k8s-csi--node--driver--wxbd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b4a10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wxbd2", "timestamp":"2025-10-27 08:32:19.605524355 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.605 [INFO][4536] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.643 [INFO][4536] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.644 [INFO][4536] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.710 [INFO][4536] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" host="localhost" Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.716 [INFO][4536] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.721 [INFO][4536] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.723 [INFO][4536] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.725 [INFO][4536] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.725 [INFO][4536] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" host="localhost" Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.729 [INFO][4536] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.738 [INFO][4536] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" host="localhost" Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.744 [INFO][4536] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" host="localhost" Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.744 [INFO][4536] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" host="localhost" Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.744 [INFO][4536] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:32:19.785837 containerd[1607]: 2025-10-27 08:32:19.744 [INFO][4536] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" HandleID="k8s-pod-network.c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" Workload="localhost-k8s-csi--node--driver--wxbd2-eth0" Oct 27 08:32:19.786467 containerd[1607]: 2025-10-27 08:32:19.749 [INFO][4499] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" Namespace="calico-system" Pod="csi-node-driver-wxbd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxbd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wxbd2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7910370a-d2b9-4ee0-8c0a-b904aff5f65a", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 31, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wxbd2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliba6a4a45eb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:32:19.786467 containerd[1607]: 2025-10-27 08:32:19.749 [INFO][4499] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" Namespace="calico-system" Pod="csi-node-driver-wxbd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxbd2-eth0" Oct 27 08:32:19.786467 containerd[1607]: 2025-10-27 08:32:19.749 [INFO][4499] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba6a4a45eb0 ContainerID="c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" Namespace="calico-system" Pod="csi-node-driver-wxbd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxbd2-eth0" Oct 27 08:32:19.786467 containerd[1607]: 2025-10-27 08:32:19.755 [INFO][4499] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" Namespace="calico-system" Pod="csi-node-driver-wxbd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxbd2-eth0" Oct 27 08:32:19.786467 containerd[1607]: 2025-10-27 08:32:19.756 [INFO][4499] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" Namespace="calico-system" Pod="csi-node-driver-wxbd2" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--wxbd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wxbd2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7910370a-d2b9-4ee0-8c0a-b904aff5f65a", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 31, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f", Pod:"csi-node-driver-wxbd2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliba6a4a45eb0", MAC:"da:94:a9:b6:45:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:32:19.786467 containerd[1607]: 2025-10-27 08:32:19.773 [INFO][4499] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" Namespace="calico-system" Pod="csi-node-driver-wxbd2" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxbd2-eth0" Oct 27 08:32:19.802905 containerd[1607]: time="2025-10-27T08:32:19.802857602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7948764c44-6chns,Uid:dbb3c469-8dbc-4871-a711-e7befd27ac29,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bfc393088385c29c4aa9d965bb24fc1fccaf235e4374d924d9508e40eaa3e5bd\"" Oct 27 08:32:19.804441 containerd[1607]: time="2025-10-27T08:32:19.804385981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:32:19.842635 systemd-networkd[1504]: cali02a5fd344d9: Gained IPv6LL Oct 27 08:32:20.311725 containerd[1607]: time="2025-10-27T08:32:20.311670747Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:20.375186 systemd-networkd[1504]: cali93ed70e7860: Link UP Oct 27 08:32:20.376781 systemd-networkd[1504]: cali93ed70e7860: Gained carrier Oct 27 08:32:20.447522 containerd[1607]: time="2025-10-27T08:32:20.447451927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:32:20.447522 containerd[1607]: time="2025-10-27T08:32:20.447458419Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:32:20.447842 kubelet[2758]: E1027 08:32:20.447795 2758 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:32:20.447922 kubelet[2758]: E1027 08:32:20.447854 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:32:20.448056 kubelet[2758]: E1027 08:32:20.448004 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfhw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7948764c44-6chns_calico-apiserver(dbb3c469-8dbc-4871-a711-e7befd27ac29): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:20.449211 kubelet[2758]: E1027 08:32:20.449162 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7948764c44-6chns" podUID="dbb3c469-8dbc-4871-a711-e7befd27ac29" Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:19.590 [INFO][4519] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--85866cf9fb--2xx5v-eth0 calico-kube-controllers-85866cf9fb- calico-system 29f6d368-bb0a-4633-8401-1b96e3d04052 833 0 2025-10-27 08:31:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85866cf9fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-85866cf9fb-2xx5v eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali93ed70e7860 [] [] }} ContainerID="a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" Namespace="calico-system" Pod="calico-kube-controllers-85866cf9fb-2xx5v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85866cf9fb--2xx5v-" Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:19.590 [INFO][4519] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" Namespace="calico-system" Pod="calico-kube-controllers-85866cf9fb-2xx5v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85866cf9fb--2xx5v-eth0" Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:19.626 [INFO][4554] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" HandleID="k8s-pod-network.a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" Workload="localhost-k8s-calico--kube--controllers--85866cf9fb--2xx5v-eth0" Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:19.626 [INFO][4554] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" HandleID="k8s-pod-network.a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" Workload="localhost-k8s-calico--kube--controllers--85866cf9fb--2xx5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001393f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-85866cf9fb-2xx5v", "timestamp":"2025-10-27 08:32:19.626305424 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:19.626 [INFO][4554] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:19.744 [INFO][4554] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:19.745 [INFO][4554] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:19.810 [INFO][4554] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" host="localhost" Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:19.997 [INFO][4554] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:20.001 [INFO][4554] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:20.002 [INFO][4554] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:20.004 [INFO][4554] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:20.004 [INFO][4554] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" host="localhost" Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:20.005 [INFO][4554] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211 Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:20.025 [INFO][4554] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" host="localhost" Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:20.368 [INFO][4554] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" host="localhost" Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:20.368 [INFO][4554] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" host="localhost" Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:20.368 [INFO][4554] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:32:20.614670 containerd[1607]: 2025-10-27 08:32:20.368 [INFO][4554] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" HandleID="k8s-pod-network.a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" Workload="localhost-k8s-calico--kube--controllers--85866cf9fb--2xx5v-eth0" Oct 27 08:32:20.615366 containerd[1607]: 2025-10-27 08:32:20.372 [INFO][4519] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" Namespace="calico-system" Pod="calico-kube-controllers-85866cf9fb-2xx5v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85866cf9fb--2xx5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85866cf9fb--2xx5v-eth0", GenerateName:"calico-kube-controllers-85866cf9fb-", Namespace:"calico-system", SelfLink:"", UID:"29f6d368-bb0a-4633-8401-1b96e3d04052", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 31, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85866cf9fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-85866cf9fb-2xx5v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali93ed70e7860", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:32:20.615366 containerd[1607]: 2025-10-27 08:32:20.372 [INFO][4519] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" Namespace="calico-system" Pod="calico-kube-controllers-85866cf9fb-2xx5v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85866cf9fb--2xx5v-eth0" Oct 27 08:32:20.615366 containerd[1607]: 2025-10-27 08:32:20.372 [INFO][4519] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93ed70e7860 ContainerID="a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" Namespace="calico-system" Pod="calico-kube-controllers-85866cf9fb-2xx5v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85866cf9fb--2xx5v-eth0" Oct 27 08:32:20.615366 containerd[1607]: 2025-10-27 08:32:20.375 [INFO][4519] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" Namespace="calico-system" Pod="calico-kube-controllers-85866cf9fb-2xx5v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85866cf9fb--2xx5v-eth0" Oct 27 08:32:20.615366 containerd[1607]: 2025-10-27 08:32:20.375 [INFO][4519] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" Namespace="calico-system" Pod="calico-kube-controllers-85866cf9fb-2xx5v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85866cf9fb--2xx5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85866cf9fb--2xx5v-eth0", GenerateName:"calico-kube-controllers-85866cf9fb-", Namespace:"calico-system", SelfLink:"", UID:"29f6d368-bb0a-4633-8401-1b96e3d04052", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 31, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85866cf9fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211", Pod:"calico-kube-controllers-85866cf9fb-2xx5v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali93ed70e7860", MAC:"16:f0:38:3a:d6:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:32:20.615366 containerd[1607]: 2025-10-27 08:32:20.608 [INFO][4519] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" Namespace="calico-system" Pod="calico-kube-controllers-85866cf9fb-2xx5v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85866cf9fb--2xx5v-eth0" Oct 27 08:32:20.713127 containerd[1607]: time="2025-10-27T08:32:20.713074769Z" level=info msg="connecting to shim c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f" address="unix:///run/containerd/s/15ce3fea3064c9b83c907acd18400311c657a49e717b461274d6b8f9934f2b54" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:32:20.738528 systemd[1]: Started cri-containerd-c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f.scope - libcontainer container c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f. 
Oct 27 08:32:20.753585 systemd-resolved[1445]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:32:20.762038 kubelet[2758]: E1027 08:32:20.762000 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:20.764817 kubelet[2758]: E1027 08:32:20.763923 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7948764c44-6chns" podUID="dbb3c469-8dbc-4871-a711-e7befd27ac29" Oct 27 08:32:20.816379 containerd[1607]: time="2025-10-27T08:32:20.816314284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxbd2,Uid:7910370a-d2b9-4ee0-8c0a-b904aff5f65a,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7395f27aa487b9223899982920ea9f7c1e94c2989f28515b714fa32ba0c845f\"" Oct 27 08:32:20.817684 containerd[1607]: time="2025-10-27T08:32:20.817558602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:32:20.857062 containerd[1607]: time="2025-10-27T08:32:20.857016532Z" level=info msg="connecting to shim a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211" address="unix:///run/containerd/s/de862e50ebcc803f8d852b4cb39361e6d6e90fb2e976dff4258c9562bf36e948" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:32:20.866894 systemd-networkd[1504]: cali8ad123c058e: Gained IPv6LL Oct 27 08:32:20.884535 systemd[1]: Started cri-containerd-a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211.scope - libcontainer container a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211. 
Oct 27 08:32:20.898461 systemd-resolved[1445]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:32:20.930581 systemd-networkd[1504]: caliba6a4a45eb0: Gained IPv6LL Oct 27 08:32:20.970859 containerd[1607]: time="2025-10-27T08:32:20.970814387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85866cf9fb-2xx5v,Uid:29f6d368-bb0a-4633-8401-1b96e3d04052,Namespace:calico-system,Attempt:0,} returns sandbox id \"a28d08033230ef2c8a65da98505f1fdd2045a4f99bb6bc3486fb7b6db9386211\"" Oct 27 08:32:21.254878 kubelet[2758]: E1027 08:32:21.254568 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:21.256124 containerd[1607]: time="2025-10-27T08:32:21.255169233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qxbwc,Uid:0ea1d490-08d4-4317-a16f-2f9d40de2cf9,Namespace:kube-system,Attempt:0,}" Oct 27 08:32:21.564951 containerd[1607]: time="2025-10-27T08:32:21.564893663Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:21.570544 systemd-networkd[1504]: cali93ed70e7860: Gained IPv6LL Oct 27 08:32:21.593125 containerd[1607]: time="2025-10-27T08:32:21.593062069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:32:21.593125 containerd[1607]: time="2025-10-27T08:32:21.593111251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:32:21.593341 kubelet[2758]: E1027 08:32:21.593293 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:32:21.593426 kubelet[2758]: E1027 08:32:21.593343 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:32:21.593719 kubelet[2758]: E1027 08:32:21.593578 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbxkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wxbd2_calico-system(7910370a-d2b9-4ee0-8c0a-b904aff5f65a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:21.593849 containerd[1607]: time="2025-10-27T08:32:21.593738445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:32:21.620152 systemd-networkd[1504]: califc68c43befc: Link UP Oct 27 08:32:21.620402 systemd-networkd[1504]: califc68c43befc: Gained carrier Oct 27 08:32:21.765083 kubelet[2758]: E1027 08:32:21.765023 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7948764c44-6chns" podUID="dbb3c469-8dbc-4871-a711-e7befd27ac29" Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.376 [INFO][4732] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--qxbwc-eth0 coredns-668d6bf9bc- kube-system 0ea1d490-08d4-4317-a16f-2f9d40de2cf9 835 0 2025-10-27 08:31:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost 
coredns-668d6bf9bc-qxbwc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califc68c43befc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" Namespace="kube-system" Pod="coredns-668d6bf9bc-qxbwc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qxbwc-" Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.376 [INFO][4732] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" Namespace="kube-system" Pod="coredns-668d6bf9bc-qxbwc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qxbwc-eth0" Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.432 [INFO][4746] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" HandleID="k8s-pod-network.4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" Workload="localhost-k8s-coredns--668d6bf9bc--qxbwc-eth0" Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.432 [INFO][4746] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" HandleID="k8s-pod-network.4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" Workload="localhost-k8s-coredns--668d6bf9bc--qxbwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000580800), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-qxbwc", "timestamp":"2025-10-27 08:32:21.432089477 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.432 [INFO][4746] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.432 [INFO][4746] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.432 [INFO][4746] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.439 [INFO][4746] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" host="localhost" Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.442 [INFO][4746] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.446 [INFO][4746] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.448 [INFO][4746] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.450 [INFO][4746] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.450 [INFO][4746] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" host="localhost" Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.451 [INFO][4746] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27 Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.549 [INFO][4746] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" host="localhost" Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.614 [INFO][4746] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" host="localhost" Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.614 [INFO][4746] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" host="localhost" Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.614 [INFO][4746] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:32:21.825523 containerd[1607]: 2025-10-27 08:32:21.614 [INFO][4746] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" HandleID="k8s-pod-network.4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" Workload="localhost-k8s-coredns--668d6bf9bc--qxbwc-eth0" Oct 27 08:32:21.826023 containerd[1607]: 2025-10-27 08:32:21.618 [INFO][4732] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" Namespace="kube-system" Pod="coredns-668d6bf9bc-qxbwc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qxbwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qxbwc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0ea1d490-08d4-4317-a16f-2f9d40de2cf9", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-qxbwc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc68c43befc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:32:21.826023 containerd[1607]: 2025-10-27 08:32:21.618 [INFO][4732] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" Namespace="kube-system" Pod="coredns-668d6bf9bc-qxbwc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qxbwc-eth0" Oct 27 08:32:21.826023 containerd[1607]: 2025-10-27 08:32:21.618 [INFO][4732] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc68c43befc ContainerID="4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" Namespace="kube-system" Pod="coredns-668d6bf9bc-qxbwc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qxbwc-eth0" Oct 27 08:32:21.826023 containerd[1607]: 2025-10-27 08:32:21.620 [INFO][4732] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" Namespace="kube-system" Pod="coredns-668d6bf9bc-qxbwc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qxbwc-eth0" Oct 27 08:32:21.826023 
containerd[1607]: 2025-10-27 08:32:21.621 [INFO][4732] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" Namespace="kube-system" Pod="coredns-668d6bf9bc-qxbwc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qxbwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qxbwc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0ea1d490-08d4-4317-a16f-2f9d40de2cf9", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27", Pod:"coredns-668d6bf9bc-qxbwc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc68c43befc", MAC:"d2:d2:25:31:bf:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:32:21.826023 containerd[1607]: 2025-10-27 08:32:21.821 [INFO][4732] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" Namespace="kube-system" Pod="coredns-668d6bf9bc-qxbwc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qxbwc-eth0" Oct 27 08:32:21.993165 containerd[1607]: time="2025-10-27T08:32:21.993098907Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:22.098434 containerd[1607]: time="2025-10-27T08:32:22.098257114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:32:22.098705 kubelet[2758]: E1027 08:32:22.098641 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 
08:32:22.098855 kubelet[2758]: E1027 08:32:22.098720 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:32:22.099441 kubelet[2758]: E1027 08:32:22.099043 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8vzb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85866cf9fb-2xx5v_calico-system(29f6d368-bb0a-4633-8401-1b96e3d04052): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:22.100507 kubelet[2758]: E1027 08:32:22.100462 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85866cf9fb-2xx5v" podUID="29f6d368-bb0a-4633-8401-1b96e3d04052" Oct 27 08:32:22.107996 containerd[1607]: time="2025-10-27T08:32:22.098280759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:32:22.108062 containerd[1607]: time="2025-10-27T08:32:22.099377030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:32:22.155557 containerd[1607]: time="2025-10-27T08:32:22.155513650Z" level=info msg="connecting to shim 4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27" address="unix:///run/containerd/s/494c540363c4006ef658606627f853bdfa1b4488b91c53f7683e3dd21620fc70" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:32:22.198579 systemd[1]: Started cri-containerd-4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27.scope - libcontainer container 4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27. Oct 27 08:32:22.212988 systemd-resolved[1445]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:32:22.255117 containerd[1607]: time="2025-10-27T08:32:22.255074249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7948764c44-vhjsc,Uid:155c9352-5d7b-4c12-ac76-f7ec57d8dc42,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:32:22.261922 containerd[1607]: time="2025-10-27T08:32:22.261881512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qxbwc,Uid:0ea1d490-08d4-4317-a16f-2f9d40de2cf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27\"" Oct 27 08:32:22.262575 kubelet[2758]: E1027 08:32:22.262547 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:22.264036 containerd[1607]: time="2025-10-27T08:32:22.264011149Z" level=info msg="CreateContainer within sandbox \"4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 08:32:22.432336 containerd[1607]: time="2025-10-27T08:32:22.431875555Z" level=info msg="Container 65eff03ebd864009dde99d3309e88444374879e68a14b817919087c4d8d04bc6: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:32:22.447041 containerd[1607]: time="2025-10-27T08:32:22.446975963Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:22.486992 containerd[1607]: time="2025-10-27T08:32:22.486927343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:32:22.487122 containerd[1607]: time="2025-10-27T08:32:22.487003416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
active requests=0, bytes read=93" Oct 27 08:32:22.487222 kubelet[2758]: E1027 08:32:22.487172 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:32:22.487268 kubelet[2758]: E1027 08:32:22.487229 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:32:22.487374 kubelet[2758]: E1027 08:32:22.487339 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbxkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wxbd2_calico-system(7910370a-d2b9-4ee0-8c0a-b904aff5f65a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:22.488846 kubelet[2758]: E1027 08:32:22.488794 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wxbd2" podUID="7910370a-d2b9-4ee0-8c0a-b904aff5f65a" Oct 27 08:32:22.512528 containerd[1607]: time="2025-10-27T08:32:22.512488084Z" level=info msg="CreateContainer within sandbox \"4816faf5d5f855e055bd423ad4961c8f792dfa2bcaead7a9d6227601537d9d27\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"65eff03ebd864009dde99d3309e88444374879e68a14b817919087c4d8d04bc6\"" Oct 27 08:32:22.512941 containerd[1607]: time="2025-10-27T08:32:22.512915242Z" level=info msg="StartContainer for \"65eff03ebd864009dde99d3309e88444374879e68a14b817919087c4d8d04bc6\"" Oct 27 08:32:22.513815 containerd[1607]: time="2025-10-27T08:32:22.513785532Z" level=info msg="connecting to shim 65eff03ebd864009dde99d3309e88444374879e68a14b817919087c4d8d04bc6" address="unix:///run/containerd/s/494c540363c4006ef658606627f853bdfa1b4488b91c53f7683e3dd21620fc70" protocol=ttrpc version=3 Oct 27 08:32:22.528287 systemd-networkd[1504]: cali3f6903fb2e0: Link UP Oct 27 08:32:22.529208 systemd-networkd[1504]: cali3f6903fb2e0: Gained carrier Oct 27 08:32:22.535550 systemd[1]: Started cri-containerd-65eff03ebd864009dde99d3309e88444374879e68a14b817919087c4d8d04bc6.scope - libcontainer container 65eff03ebd864009dde99d3309e88444374879e68a14b817919087c4d8d04bc6. 
Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.386 [INFO][4816] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7948764c44--vhjsc-eth0 calico-apiserver-7948764c44- calico-apiserver 155c9352-5d7b-4c12-ac76-f7ec57d8dc42 825 0 2025-10-27 08:31:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7948764c44 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7948764c44-vhjsc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3f6903fb2e0 [] [] }} ContainerID="992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" Namespace="calico-apiserver" Pod="calico-apiserver-7948764c44-vhjsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7948764c44--vhjsc-" Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.386 [INFO][4816] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" Namespace="calico-apiserver" Pod="calico-apiserver-7948764c44-vhjsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7948764c44--vhjsc-eth0" Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.409 [INFO][4830] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" HandleID="k8s-pod-network.992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" Workload="localhost-k8s-calico--apiserver--7948764c44--vhjsc-eth0" Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.409 [INFO][4830] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" HandleID="k8s-pod-network.992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" Workload="localhost-k8s-calico--apiserver--7948764c44--vhjsc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7080), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7948764c44-vhjsc", "timestamp":"2025-10-27 08:32:22.40899976 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.409 [INFO][4830] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.409 [INFO][4830] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.409 [INFO][4830] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.426 [INFO][4830] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" host="localhost" Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.434 [INFO][4830] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.438 [INFO][4830] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.439 [INFO][4830] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.441 [INFO][4830] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.441 [INFO][4830] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" host="localhost" Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.442 [INFO][4830] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786 Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.456 [INFO][4830] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" host="localhost" Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.519 [INFO][4830] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" host="localhost" Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.519 [INFO][4830] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" host="localhost" Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.520 [INFO][4830] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:32:22.584269 containerd[1607]: 2025-10-27 08:32:22.520 [INFO][4830] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" HandleID="k8s-pod-network.992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" Workload="localhost-k8s-calico--apiserver--7948764c44--vhjsc-eth0" Oct 27 08:32:22.585199 containerd[1607]: 2025-10-27 08:32:22.523 [INFO][4816] cni-plugin/k8s.go 418: Populated endpoint ContainerID="992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" Namespace="calico-apiserver" Pod="calico-apiserver-7948764c44-vhjsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7948764c44--vhjsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7948764c44--vhjsc-eth0", GenerateName:"calico-apiserver-7948764c44-", Namespace:"calico-apiserver", SelfLink:"", UID:"155c9352-5d7b-4c12-ac76-f7ec57d8dc42", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 31, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7948764c44", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7948764c44-vhjsc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f6903fb2e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:32:22.585199 containerd[1607]: 2025-10-27 08:32:22.523 [INFO][4816] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" Namespace="calico-apiserver" Pod="calico-apiserver-7948764c44-vhjsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7948764c44--vhjsc-eth0" Oct 27 08:32:22.585199 containerd[1607]: 2025-10-27 08:32:22.523 [INFO][4816] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f6903fb2e0 ContainerID="992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" Namespace="calico-apiserver" Pod="calico-apiserver-7948764c44-vhjsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7948764c44--vhjsc-eth0" Oct 27 08:32:22.585199 containerd[1607]: 2025-10-27 08:32:22.529 [INFO][4816] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" Namespace="calico-apiserver" Pod="calico-apiserver-7948764c44-vhjsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7948764c44--vhjsc-eth0" Oct 27 08:32:22.585199 containerd[1607]: 2025-10-27 08:32:22.530 [INFO][4816] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" Namespace="calico-apiserver" Pod="calico-apiserver-7948764c44-vhjsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7948764c44--vhjsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7948764c44--vhjsc-eth0", GenerateName:"calico-apiserver-7948764c44-", Namespace:"calico-apiserver", SelfLink:"", UID:"155c9352-5d7b-4c12-ac76-f7ec57d8dc42", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 31, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7948764c44", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786", Pod:"calico-apiserver-7948764c44-vhjsc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f6903fb2e0", MAC:"76:58:65:ca:b4:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:32:22.585199 containerd[1607]: 2025-10-27 08:32:22.579 [INFO][4816] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" Namespace="calico-apiserver" Pod="calico-apiserver-7948764c44-vhjsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7948764c44--vhjsc-eth0" Oct 27 08:32:22.590455 containerd[1607]: time="2025-10-27T08:32:22.590307202Z" level=info msg="StartContainer for \"65eff03ebd864009dde99d3309e88444374879e68a14b817919087c4d8d04bc6\" returns successfully" Oct 27 08:32:22.766156 containerd[1607]: time="2025-10-27T08:32:22.766109638Z" level=info msg="connecting to shim 992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786" address="unix:///run/containerd/s/c2d67a493046cae63edf94ad8b3241c495989f8f5be9190caed2cedfb6c796f2" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:32:22.769939 kubelet[2758]: E1027 08:32:22.769686 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:22.770382 kubelet[2758]: E1027 08:32:22.770343 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85866cf9fb-2xx5v" 
podUID="29f6d368-bb0a-4633-8401-1b96e3d04052" Oct 27 08:32:22.771609 kubelet[2758]: E1027 08:32:22.771573 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wxbd2" podUID="7910370a-d2b9-4ee0-8c0a-b904aff5f65a" Oct 27 08:32:22.796551 systemd[1]: Started cri-containerd-992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786.scope - libcontainer container 992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786. Oct 27 08:32:22.809300 systemd-resolved[1445]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:32:22.866063 containerd[1607]: time="2025-10-27T08:32:22.866010245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7948764c44-vhjsc,Uid:155c9352-5d7b-4c12-ac76-f7ec57d8dc42,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"992e6c47730cb089f8882db59e43b9ae2e4279ba2075ed4e07519f0718d97786\"" Oct 27 08:32:22.869029 containerd[1607]: time="2025-10-27T08:32:22.869000983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:32:22.950586 kubelet[2758]: I1027 08:32:22.950528 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qxbwc" podStartSLOduration=46.950509318 podStartE2EDuration="46.950509318s" podCreationTimestamp="2025-10-27 08:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:32:22.868488974 +0000 UTC m=+51.732202200" watchObservedRunningTime="2025-10-27 08:32:22.950509318 +0000 UTC m=+51.814222524" Oct 27 08:32:22.997976 systemd[1]: Started sshd@9-10.0.0.134:22-10.0.0.1:48424.service - OpenSSH per-connection server daemon (10.0.0.1:48424). Oct 27 08:32:23.043318 sshd[4931]: Accepted publickey for core from 10.0.0.1 port 48424 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:32:23.045351 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:23.049792 systemd-logind[1577]: New session 10 of user core. Oct 27 08:32:23.063552 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 27 08:32:23.235463 sshd[4940]: Connection closed by 10.0.0.1 port 48424 Oct 27 08:32:23.235883 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:23.240745 systemd[1]: sshd@9-10.0.0.134:22-10.0.0.1:48424.service: Deactivated successfully. Oct 27 08:32:23.242860 systemd[1]: session-10.scope: Deactivated successfully. Oct 27 08:32:23.243619 systemd-logind[1577]: Session 10 logged out. 
Waiting for processes to exit. Oct 27 08:32:23.244679 systemd-logind[1577]: Removed session 10. Oct 27 08:32:23.363546 systemd-networkd[1504]: califc68c43befc: Gained IPv6LL Oct 27 08:32:23.372450 containerd[1607]: time="2025-10-27T08:32:23.369580267Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:23.438102 containerd[1607]: time="2025-10-27T08:32:23.438013598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:32:23.438268 containerd[1607]: time="2025-10-27T08:32:23.438043955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:32:23.439635 kubelet[2758]: E1027 08:32:23.439578 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:32:23.439736 kubelet[2758]: E1027 08:32:23.439650 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:32:23.439827 kubelet[2758]: E1027 08:32:23.439787 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7njj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7948764c44-vhjsc_calico-apiserver(155c9352-5d7b-4c12-ac76-f7ec57d8dc42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:23.440977 kubelet[2758]: E1027 08:32:23.440945 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7948764c44-vhjsc" podUID="155c9352-5d7b-4c12-ac76-f7ec57d8dc42" Oct 27 08:32:23.772955 kubelet[2758]: E1027 08:32:23.772904 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:23.772955 kubelet[2758]: E1027 08:32:23.773078 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7948764c44-vhjsc" podUID="155c9352-5d7b-4c12-ac76-f7ec57d8dc42" Oct 27 08:32:24.578633 systemd-networkd[1504]: cali3f6903fb2e0: Gained IPv6LL Oct 27 08:32:24.774594 kubelet[2758]: E1027 08:32:24.774549 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:24.775264 kubelet[2758]: E1027 08:32:24.775217 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-7948764c44-vhjsc" podUID="155c9352-5d7b-4c12-ac76-f7ec57d8dc42" Oct 27 08:32:28.246539 systemd[1]: Started sshd@10-10.0.0.134:22-10.0.0.1:48426.service - OpenSSH per-connection server daemon (10.0.0.1:48426). Oct 27 08:32:28.304607 sshd[4969]: Accepted publickey for core from 10.0.0.1 port 48426 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:32:28.305911 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:28.310212 systemd-logind[1577]: New session 11 of user core. Oct 27 08:32:28.316534 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 27 08:32:28.425485 sshd[4972]: Connection closed by 10.0.0.1 port 48426 Oct 27 08:32:28.425789 sshd-session[4969]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:28.434114 systemd[1]: sshd@10-10.0.0.134:22-10.0.0.1:48426.service: Deactivated successfully. Oct 27 08:32:28.436018 systemd[1]: session-11.scope: Deactivated successfully. Oct 27 08:32:28.436897 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. Oct 27 08:32:28.439987 systemd[1]: Started sshd@11-10.0.0.134:22-10.0.0.1:48436.service - OpenSSH per-connection server daemon (10.0.0.1:48436). Oct 27 08:32:28.440622 systemd-logind[1577]: Removed session 11. Oct 27 08:32:28.491857 sshd[4987]: Accepted publickey for core from 10.0.0.1 port 48436 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:32:28.493047 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:28.497305 systemd-logind[1577]: New session 12 of user core. Oct 27 08:32:28.504556 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 27 08:32:28.655845 sshd[4990]: Connection closed by 10.0.0.1 port 48436 Oct 27 08:32:28.656334 sshd-session[4987]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:28.664348 systemd[1]: sshd@11-10.0.0.134:22-10.0.0.1:48436.service: Deactivated successfully. Oct 27 08:32:28.666904 systemd[1]: session-12.scope: Deactivated successfully. Oct 27 08:32:28.668191 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. Oct 27 08:32:28.672336 systemd[1]: Started sshd@12-10.0.0.134:22-10.0.0.1:48452.service - OpenSSH per-connection server daemon (10.0.0.1:48452). Oct 27 08:32:28.673312 systemd-logind[1577]: Removed session 12. Oct 27 08:32:28.737113 sshd[5001]: Accepted publickey for core from 10.0.0.1 port 48452 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:32:28.738497 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:28.743161 systemd-logind[1577]: New session 13 of user core. Oct 27 08:32:28.759545 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 27 08:32:28.882944 sshd[5004]: Connection closed by 10.0.0.1 port 48452 Oct 27 08:32:28.883243 sshd-session[5001]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:28.888012 systemd[1]: sshd@12-10.0.0.134:22-10.0.0.1:48452.service: Deactivated successfully. Oct 27 08:32:28.890117 systemd[1]: session-13.scope: Deactivated successfully. Oct 27 08:32:28.890824 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit. Oct 27 08:32:28.891853 systemd-logind[1577]: Removed session 13. 
Oct 27 08:32:30.255236 containerd[1607]: time="2025-10-27T08:32:30.255124326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:32:30.600748 containerd[1607]: time="2025-10-27T08:32:30.600688785Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:30.601954 containerd[1607]: time="2025-10-27T08:32:30.601914191Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:32:30.602023 containerd[1607]: time="2025-10-27T08:32:30.601961630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:32:30.602168 kubelet[2758]: E1027 08:32:30.602119 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:32:30.602521 kubelet[2758]: E1027 08:32:30.602177 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:32:30.602521 kubelet[2758]: E1027 08:32:30.602294 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:80a7b8633959402db64dbfcc329ddd47,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xg9l5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-849fdbdcd5-csmld_calico-system(f2ff340b-16ee-4ed6-afb9-848c0501ad98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:30.604222 containerd[1607]: time="2025-10-27T08:32:30.604193460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 08:32:30.969438 containerd[1607]: time="2025-10-27T08:32:30.969261243Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:30.970606 containerd[1607]: time="2025-10-27T08:32:30.970571077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:32:30.970670 containerd[1607]: time="2025-10-27T08:32:30.970646258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:32:30.970813 kubelet[2758]: E1027 08:32:30.970761 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:32:30.970813 kubelet[2758]: E1027 08:32:30.970811 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:32:30.970958 kubelet[2758]: E1027 08:32:30.970916 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xg9l5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-849fdbdcd5-csmld_calico-system(f2ff340b-16ee-4ed6-afb9-848c0501ad98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:30.972152 kubelet[2758]: E1027 08:32:30.972103 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-849fdbdcd5-csmld" podUID="f2ff340b-16ee-4ed6-afb9-848c0501ad98" Oct 27 08:32:33.255639 containerd[1607]: time="2025-10-27T08:32:33.255579343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:32:33.887857 containerd[1607]: time="2025-10-27T08:32:33.887812051Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:33.890224 containerd[1607]: time="2025-10-27T08:32:33.890191604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:32:33.890285 containerd[1607]: time="2025-10-27T08:32:33.890257362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:32:33.890385 kubelet[2758]: E1027 08:32:33.890349 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:32:33.890757 kubelet[2758]: E1027 08:32:33.890394 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:32:33.890757 kubelet[2758]: E1027 08:32:33.890534 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9r84d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pds2z_calico-system(4ba4a847-9e38-4bf2-a22e-46c61e29a54b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:33.891712 kubelet[2758]: E1027 08:32:33.891672 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pds2z" podUID="4ba4a847-9e38-4bf2-a22e-46c61e29a54b" Oct 27 08:32:33.899834 systemd[1]: Started sshd@13-10.0.0.134:22-10.0.0.1:58524.service - OpenSSH per-connection server daemon (10.0.0.1:58524). Oct 27 08:32:33.954754 sshd[5025]: Accepted publickey for core from 10.0.0.1 port 58524 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:32:33.956232 sshd-session[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:33.960435 systemd-logind[1577]: New session 14 of user core. Oct 27 08:32:33.967554 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 27 08:32:34.083188 sshd[5028]: Connection closed by 10.0.0.1 port 58524 Oct 27 08:32:34.083492 sshd-session[5025]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:34.087623 systemd[1]: sshd@13-10.0.0.134:22-10.0.0.1:58524.service: Deactivated successfully. Oct 27 08:32:34.089738 systemd[1]: session-14.scope: Deactivated successfully. Oct 27 08:32:34.090550 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit. Oct 27 08:32:34.091651 systemd-logind[1577]: Removed session 14. 
Oct 27 08:32:34.255596 containerd[1607]: time="2025-10-27T08:32:34.255464587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:32:34.604718 containerd[1607]: time="2025-10-27T08:32:34.604648729Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:34.605974 containerd[1607]: time="2025-10-27T08:32:34.605936949Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:32:34.606046 containerd[1607]: time="2025-10-27T08:32:34.606022954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:32:34.606233 kubelet[2758]: E1027 08:32:34.606190 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:32:34.606291 kubelet[2758]: E1027 08:32:34.606251 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:32:34.606469 kubelet[2758]: E1027 08:32:34.606403 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfhw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7948764c44-6chns_calico-apiserver(dbb3c469-8dbc-4871-a711-e7befd27ac29): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:34.607646 kubelet[2758]: E1027 08:32:34.607601 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7948764c44-6chns" podUID="dbb3c469-8dbc-4871-a711-e7befd27ac29" Oct 27 08:32:36.255484 containerd[1607]: time="2025-10-27T08:32:36.255430109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:32:36.872539 containerd[1607]: time="2025-10-27T08:32:36.872458119Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:36.873656 containerd[1607]: time="2025-10-27T08:32:36.873602447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:32:36.873823 containerd[1607]: time="2025-10-27T08:32:36.873698730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:32:36.873849 kubelet[2758]: E1027 08:32:36.873804 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:32:36.874162 kubelet[2758]: E1027 08:32:36.873866 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:32:36.874162 kubelet[2758]: E1027 08:32:36.874027 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8vzb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85866cf9fb-2xx5v_calico-system(29f6d368-bb0a-4633-8401-1b96e3d04052): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:36.876145 kubelet[2758]: E1027 08:32:36.876108 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-85866cf9fb-2xx5v" podUID="29f6d368-bb0a-4633-8401-1b96e3d04052" Oct 27 08:32:37.257660 containerd[1607]: time="2025-10-27T08:32:37.257563841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:32:37.614853 containerd[1607]: time="2025-10-27T08:32:37.614724236Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:37.615847 containerd[1607]: time="2025-10-27T08:32:37.615810371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:32:37.615905 containerd[1607]: time="2025-10-27T08:32:37.615894693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:32:37.616094 kubelet[2758]: E1027 08:32:37.616047 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:32:37.616139 kubelet[2758]: E1027 08:32:37.616107 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:32:37.616267 kubelet[2758]: E1027 08:32:37.616231 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbxkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wxbd2_calico-system(7910370a-d2b9-4ee0-8c0a-b904aff5f65a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:37.618222 containerd[1607]: time="2025-10-27T08:32:37.618198831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:32:37.984787 containerd[1607]: time="2025-10-27T08:32:37.984646697Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:37.985929 containerd[1607]: time="2025-10-27T08:32:37.985875679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:32:37.985976 containerd[1607]: time="2025-10-27T08:32:37.985942961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:32:37.986089 kubelet[2758]: E1027 08:32:37.986051 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:32:37.986439 kubelet[2758]: E1027 08:32:37.986095 2758 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:32:37.986439 kubelet[2758]: E1027 08:32:37.986203 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbxkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wxbd2_calico-system(7910370a-d2b9-4ee0-8c0a-b904aff5f65a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:37.987402 kubelet[2758]: E1027 08:32:37.987358 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-wxbd2" podUID="7910370a-d2b9-4ee0-8c0a-b904aff5f65a" Oct 27 08:32:39.098892 systemd[1]: Started sshd@14-10.0.0.134:22-10.0.0.1:58536.service - OpenSSH per-connection server daemon (10.0.0.1:58536). Oct 27 08:32:39.138403 sshd[5050]: Accepted publickey for core from 10.0.0.1 port 58536 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:32:39.139661 sshd-session[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:39.144236 systemd-logind[1577]: New session 15 of user core. Oct 27 08:32:39.157570 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 27 08:32:39.255093 containerd[1607]: time="2025-10-27T08:32:39.255052231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:32:39.276451 sshd[5053]: Connection closed by 10.0.0.1 port 58536 Oct 27 08:32:39.276562 sshd-session[5050]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:39.280525 systemd[1]: sshd@14-10.0.0.134:22-10.0.0.1:58536.service: Deactivated successfully. Oct 27 08:32:39.282505 systemd[1]: session-15.scope: Deactivated successfully. Oct 27 08:32:39.283241 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit. Oct 27 08:32:39.284373 systemd-logind[1577]: Removed session 15. Oct 27 08:32:39.567609 containerd[1607]: time="2025-10-27T08:32:39.567535679Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:39.580657 containerd[1607]: time="2025-10-27T08:32:39.580581045Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:32:39.580827 containerd[1607]: time="2025-10-27T08:32:39.580658204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:32:39.580927 kubelet[2758]: E1027 08:32:39.580862 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:32:39.580927 kubelet[2758]: E1027 08:32:39.580917 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:32:39.581321 kubelet[2758]: E1027 08:32:39.581053 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7njj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7948764c44-vhjsc_calico-apiserver(155c9352-5d7b-4c12-ac76-f7ec57d8dc42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:39.582247 kubelet[2758]: E1027 08:32:39.582206 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7948764c44-vhjsc" podUID="155c9352-5d7b-4c12-ac76-f7ec57d8dc42" Oct 27 08:32:42.255122 kubelet[2758]: E1027 08:32:42.255065 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-849fdbdcd5-csmld" podUID="f2ff340b-16ee-4ed6-afb9-848c0501ad98" Oct 27 08:32:43.255093 kubelet[2758]: E1027 08:32:43.255030 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:44.293432 systemd[1]: Started sshd@15-10.0.0.134:22-10.0.0.1:43068.service - OpenSSH per-connection server daemon (10.0.0.1:43068). Oct 27 08:32:44.351314 sshd[5068]: Accepted publickey for core from 10.0.0.1 port 43068 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:32:44.352821 sshd-session[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:44.357386 systemd-logind[1577]: New session 16 of user core. Oct 27 08:32:44.372568 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 27 08:32:44.493082 sshd[5071]: Connection closed by 10.0.0.1 port 43068 Oct 27 08:32:44.493388 sshd-session[5068]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:44.498150 systemd[1]: sshd@15-10.0.0.134:22-10.0.0.1:43068.service: Deactivated successfully. Oct 27 08:32:44.500372 systemd[1]: session-16.scope: Deactivated successfully. Oct 27 08:32:44.501265 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit. Oct 27 08:32:44.502652 systemd-logind[1577]: Removed session 16. Oct 27 08:32:45.255506 kubelet[2758]: E1027 08:32:45.255457 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pds2z" podUID="4ba4a847-9e38-4bf2-a22e-46c61e29a54b" Oct 27 08:32:47.838380 containerd[1607]: time="2025-10-27T08:32:47.838331144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"386840f37713bcbc7dffc61f8593ec36ea759e2185bbca2faf04e5febe4d307d\" id:\"75141bf76c9507d804b1690f2cde0c4413ecd07bc258c30b9dfb69c9c4e944c5\" pid:5096 exited_at:{seconds:1761553967 nanos:838000092}" Oct 27 08:32:47.840494 kubelet[2758]: E1027 08:32:47.840444 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:49.254602 kubelet[2758]: E1027 08:32:49.254556 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85866cf9fb-2xx5v" podUID="29f6d368-bb0a-4633-8401-1b96e3d04052" Oct 27 08:32:49.509441 systemd[1]: Started 
sshd@16-10.0.0.134:22-10.0.0.1:43078.service - OpenSSH per-connection server daemon (10.0.0.1:43078). Oct 27 08:32:49.582931 sshd[5110]: Accepted publickey for core from 10.0.0.1 port 43078 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:32:49.584345 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:49.588399 systemd-logind[1577]: New session 17 of user core. Oct 27 08:32:49.596532 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 27 08:32:49.723824 sshd[5113]: Connection closed by 10.0.0.1 port 43078 Oct 27 08:32:49.724169 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:49.734031 systemd[1]: sshd@16-10.0.0.134:22-10.0.0.1:43078.service: Deactivated successfully. Oct 27 08:32:49.735762 systemd[1]: session-17.scope: Deactivated successfully. Oct 27 08:32:49.736535 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit. Oct 27 08:32:49.738903 systemd[1]: Started sshd@17-10.0.0.134:22-10.0.0.1:43086.service - OpenSSH per-connection server daemon (10.0.0.1:43086). Oct 27 08:32:49.739590 systemd-logind[1577]: Removed session 17. Oct 27 08:32:49.787893 sshd[5126]: Accepted publickey for core from 10.0.0.1 port 43086 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:32:49.789152 sshd-session[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:49.793262 systemd-logind[1577]: New session 18 of user core. Oct 27 08:32:49.804549 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 27 08:32:50.010143 sshd[5129]: Connection closed by 10.0.0.1 port 43086 Oct 27 08:32:50.010504 sshd-session[5126]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:50.020278 systemd[1]: sshd@17-10.0.0.134:22-10.0.0.1:43086.service: Deactivated successfully. Oct 27 08:32:50.022357 systemd[1]: session-18.scope: Deactivated successfully. Oct 27 08:32:50.023123 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit. Oct 27 08:32:50.026438 systemd[1]: Started sshd@18-10.0.0.134:22-10.0.0.1:43088.service - OpenSSH per-connection server daemon (10.0.0.1:43088). Oct 27 08:32:50.027023 systemd-logind[1577]: Removed session 18. Oct 27 08:32:50.078544 sshd[5141]: Accepted publickey for core from 10.0.0.1 port 43088 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:32:50.079813 sshd-session[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:50.084069 systemd-logind[1577]: New session 19 of user core. Oct 27 08:32:50.093530 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 27 08:32:50.256441 kubelet[2758]: E1027 08:32:50.255487 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7948764c44-6chns" podUID="dbb3c469-8dbc-4871-a711-e7befd27ac29" Oct 27 08:32:50.578257 sshd[5144]: Connection closed by 10.0.0.1 port 43088 Oct 27 08:32:50.579534 sshd-session[5141]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:50.590792 systemd[1]: sshd@18-10.0.0.134:22-10.0.0.1:43088.service: Deactivated successfully. Oct 27 08:32:50.592738 systemd[1]: session-19.scope: Deactivated successfully. Oct 27 08:32:50.594473 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit. Oct 27 08:32:50.597346 systemd-logind[1577]: Removed session 19. Oct 27 08:32:50.600635 systemd[1]: Started sshd@19-10.0.0.134:22-10.0.0.1:43100.service - OpenSSH per-connection server daemon (10.0.0.1:43100). Oct 27 08:32:50.648361 sshd[5163]: Accepted publickey for core from 10.0.0.1 port 43100 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:32:50.649868 sshd-session[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:50.654287 systemd-logind[1577]: New session 20 of user core. Oct 27 08:32:50.664558 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 27 08:32:50.887079 sshd[5167]: Connection closed by 10.0.0.1 port 43100 Oct 27 08:32:50.887692 sshd-session[5163]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:50.896601 systemd[1]: sshd@19-10.0.0.134:22-10.0.0.1:43100.service: Deactivated successfully. Oct 27 08:32:50.898773 systemd[1]: session-20.scope: Deactivated successfully. Oct 27 08:32:50.899592 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit. Oct 27 08:32:50.902520 systemd[1]: Started sshd@20-10.0.0.134:22-10.0.0.1:43114.service - OpenSSH per-connection server daemon (10.0.0.1:43114). Oct 27 08:32:50.903186 systemd-logind[1577]: Removed session 20. Oct 27 08:32:50.953381 sshd[5180]: Accepted publickey for core from 10.0.0.1 port 43114 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:32:50.955579 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:50.962740 systemd-logind[1577]: New session 21 of user core. Oct 27 08:32:50.966548 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 27 08:32:51.091804 sshd[5183]: Connection closed by 10.0.0.1 port 43114 Oct 27 08:32:51.092902 sshd-session[5180]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:51.099270 systemd-logind[1577]: Session 21 logged out. Waiting for processes to exit. Oct 27 08:32:51.099881 systemd[1]: sshd@20-10.0.0.134:22-10.0.0.1:43114.service: Deactivated successfully. Oct 27 08:32:51.104041 systemd[1]: session-21.scope: Deactivated successfully. Oct 27 08:32:51.106757 systemd-logind[1577]: Removed session 21. 
Oct 27 08:32:52.255393 kubelet[2758]: E1027 08:32:52.255339 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7948764c44-vhjsc" podUID="155c9352-5d7b-4c12-ac76-f7ec57d8dc42" Oct 27 08:32:52.256283 kubelet[2758]: E1027 08:32:52.256217 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wxbd2" podUID="7910370a-d2b9-4ee0-8c0a-b904aff5f65a" Oct 27 08:32:54.255726 containerd[1607]: time="2025-10-27T08:32:54.255667104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:32:54.613471 containerd[1607]: time="2025-10-27T08:32:54.613404771Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:54.654819 containerd[1607]: time="2025-10-27T08:32:54.654773184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:32:54.654912 containerd[1607]: time="2025-10-27T08:32:54.654852799Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:32:54.655176 kubelet[2758]: E1027 08:32:54.655124 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:32:54.655556 kubelet[2758]: E1027 08:32:54.655180 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:32:54.655556 kubelet[2758]: E1027 08:32:54.655311 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:80a7b8633959402db64dbfcc329ddd47,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xg9l5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-849fdbdcd5-csmld_calico-system(f2ff340b-16ee-4ed6-afb9-848c0501ad98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:54.657625 containerd[1607]: time="2025-10-27T08:32:54.657566318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 08:32:55.006432 containerd[1607]: time="2025-10-27T08:32:55.006260764Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:55.065177 containerd[1607]: time="2025-10-27T08:32:55.065117126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:32:55.065257 containerd[1607]: time="2025-10-27T08:32:55.065178768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:32:55.065471 kubelet[2758]: E1027 08:32:55.065389 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:32:55.065529 kubelet[2758]: E1027 08:32:55.065493 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:32:55.065667 kubelet[2758]: E1027 08:32:55.065623 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xg9l5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-849fdbdcd5-csmld_calico-system(f2ff340b-16ee-4ed6-afb9-848c0501ad98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:55.066848 kubelet[2758]: E1027 08:32:55.066785 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-849fdbdcd5-csmld" podUID="f2ff340b-16ee-4ed6-afb9-848c0501ad98" Oct 27 08:32:56.112263 systemd[1]: Started sshd@21-10.0.0.134:22-10.0.0.1:54336.service - OpenSSH per-connection server daemon (10.0.0.1:54336). 
Oct 27 08:32:56.182618 sshd[5199]: Accepted publickey for core from 10.0.0.1 port 54336 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:32:56.184061 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:56.188555 systemd-logind[1577]: New session 22 of user core. Oct 27 08:32:56.195543 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 27 08:32:56.313522 sshd[5203]: Connection closed by 10.0.0.1 port 54336 Oct 27 08:32:56.313835 sshd-session[5199]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:56.318654 systemd[1]: sshd@21-10.0.0.134:22-10.0.0.1:54336.service: Deactivated successfully. Oct 27 08:32:56.320697 systemd[1]: session-22.scope: Deactivated successfully. Oct 27 08:32:56.321537 systemd-logind[1577]: Session 22 logged out. Waiting for processes to exit. Oct 27 08:32:56.322924 systemd-logind[1577]: Removed session 22. Oct 27 08:32:57.257575 kubelet[2758]: E1027 08:32:57.257320 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:32:57.258116 containerd[1607]: time="2025-10-27T08:32:57.258079262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:32:57.611658 containerd[1607]: time="2025-10-27T08:32:57.611509593Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:32:57.612934 containerd[1607]: time="2025-10-27T08:32:57.612867699Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:32:57.613074 containerd[1607]: time="2025-10-27T08:32:57.612976780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:32:57.613235 kubelet[2758]: E1027 08:32:57.613128 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:32:57.613235 kubelet[2758]: E1027 08:32:57.613189 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:32:57.613369 kubelet[2758]: E1027 08:32:57.613323 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9r84d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pds2z_calico-system(4ba4a847-9e38-4bf2-a22e-46c61e29a54b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:32:57.614543 kubelet[2758]: E1027 08:32:57.614498 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pds2z" podUID="4ba4a847-9e38-4bf2-a22e-46c61e29a54b" Oct 27 08:33:00.254161 kubelet[2758]: E1027 
08:33:00.254114 2758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:33:01.327582 systemd[1]: Started sshd@22-10.0.0.134:22-10.0.0.1:54342.service - OpenSSH per-connection server daemon (10.0.0.1:54342). Oct 27 08:33:01.384849 sshd[5222]: Accepted publickey for core from 10.0.0.1 port 54342 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:33:01.386026 sshd-session[5222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:33:01.390496 systemd-logind[1577]: New session 23 of user core. Oct 27 08:33:01.404543 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 27 08:33:01.518611 sshd[5225]: Connection closed by 10.0.0.1 port 54342 Oct 27 08:33:01.519086 sshd-session[5222]: pam_unix(sshd:session): session closed for user core Oct 27 08:33:01.524877 systemd[1]: sshd@22-10.0.0.134:22-10.0.0.1:54342.service: Deactivated successfully. Oct 27 08:33:01.527045 systemd[1]: session-23.scope: Deactivated successfully. Oct 27 08:33:01.529055 systemd-logind[1577]: Session 23 logged out. Waiting for processes to exit. Oct 27 08:33:01.531042 systemd-logind[1577]: Removed session 23. Oct 27 08:33:03.261347 containerd[1607]: time="2025-10-27T08:33:03.260655993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:33:03.642218 containerd[1607]: time="2025-10-27T08:33:03.642077562Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:33:03.643169 containerd[1607]: time="2025-10-27T08:33:03.643126449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:33:03.643237 containerd[1607]: time="2025-10-27T08:33:03.643204353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:33:03.643382 kubelet[2758]: E1027 08:33:03.643335 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:33:03.643708 kubelet[2758]: E1027 08:33:03.643396 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:33:03.643708 kubelet[2758]: E1027 08:33:03.643541 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfhw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7948764c44-6chns_calico-apiserver(dbb3c469-8dbc-4871-a711-e7befd27ac29): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:33:03.644744 kubelet[2758]: E1027 08:33:03.644701 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7948764c44-6chns" podUID="dbb3c469-8dbc-4871-a711-e7befd27ac29" Oct 27 08:33:04.255014 containerd[1607]: time="2025-10-27T08:33:04.254966799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:33:04.599624 containerd[1607]: time="2025-10-27T08:33:04.599572516Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:33:04.600648 containerd[1607]: time="2025-10-27T08:33:04.600615874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:33:04.600766 containerd[1607]: 
time="2025-10-27T08:33:04.600706570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:33:04.600879 kubelet[2758]: E1027 08:33:04.600835 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:33:04.600941 kubelet[2758]: E1027 08:33:04.600882 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:33:04.601174 kubelet[2758]: E1027 08:33:04.601096 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7njj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7948764c44-vhjsc_calico-apiserver(155c9352-5d7b-4c12-ac76-f7ec57d8dc42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:33:04.601381 containerd[1607]: 
time="2025-10-27T08:33:04.601213201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:33:04.602382 kubelet[2758]: E1027 08:33:04.602341 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7948764c44-vhjsc" podUID="155c9352-5d7b-4c12-ac76-f7ec57d8dc42" Oct 27 08:33:04.946260 containerd[1607]: time="2025-10-27T08:33:04.946123979Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:33:04.947132 containerd[1607]: time="2025-10-27T08:33:04.947103169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:33:04.947190 containerd[1607]: time="2025-10-27T08:33:04.947162608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:33:04.947323 kubelet[2758]: E1027 08:33:04.947279 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:33:04.947676 kubelet[2758]: E1027 08:33:04.947331 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:33:04.947676 kubelet[2758]: E1027 08:33:04.947479 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8vzb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85866cf9fb-2xx5v_calico-system(29f6d368-bb0a-4633-8401-1b96e3d04052): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:33:04.949154 kubelet[2758]: E1027 08:33:04.949120 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85866cf9fb-2xx5v" podUID="29f6d368-bb0a-4633-8401-1b96e3d04052" Oct 27 08:33:05.254974 kubelet[2758]: E1027 08:33:05.254857 2758 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:33:06.535154 systemd[1]: Started sshd@23-10.0.0.134:22-10.0.0.1:49784.service - OpenSSH per-connection server daemon (10.0.0.1:49784). Oct 27 08:33:06.587665 sshd[5241]: Accepted publickey for core from 10.0.0.1 port 49784 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:33:06.589316 sshd-session[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:33:06.593531 systemd-logind[1577]: New session 24 of user core. Oct 27 08:33:06.602531 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 27 08:33:06.710705 sshd[5244]: Connection closed by 10.0.0.1 port 49784 Oct 27 08:33:06.711053 sshd-session[5241]: pam_unix(sshd:session): session closed for user core Oct 27 08:33:06.716020 systemd[1]: sshd@23-10.0.0.134:22-10.0.0.1:49784.service: Deactivated successfully. Oct 27 08:33:06.718102 systemd[1]: session-24.scope: Deactivated successfully. Oct 27 08:33:06.718822 systemd-logind[1577]: Session 24 logged out. Waiting for processes to exit. Oct 27 08:33:06.720125 systemd-logind[1577]: Removed session 24. Oct 27 08:33:07.255849 containerd[1607]: time="2025-10-27T08:33:07.255786016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:33:07.610937 containerd[1607]: time="2025-10-27T08:33:07.610852705Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:33:07.612150 containerd[1607]: time="2025-10-27T08:33:07.612114497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:33:07.612237 containerd[1607]: time="2025-10-27T08:33:07.612194696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:33:07.612422 kubelet[2758]: E1027 08:33:07.612361 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:33:07.612888 kubelet[2758]: E1027 08:33:07.612443 2758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:33:07.612888 kubelet[2758]: E1027 08:33:07.612563 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbxkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wxbd2_calico-system(7910370a-d2b9-4ee0-8c0a-b904aff5f65a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:33:07.615226 containerd[1607]: time="2025-10-27T08:33:07.615201139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:33:07.994882 containerd[1607]: time="2025-10-27T08:33:07.994719831Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:33:07.996057 containerd[1607]: time="2025-10-27T08:33:07.995970913Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:33:07.996057 containerd[1607]: time="2025-10-27T08:33:07.996034190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:33:07.996354 kubelet[2758]: E1027 08:33:07.996297 2758 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:33:07.996404 kubelet[2758]: E1027 08:33:07.996360 2758 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:33:07.996588 kubelet[2758]: E1027 08:33:07.996549 2758 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbxkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wxbd2_calico-system(7910370a-d2b9-4ee0-8c0a-b904aff5f65a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:33:07.998503 kubelet[2758]: E1027 08:33:07.998454 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-wxbd2" podUID="7910370a-d2b9-4ee0-8c0a-b904aff5f65a" Oct 27 08:33:09.258331 kubelet[2758]: E1027 08:33:09.258261 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-849fdbdcd5-csmld" podUID="f2ff340b-16ee-4ed6-afb9-848c0501ad98" Oct 27 08:33:10.255174 kubelet[2758]: E1027 08:33:10.255115 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pds2z" podUID="4ba4a847-9e38-4bf2-a22e-46c61e29a54b" Oct 27 08:33:11.723963 systemd[1]: Started sshd@24-10.0.0.134:22-10.0.0.1:49788.service - OpenSSH per-connection server daemon (10.0.0.1:49788). Oct 27 08:33:11.783307 sshd[5259]: Accepted publickey for core from 10.0.0.1 port 49788 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:33:11.784651 sshd-session[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:33:11.789099 systemd-logind[1577]: New session 25 of user core. Oct 27 08:33:11.806553 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 27 08:33:11.940172 sshd[5262]: Connection closed by 10.0.0.1 port 49788 Oct 27 08:33:11.941736 sshd-session[5259]: pam_unix(sshd:session): session closed for user core Oct 27 08:33:11.946106 systemd[1]: sshd@24-10.0.0.134:22-10.0.0.1:49788.service: Deactivated successfully. Oct 27 08:33:11.948177 systemd[1]: session-25.scope: Deactivated successfully. Oct 27 08:33:11.949382 systemd-logind[1577]: Session 25 logged out. Waiting for processes to exit. Oct 27 08:33:11.951194 systemd-logind[1577]: Removed session 25.