Oct 29 11:46:37.302543 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 29 11:46:37.302565 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Wed Oct 29 10:16:55 -00 2025
Oct 29 11:46:37.302574 kernel: KASLR enabled
Oct 29 11:46:37.302580 kernel: efi: EFI v2.7 by EDK II
Oct 29 11:46:37.302586 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Oct 29 11:46:37.302591 kernel: random: crng init done
Oct 29 11:46:37.302598 kernel: secureboot: Secure boot disabled
Oct 29 11:46:37.302604 kernel: ACPI: Early table checksum verification disabled
Oct 29 11:46:37.302612 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Oct 29 11:46:37.302621 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 29 11:46:37.302628 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 11:46:37.302634 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 11:46:37.302640 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 11:46:37.302647 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 11:46:37.302656 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 11:46:37.302662 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 11:46:37.302669 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 11:46:37.302676 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 11:46:37.302682 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 11:46:37.302688 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 29 11:46:37.302695 kernel: ACPI: Use ACPI SPCR as default console: No
Oct 29 11:46:37.302701 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 29 11:46:37.302709 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Oct 29 11:46:37.302715 kernel: Zone ranges:
Oct 29 11:46:37.302722 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 29 11:46:37.302728 kernel: DMA32 empty
Oct 29 11:46:37.302735 kernel: Normal empty
Oct 29 11:46:37.302741 kernel: Device empty
Oct 29 11:46:37.302747 kernel: Movable zone start for each node
Oct 29 11:46:37.302753 kernel: Early memory node ranges
Oct 29 11:46:37.302760 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Oct 29 11:46:37.302766 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Oct 29 11:46:37.302772 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Oct 29 11:46:37.302779 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Oct 29 11:46:37.302786 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Oct 29 11:46:37.302793 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Oct 29 11:46:37.302799 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Oct 29 11:46:37.302806 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Oct 29 11:46:37.302812 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Oct 29 11:46:37.302819 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 29 11:46:37.302829 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 29 11:46:37.302836 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 29 11:46:37.302843 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 29 11:46:37.302849 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 29 11:46:37.302863 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 29 11:46:37.302871 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Oct 29 11:46:37.302878 kernel: psci: probing for conduit method from ACPI.
Oct 29 11:46:37.302885 kernel: psci: PSCIv1.1 detected in firmware.
Oct 29 11:46:37.302893 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 29 11:46:37.302900 kernel: psci: Trusted OS migration not required
Oct 29 11:46:37.302906 kernel: psci: SMC Calling Convention v1.1
Oct 29 11:46:37.302913 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 29 11:46:37.302920 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Oct 29 11:46:37.302927 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Oct 29 11:46:37.302934 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 29 11:46:37.302949 kernel: Detected PIPT I-cache on CPU0
Oct 29 11:46:37.302958 kernel: CPU features: detected: GIC system register CPU interface
Oct 29 11:46:37.302965 kernel: CPU features: detected: Spectre-v4
Oct 29 11:46:37.302972 kernel: CPU features: detected: Spectre-BHB
Oct 29 11:46:37.302980 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 29 11:46:37.302987 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 29 11:46:37.302994 kernel: CPU features: detected: ARM erratum 1418040
Oct 29 11:46:37.303001 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 29 11:46:37.303008 kernel: alternatives: applying boot alternatives
Oct 29 11:46:37.303015 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=13ad0e8cdb8340a7f2c7e816055a4bbda051a9ddd845a0bd42ed2186e05be3cd
Oct 29 11:46:37.303023 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 29 11:46:37.303030 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 29 11:46:37.303036 kernel: Fallback order for Node 0: 0
Oct 29 11:46:37.303043 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Oct 29 11:46:37.303051 kernel: Policy zone: DMA
Oct 29 11:46:37.303058 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 29 11:46:37.303065 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Oct 29 11:46:37.303071 kernel: software IO TLB: area num 4.
Oct 29 11:46:37.303078 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Oct 29 11:46:37.303085 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Oct 29 11:46:37.303092 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 29 11:46:37.303099 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 29 11:46:37.303107 kernel: rcu: RCU event tracing is enabled.
Oct 29 11:46:37.303114 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 29 11:46:37.303120 kernel: Trampoline variant of Tasks RCU enabled.
Oct 29 11:46:37.303128 kernel: Tracing variant of Tasks RCU enabled.
Oct 29 11:46:37.303135 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 29 11:46:37.303142 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 29 11:46:37.303149 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 29 11:46:37.303156 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 29 11:46:37.303163 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 29 11:46:37.303170 kernel: GICv3: 256 SPIs implemented
Oct 29 11:46:37.303177 kernel: GICv3: 0 Extended SPIs implemented
Oct 29 11:46:37.303183 kernel: Root IRQ handler: gic_handle_irq
Oct 29 11:46:37.303190 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 29 11:46:37.303197 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Oct 29 11:46:37.303205 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 29 11:46:37.303212 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 29 11:46:37.303219 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Oct 29 11:46:37.303226 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Oct 29 11:46:37.303233 kernel: GICv3: using LPI property table @0x0000000040130000
Oct 29 11:46:37.303240 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Oct 29 11:46:37.303246 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 29 11:46:37.303253 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 29 11:46:37.303260 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 29 11:46:37.303267 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 29 11:46:37.303274 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 29 11:46:37.303282 kernel: arm-pv: using stolen time PV
Oct 29 11:46:37.303290 kernel: Console: colour dummy device 80x25
Oct 29 11:46:37.303297 kernel: ACPI: Core revision 20240827
Oct 29 11:46:37.303304 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 29 11:46:37.303312 kernel: pid_max: default: 32768 minimum: 301
Oct 29 11:46:37.303319 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 29 11:46:37.303326 kernel: landlock: Up and running.
Oct 29 11:46:37.303333 kernel: SELinux: Initializing.
Oct 29 11:46:37.303341 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 29 11:46:37.303349 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 29 11:46:37.303356 kernel: rcu: Hierarchical SRCU implementation.
Oct 29 11:46:37.303363 kernel: rcu: Max phase no-delay instances is 400.
Oct 29 11:46:37.303370 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 29 11:46:37.303377 kernel: Remapping and enabling EFI services.
Oct 29 11:46:37.303384 kernel: smp: Bringing up secondary CPUs ...
Oct 29 11:46:37.303392 kernel: Detected PIPT I-cache on CPU1
Oct 29 11:46:37.303404 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 29 11:46:37.303413 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Oct 29 11:46:37.303421 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 29 11:46:37.303428 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 29 11:46:37.303435 kernel: Detected PIPT I-cache on CPU2
Oct 29 11:46:37.303443 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 29 11:46:37.303452 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Oct 29 11:46:37.303459 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 29 11:46:37.303467 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 29 11:46:37.303474 kernel: Detected PIPT I-cache on CPU3
Oct 29 11:46:37.303482 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 29 11:46:37.303490 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Oct 29 11:46:37.303498 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 29 11:46:37.303506 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 29 11:46:37.303514 kernel: smp: Brought up 1 node, 4 CPUs
Oct 29 11:46:37.303522 kernel: SMP: Total of 4 processors activated.
Oct 29 11:46:37.303529 kernel: CPU: All CPU(s) started at EL1
Oct 29 11:46:37.303537 kernel: CPU features: detected: 32-bit EL0 Support
Oct 29 11:46:37.303544 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 29 11:46:37.303552 kernel: CPU features: detected: Common not Private translations
Oct 29 11:46:37.303561 kernel: CPU features: detected: CRC32 instructions
Oct 29 11:46:37.303569 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 29 11:46:37.303577 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 29 11:46:37.303585 kernel: CPU features: detected: LSE atomic instructions
Oct 29 11:46:37.303593 kernel: CPU features: detected: Privileged Access Never
Oct 29 11:46:37.303600 kernel: CPU features: detected: RAS Extension Support
Oct 29 11:46:37.303608 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 29 11:46:37.303616 kernel: alternatives: applying system-wide alternatives
Oct 29 11:46:37.303625 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Oct 29 11:46:37.303633 kernel: Memory: 2450400K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12992K init, 1038K bss, 99552K reserved, 16384K cma-reserved)
Oct 29 11:46:37.303641 kernel: devtmpfs: initialized
Oct 29 11:46:37.303648 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 29 11:46:37.303656 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 29 11:46:37.303664 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 29 11:46:37.303671 kernel: 0 pages in range for non-PLT usage
Oct 29 11:46:37.303680 kernel: 515056 pages in range for PLT usage
Oct 29 11:46:37.303687 kernel: pinctrl core: initialized pinctrl subsystem
Oct 29 11:46:37.303695 kernel: SMBIOS 3.0.0 present.
Oct 29 11:46:37.303702 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Oct 29 11:46:37.303710 kernel: DMI: Memory slots populated: 1/1
Oct 29 11:46:37.303717 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 29 11:46:37.303725 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 29 11:46:37.303748 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 29 11:46:37.303756 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 29 11:46:37.303764 kernel: audit: initializing netlink subsys (disabled)
Oct 29 11:46:37.303772 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Oct 29 11:46:37.303780 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 29 11:46:37.303788 kernel: cpuidle: using governor menu
Oct 29 11:46:37.303795 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 29 11:46:37.303804 kernel: ASID allocator initialised with 32768 entries
Oct 29 11:46:37.303812 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 29 11:46:37.303819 kernel: Serial: AMBA PL011 UART driver
Oct 29 11:46:37.303827 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 29 11:46:37.303834 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 29 11:46:37.303842 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 29 11:46:37.303850 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 29 11:46:37.303862 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 29 11:46:37.303871 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 29 11:46:37.303879 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 29 11:46:37.303886 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 29 11:46:37.303893 kernel: ACPI: Added _OSI(Module Device)
Oct 29 11:46:37.303901 kernel: ACPI: Added _OSI(Processor Device)
Oct 29 11:46:37.303909 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 29 11:46:37.303916 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 29 11:46:37.303925 kernel: ACPI: Interpreter enabled
Oct 29 11:46:37.303932 kernel: ACPI: Using GIC for interrupt routing
Oct 29 11:46:37.303940 kernel: ACPI: MCFG table detected, 1 entries
Oct 29 11:46:37.303961 kernel: ACPI: CPU0 has been hot-added
Oct 29 11:46:37.303968 kernel: ACPI: CPU1 has been hot-added
Oct 29 11:46:37.303976 kernel: ACPI: CPU2 has been hot-added
Oct 29 11:46:37.303983 kernel: ACPI: CPU3 has been hot-added
Oct 29 11:46:37.303991 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 29 11:46:37.304000 kernel: printk: legacy console [ttyAMA0] enabled
Oct 29 11:46:37.304007 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 29 11:46:37.304175 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 29 11:46:37.304264 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 29 11:46:37.304344 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 29 11:46:37.304443 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 29 11:46:37.304523 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 29 11:46:37.304533 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 29 11:46:37.304541 kernel: PCI host bridge to bus 0000:00
Oct 29 11:46:37.304635 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 29 11:46:37.304708 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 29 11:46:37.304785 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 29 11:46:37.304866 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 29 11:46:37.304993 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Oct 29 11:46:37.305089 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 29 11:46:37.305179 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Oct 29 11:46:37.305260 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Oct 29 11:46:37.305344 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 29 11:46:37.305425 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Oct 29 11:46:37.305507 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Oct 29 11:46:37.305607 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Oct 29 11:46:37.305685 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 29 11:46:37.305761 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 29 11:46:37.305866 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 29 11:46:37.305877 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 29 11:46:37.305885 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 29 11:46:37.305892 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 29 11:46:37.305900 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 29 11:46:37.305919 kernel: iommu: Default domain type: Translated
Oct 29 11:46:37.305928 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 29 11:46:37.305936 kernel: efivars: Registered efivars operations
Oct 29 11:46:37.305951 kernel: vgaarb: loaded
Oct 29 11:46:37.305959 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 29 11:46:37.305966 kernel: VFS: Disk quotas dquot_6.6.0
Oct 29 11:46:37.305974 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 29 11:46:37.305981 kernel: pnp: PnP ACPI init
Oct 29 11:46:37.306079 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 29 11:46:37.306090 kernel: pnp: PnP ACPI: found 1 devices
Oct 29 11:46:37.306098 kernel: NET: Registered PF_INET protocol family
Oct 29 11:46:37.306105 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 29 11:46:37.306113 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 29 11:46:37.306120 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 29 11:46:37.306128 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 29 11:46:37.306138 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 29 11:46:37.306145 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 29 11:46:37.306153 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 29 11:46:37.306160 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 29 11:46:37.306168 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 29 11:46:37.306175 kernel: PCI: CLS 0 bytes, default 64
Oct 29 11:46:37.306183 kernel: kvm [1]: HYP mode not available
Oct 29 11:46:37.306191 kernel: Initialise system trusted keyrings
Oct 29 11:46:37.306199 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 29 11:46:37.306206 kernel: Key type asymmetric registered
Oct 29 11:46:37.306214 kernel: Asymmetric key parser 'x509' registered
Oct 29 11:46:37.306222 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 29 11:46:37.306229 kernel: io scheduler mq-deadline registered
Oct 29 11:46:37.306237 kernel: io scheduler kyber registered
Oct 29 11:46:37.306245 kernel: io scheduler bfq registered
Oct 29 11:46:37.306253 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 29 11:46:37.306260 kernel: ACPI: button: Power Button [PWRB]
Oct 29 11:46:37.306269 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 29 11:46:37.306352 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 29 11:46:37.306362 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 29 11:46:37.306370 kernel: thunder_xcv, ver 1.0
Oct 29 11:46:37.306379 kernel: thunder_bgx, ver 1.0
Oct 29 11:46:37.306386 kernel: nicpf, ver 1.0
Oct 29 11:46:37.306394 kernel: nicvf, ver 1.0
Oct 29 11:46:37.306487 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 29 11:46:37.306565 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-29T11:46:36 UTC (1761738396)
Oct 29 11:46:37.306575 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 29 11:46:37.306582 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Oct 29 11:46:37.306592 kernel: watchdog: NMI not fully supported
Oct 29 11:46:37.306599 kernel: watchdog: Hard watchdog permanently disabled
Oct 29 11:46:37.306607 kernel: NET: Registered PF_INET6 protocol family
Oct 29 11:46:37.306614 kernel: Segment Routing with IPv6
Oct 29 11:46:37.306622 kernel: In-situ OAM (IOAM) with IPv6
Oct 29 11:46:37.306629 kernel: NET: Registered PF_PACKET protocol family
Oct 29 11:46:37.306637 kernel: Key type dns_resolver registered
Oct 29 11:46:37.306645 kernel: registered taskstats version 1
Oct 29 11:46:37.306653 kernel: Loading compiled-in X.509 certificates
Oct 29 11:46:37.306660 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 53895462ccbd526ae059c2f6d634e41caa85bf3c'
Oct 29 11:46:37.306668 kernel: Demotion targets for Node 0: null
Oct 29 11:46:37.306675 kernel: Key type .fscrypt registered
Oct 29 11:46:37.306682 kernel: Key type fscrypt-provisioning registered
Oct 29 11:46:37.306690 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 29 11:46:37.306699 kernel: ima: Allocated hash algorithm: sha1
Oct 29 11:46:37.306706 kernel: ima: No architecture policies found
Oct 29 11:46:37.306713 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 29 11:46:37.306721 kernel: clk: Disabling unused clocks
Oct 29 11:46:37.306728 kernel: PM: genpd: Disabling unused power domains
Oct 29 11:46:37.306736 kernel: Freeing unused kernel memory: 12992K
Oct 29 11:46:37.306743 kernel: Run /init as init process
Oct 29 11:46:37.306752 kernel: with arguments:
Oct 29 11:46:37.306759 kernel: /init
Oct 29 11:46:37.306767 kernel: with environment:
Oct 29 11:46:37.306774 kernel: HOME=/
Oct 29 11:46:37.306781 kernel: TERM=linux
Oct 29 11:46:37.306885 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 29 11:46:37.306977 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Oct 29 11:46:37.306990 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 29 11:46:37.306998 kernel: GPT:16515071 != 27000831
Oct 29 11:46:37.307005 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 29 11:46:37.307013 kernel: GPT:16515071 != 27000831
Oct 29 11:46:37.307020 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 29 11:46:37.307028 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 29 11:46:37.307036 kernel: SCSI subsystem initialized
Oct 29 11:46:37.307044 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 29 11:46:37.307052 kernel: device-mapper: uevent: version 1.0.3
Oct 29 11:46:37.307059 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 29 11:46:37.307067 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Oct 29 11:46:37.307074 kernel: raid6: neonx8 gen() 15788 MB/s
Oct 29 11:46:37.307082 kernel: raid6: neonx4 gen() 15810 MB/s
Oct 29 11:46:37.307090 kernel: raid6: neonx2 gen() 13187 MB/s
Oct 29 11:46:37.307098 kernel: raid6: neonx1 gen() 10426 MB/s
Oct 29 11:46:37.307105 kernel: raid6: int64x8 gen() 6897 MB/s
Oct 29 11:46:37.307113 kernel: raid6: int64x4 gen() 7347 MB/s
Oct 29 11:46:37.307121 kernel: raid6: int64x2 gen() 6108 MB/s
Oct 29 11:46:37.307128 kernel: raid6: int64x1 gen() 5044 MB/s
Oct 29 11:46:37.307135 kernel: raid6: using algorithm neonx4 gen() 15810 MB/s
Oct 29 11:46:37.307143 kernel: raid6: .... xor() 12353 MB/s, rmw enabled
Oct 29 11:46:37.307152 kernel: raid6: using neon recovery algorithm
Oct 29 11:46:37.307159 kernel: xor: measuring software checksum speed
Oct 29 11:46:37.307167 kernel: 8regs : 20712 MB/sec
Oct 29 11:46:37.307174 kernel: 32regs : 21681 MB/sec
Oct 29 11:46:37.307182 kernel: arm64_neon : 28041 MB/sec
Oct 29 11:46:37.307189 kernel: xor: using function: arm64_neon (28041 MB/sec)
Oct 29 11:46:37.307197 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 29 11:46:37.307206 kernel: BTRFS: device fsid 39bcdc01-efdd-4ab5-b67e-2f27f08e83e1 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (206)
Oct 29 11:46:37.307214 kernel: BTRFS info (device dm-0): first mount of filesystem 39bcdc01-efdd-4ab5-b67e-2f27f08e83e1
Oct 29 11:46:37.307226 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 29 11:46:37.307236 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 29 11:46:37.307243 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 29 11:46:37.307251 kernel: loop: module loaded
Oct 29 11:46:37.307258 kernel: loop0: detected capacity change from 0 to 91480
Oct 29 11:46:37.307267 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 29 11:46:37.307276 systemd[1]: Successfully made /usr/ read-only.
Oct 29 11:46:37.307286 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 29 11:46:37.307295 systemd[1]: Detected virtualization kvm.
Oct 29 11:46:37.307303 systemd[1]: Detected architecture arm64.
Oct 29 11:46:37.307311 systemd[1]: Running in initrd.
Oct 29 11:46:37.307320 systemd[1]: No hostname configured, using default hostname.
Oct 29 11:46:37.307328 systemd[1]: Hostname set to .
Oct 29 11:46:37.307336 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 29 11:46:37.307344 systemd[1]: Queued start job for default target initrd.target.
Oct 29 11:46:37.307352 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 29 11:46:37.307360 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 29 11:46:37.307371 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 29 11:46:37.307380 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 29 11:46:37.307388 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 29 11:46:37.307397 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 29 11:46:37.307405 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 29 11:46:37.307414 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 29 11:46:37.307424 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 29 11:46:37.307432 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 29 11:46:37.307440 systemd[1]: Reached target paths.target - Path Units.
Oct 29 11:46:37.307448 systemd[1]: Reached target slices.target - Slice Units.
Oct 29 11:46:37.307456 systemd[1]: Reached target swap.target - Swaps.
Oct 29 11:46:37.307464 systemd[1]: Reached target timers.target - Timer Units.
Oct 29 11:46:37.307472 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 29 11:46:37.307481 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 29 11:46:37.307491 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 29 11:46:37.307504 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 29 11:46:37.307519 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 29 11:46:37.307529 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 29 11:46:37.307538 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 29 11:46:37.307547 systemd[1]: Reached target sockets.target - Socket Units.
Oct 29 11:46:37.307556 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 29 11:46:37.307564 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 29 11:46:37.307573 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 29 11:46:37.307581 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 29 11:46:37.307591 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 29 11:46:37.307600 systemd[1]: Starting systemd-fsck-usr.service...
Oct 29 11:46:37.307609 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 29 11:46:37.307617 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 29 11:46:37.307626 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 11:46:37.307637 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 29 11:46:37.307645 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 29 11:46:37.307653 systemd[1]: Finished systemd-fsck-usr.service.
Oct 29 11:46:37.307663 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 29 11:46:37.307688 systemd-journald[345]: Collecting audit messages is disabled.
Oct 29 11:46:37.307709 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 29 11:46:37.307718 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 29 11:46:37.307726 kernel: Bridge firewalling registered
Oct 29 11:46:37.307734 systemd-journald[345]: Journal started
Oct 29 11:46:37.307754 systemd-journald[345]: Runtime Journal (/run/log/journal/34bb1ef2795147dda2a8e6f90a073e25) is 6M, max 48.5M, 42.4M free.
Oct 29 11:46:37.307448 systemd-modules-load[346]: Inserted module 'br_netfilter'
Oct 29 11:46:37.309951 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 29 11:46:37.312966 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 29 11:46:37.324182 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 29 11:46:37.327066 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 11:46:37.330565 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 29 11:46:37.332308 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 29 11:46:37.334914 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 29 11:46:37.345331 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 29 11:46:37.354389 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 29 11:46:37.355021 systemd-tmpfiles[368]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 29 11:46:37.358538 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 29 11:46:37.360454 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 29 11:46:37.363055 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 29 11:46:37.375664 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 29 11:46:37.392844 dracut-cmdline[388]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=13ad0e8cdb8340a7f2c7e816055a4bbda051a9ddd845a0bd42ed2186e05be3cd
Oct 29 11:46:37.409045 systemd-resolved[383]: Positive Trust Anchors:
Oct 29 11:46:37.409063 systemd-resolved[383]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 29 11:46:37.409067 systemd-resolved[383]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 29 11:46:37.409102 systemd-resolved[383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 29 11:46:37.433193 systemd-resolved[383]: Defaulting to hostname 'linux'.
Oct 29 11:46:37.434147 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 29 11:46:37.435273 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 29 11:46:37.470973 kernel: Loading iSCSI transport class v2.0-870.
Oct 29 11:46:37.479987 kernel: iscsi: registered transport (tcp)
Oct 29 11:46:37.492963 kernel: iscsi: registered transport (qla4xxx)
Oct 29 11:46:37.492990 kernel: QLogic iSCSI HBA Driver
Oct 29 11:46:37.512754 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 29 11:46:37.529639 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 29 11:46:37.531769 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 29 11:46:37.575938 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 29 11:46:37.578180 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 29 11:46:37.579685 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 29 11:46:37.615642 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 29 11:46:37.618026 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 29 11:46:37.650327 systemd-udevd[623]: Using default interface naming scheme 'v257'.
Oct 29 11:46:37.657995 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 29 11:46:37.660926 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 29 11:46:37.679125 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 29 11:46:37.681775 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 29 11:46:37.687390 dracut-pre-trigger[704]: rd.md=0: removing MD RAID activation
Oct 29 11:46:37.711444 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 29 11:46:37.713352 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 29 11:46:37.724540 systemd-networkd[734]: lo: Link UP
Oct 29 11:46:37.724547 systemd-networkd[734]: lo: Gained carrier
Oct 29 11:46:37.724964 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 29 11:46:37.726268 systemd[1]: Reached target network.target - Network.
Oct 29 11:46:37.772732 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 29 11:46:37.775215 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 29 11:46:37.812613 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 29 11:46:37.830609 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 29 11:46:37.838790 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 29 11:46:37.853000 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 29 11:46:37.856451 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 29 11:46:37.857892 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 29 11:46:37.858300 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 11:46:37.860064 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 11:46:37.861828 systemd-networkd[734]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 29 11:46:37.861832 systemd-networkd[734]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 29 11:46:37.862566 systemd-networkd[734]: eth0: Link UP
Oct 29 11:46:37.862709 systemd-networkd[734]: eth0: Gained carrier
Oct 29 11:46:37.862718 systemd-networkd[734]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 29 11:46:37.868614 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 11:46:37.875232 disk-uuid[802]: Primary Header is updated.
Oct 29 11:46:37.875232 disk-uuid[802]: Secondary Entries is updated.
Oct 29 11:46:37.875232 disk-uuid[802]: Secondary Header is updated.
Oct 29 11:46:37.878001 systemd-networkd[734]: eth0: DHCPv4 address 10.0.0.75/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 29 11:46:37.878388 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 29 11:46:37.883075 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 29 11:46:37.884258 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 11:46:37.888320 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 29 11:46:37.893221 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 29 11:46:37.897073 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 11:46:37.923978 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 29 11:46:38.911411 disk-uuid[806]: Warning: The kernel is still using the old partition table.
Oct 29 11:46:38.911411 disk-uuid[806]: The new table will be used at the next reboot or after you
Oct 29 11:46:38.911411 disk-uuid[806]: run partprobe(8) or kpartx(8)
Oct 29 11:46:38.911411 disk-uuid[806]: The operation has completed successfully.
Oct 29 11:46:38.917038 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 29 11:46:38.917147 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 29 11:46:38.919219 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 29 11:46:38.944972 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (835)
Oct 29 11:46:38.947233 kernel: BTRFS info (device vda6): first mount of filesystem e599792d-5b18-4409-900a-465c02f78c56
Oct 29 11:46:38.947258 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 29 11:46:38.949965 kernel: BTRFS info (device vda6): turning on async discard
Oct 29 11:46:38.949990 kernel: BTRFS info (device vda6): enabling free space tree
Oct 29 11:46:38.955986 kernel: BTRFS info (device vda6): last unmount of filesystem e599792d-5b18-4409-900a-465c02f78c56
Oct 29 11:46:38.956170 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 29 11:46:38.958303 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 29 11:46:39.047338 ignition[854]: Ignition 2.22.0
Oct 29 11:46:39.047351 ignition[854]: Stage: fetch-offline
Oct 29 11:46:39.047383 ignition[854]: no configs at "/usr/lib/ignition/base.d"
Oct 29 11:46:39.047392 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 11:46:39.047532 ignition[854]: parsed url from cmdline: ""
Oct 29 11:46:39.047535 ignition[854]: no config URL provided
Oct 29 11:46:39.047539 ignition[854]: reading system config file "/usr/lib/ignition/user.ign"
Oct 29 11:46:39.047548 ignition[854]: no config at "/usr/lib/ignition/user.ign"
Oct 29 11:46:39.053018 systemd-networkd[734]: eth0: Gained IPv6LL
Oct 29 11:46:39.047584 ignition[854]: op(1): [started] loading QEMU firmware config module
Oct 29 11:46:39.047588 ignition[854]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 29 11:46:39.055952 ignition[854]: op(1): [finished] loading QEMU firmware config module
Oct 29 11:46:39.097843 ignition[854]: parsing config with SHA512: 58558d49a1a4f8d3bb534d1208902055f346e1b2163897e2e85e8d7a16c270b70d74ac15aef023de9152e3bf591ba58eec2b7dcac8af493299585572c191e116
Oct 29 11:46:39.103206 unknown[854]: fetched base config from "system"
Oct 29 11:46:39.103218 unknown[854]: fetched user config from "qemu"
Oct 29 11:46:39.103581 ignition[854]: fetch-offline: fetch-offline passed
Oct 29 11:46:39.103639 ignition[854]: Ignition finished successfully
Oct 29 11:46:39.105547 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 29 11:46:39.107049 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 29 11:46:39.107860 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 29 11:46:39.138112 ignition[871]: Ignition 2.22.0
Oct 29 11:46:39.138128 ignition[871]: Stage: kargs
Oct 29 11:46:39.138261 ignition[871]: no configs at "/usr/lib/ignition/base.d"
Oct 29 11:46:39.138269 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 11:46:39.139051 ignition[871]: kargs: kargs passed
Oct 29 11:46:39.139094 ignition[871]: Ignition finished successfully
Oct 29 11:46:39.142093 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 29 11:46:39.144724 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 29 11:46:39.179743 ignition[879]: Ignition 2.22.0
Oct 29 11:46:39.179762 ignition[879]: Stage: disks
Oct 29 11:46:39.179905 ignition[879]: no configs at "/usr/lib/ignition/base.d"
Oct 29 11:46:39.179913 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 11:46:39.183200 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 29 11:46:39.180718 ignition[879]: disks: disks passed
Oct 29 11:46:39.185292 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 29 11:46:39.180761 ignition[879]: Ignition finished successfully
Oct 29 11:46:39.186726 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 29 11:46:39.188367 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 29 11:46:39.190148 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 29 11:46:39.191673 systemd[1]: Reached target basic.target - Basic System.
Oct 29 11:46:39.194469 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 29 11:46:39.221184 systemd-fsck[889]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Oct 29 11:46:39.226132 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 29 11:46:39.228899 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 29 11:46:39.290984 kernel: EXT4-fs (vda9): mounted filesystem ed0c3329-91c4-41e1-aa11-4d04384caf5a r/w with ordered data mode. Quota mode: none.
Oct 29 11:46:39.291749 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 29 11:46:39.293062 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 29 11:46:39.295463 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 29 11:46:39.297070 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 29 11:46:39.298025 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 29 11:46:39.298058 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 29 11:46:39.298083 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 29 11:46:39.312363 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 29 11:46:39.314887 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 29 11:46:39.317574 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (898)
Oct 29 11:46:39.317595 kernel: BTRFS info (device vda6): first mount of filesystem e599792d-5b18-4409-900a-465c02f78c56
Oct 29 11:46:39.318974 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 29 11:46:39.322426 kernel: BTRFS info (device vda6): turning on async discard
Oct 29 11:46:39.322486 kernel: BTRFS info (device vda6): enabling free space tree
Oct 29 11:46:39.323383 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 29 11:46:39.353312 initrd-setup-root[922]: cut: /sysroot/etc/passwd: No such file or directory
Oct 29 11:46:39.357827 initrd-setup-root[929]: cut: /sysroot/etc/group: No such file or directory
Oct 29 11:46:39.361676 initrd-setup-root[936]: cut: /sysroot/etc/shadow: No such file or directory
Oct 29 11:46:39.365615 initrd-setup-root[943]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 29 11:46:39.437296 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 29 11:46:39.439628 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 29 11:46:39.441257 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 29 11:46:39.460886 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 29 11:46:39.462268 kernel: BTRFS info (device vda6): last unmount of filesystem e599792d-5b18-4409-900a-465c02f78c56
Oct 29 11:46:39.485996 ignition[1011]: INFO : Ignition 2.22.0
Oct 29 11:46:39.485996 ignition[1011]: INFO : Stage: mount
Oct 29 11:46:39.487559 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 29 11:46:39.487559 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 11:46:39.487559 ignition[1011]: INFO : mount: mount passed
Oct 29 11:46:39.487559 ignition[1011]: INFO : Ignition finished successfully
Oct 29 11:46:39.488446 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 29 11:46:39.490595 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 29 11:46:39.494599 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 29 11:46:40.293354 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 29 11:46:40.322979 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024)
Oct 29 11:46:40.325310 kernel: BTRFS info (device vda6): first mount of filesystem e599792d-5b18-4409-900a-465c02f78c56
Oct 29 11:46:40.325331 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 29 11:46:40.327960 kernel: BTRFS info (device vda6): turning on async discard
Oct 29 11:46:40.327989 kernel: BTRFS info (device vda6): enabling free space tree
Oct 29 11:46:40.329404 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 29 11:46:40.368198 ignition[1042]: INFO : Ignition 2.22.0
Oct 29 11:46:40.368198 ignition[1042]: INFO : Stage: files
Oct 29 11:46:40.369760 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 29 11:46:40.369760 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 11:46:40.369760 ignition[1042]: DEBUG : files: compiled without relabeling support, skipping
Oct 29 11:46:40.373099 ignition[1042]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 29 11:46:40.373099 ignition[1042]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 29 11:46:40.376065 ignition[1042]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 29 11:46:40.376065 ignition[1042]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 29 11:46:40.376065 ignition[1042]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 29 11:46:40.375460 unknown[1042]: wrote ssh authorized keys file for user: core
Oct 29 11:46:40.381115 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Oct 29 11:46:40.381115 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Oct 29 11:46:40.418293 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 29 11:46:40.605613 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Oct 29 11:46:40.605613 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 29 11:46:40.610263 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 29 11:46:40.610263 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 29 11:46:40.610263 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 29 11:46:40.610263 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 29 11:46:40.610263 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 29 11:46:40.610263 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 29 11:46:40.610263 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 29 11:46:40.610263 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 29 11:46:40.610263 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 29 11:46:40.610263 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Oct 29 11:46:40.610263 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Oct 29 11:46:40.610263 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Oct 29 11:46:40.610263 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Oct 29 11:46:41.015663 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 29 11:46:41.254422 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Oct 29 11:46:41.254422 ignition[1042]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 29 11:46:41.258263 ignition[1042]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 29 11:46:41.258263 ignition[1042]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 29 11:46:41.258263 ignition[1042]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 29 11:46:41.258263 ignition[1042]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 29 11:46:41.258263 ignition[1042]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 29 11:46:41.258263 ignition[1042]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 29 11:46:41.258263 ignition[1042]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 29 11:46:41.258263 ignition[1042]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 29 11:46:41.272391 ignition[1042]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 29 11:46:41.275044 ignition[1042]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 29 11:46:41.277651 ignition[1042]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 29 11:46:41.277651 ignition[1042]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 29 11:46:41.277651 ignition[1042]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 29 11:46:41.277651 ignition[1042]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 29 11:46:41.277651 ignition[1042]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 29 11:46:41.277651 ignition[1042]: INFO : files: files passed
Oct 29 11:46:41.277651 ignition[1042]: INFO : Ignition finished successfully
Oct 29 11:46:41.278246 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 29 11:46:41.281098 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 29 11:46:41.282887 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 29 11:46:41.295224 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 29 11:46:41.295331 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 29 11:46:41.298511 initrd-setup-root-after-ignition[1072]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 29 11:46:41.300447 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 11:46:41.300447 initrd-setup-root-after-ignition[1074]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 11:46:41.303664 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 11:46:41.303116 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 29 11:46:41.305177 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 29 11:46:41.307747 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 29 11:46:41.374102 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 29 11:46:41.374235 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 29 11:46:41.376412 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 29 11:46:41.378207 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 29 11:46:41.380137 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 29 11:46:41.380881 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 29 11:46:41.421150 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 29 11:46:41.423533 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 29 11:46:41.447416 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 29 11:46:41.447601 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 29 11:46:41.449595 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 11:46:41.451535 systemd[1]: Stopped target timers.target - Timer Units.
Oct 29 11:46:41.453232 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 29 11:46:41.453340 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 29 11:46:41.455909 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 29 11:46:41.457814 systemd[1]: Stopped target basic.target - Basic System.
Oct 29 11:46:41.459470 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 29 11:46:41.461076 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 29 11:46:41.462925 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 29 11:46:41.464865 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 29 11:46:41.466748 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 29 11:46:41.468580 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 29 11:46:41.470434 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 29 11:46:41.472273 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 29 11:46:41.473987 systemd[1]: Stopped target swap.target - Swaps.
Oct 29 11:46:41.475474 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 29 11:46:41.475583 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 29 11:46:41.477740 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 29 11:46:41.478849 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 29 11:46:41.480876 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 29 11:46:41.480978 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 29 11:46:41.482954 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 29 11:46:41.483066 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 29 11:46:41.485630 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 29 11:46:41.485744 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 29 11:46:41.488109 systemd[1]: Stopped target paths.target - Path Units.
Oct 29 11:46:41.489615 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 29 11:46:41.493014 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 29 11:46:41.494381 systemd[1]: Stopped target slices.target - Slice Units.
Oct 29 11:46:41.495976 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 29 11:46:41.497985 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 29 11:46:41.498066 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 29 11:46:41.499559 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 29 11:46:41.499631 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 29 11:46:41.501142 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 29 11:46:41.501252 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 29 11:46:41.502860 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 29 11:46:41.502975 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 29 11:46:41.505305 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 29 11:46:41.507456 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 29 11:46:41.508603 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 29 11:46:41.508725 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 29 11:46:41.510909 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 29 11:46:41.511028 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 29 11:46:41.512920 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 29 11:46:41.513040 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 29 11:46:41.519989 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 29 11:46:41.521972 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 29 11:46:41.525139 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 29 11:46:41.532630 ignition[1100]: INFO : Ignition 2.22.0
Oct 29 11:46:41.532630 ignition[1100]: INFO : Stage: umount
Oct 29 11:46:41.535668 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 29 11:46:41.535668 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 11:46:41.535668 ignition[1100]: INFO : umount: umount passed
Oct 29 11:46:41.535668 ignition[1100]: INFO : Ignition finished successfully
Oct 29 11:46:41.537200 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 29 11:46:41.537292 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 29 11:46:41.539146 systemd[1]: Stopped target network.target - Network.
Oct 29 11:46:41.540505 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 29 11:46:41.540571 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 29 11:46:41.542217 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 29 11:46:41.542272 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 29 11:46:41.543903 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 29 11:46:41.543966 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 29 11:46:41.545801 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 29 11:46:41.545861 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 29 11:46:41.547552 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 29 11:46:41.549334 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 29 11:46:41.556424 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 29 11:46:41.556531 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 29 11:46:41.561366 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 29 11:46:41.561484 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 29 11:46:41.564913 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 29 11:46:41.565012 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 29 11:46:41.567110 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 29 11:46:41.568332 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 29 11:46:41.568374 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 29 11:46:41.569974 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 29 11:46:41.570034 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 29 11:46:41.572659 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 29 11:46:41.573581 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 29 11:46:41.573652 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 29 11:46:41.575764 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 29 11:46:41.575815 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 29 11:46:41.577585 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 29 11:46:41.577639 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 29 11:46:41.579373 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 29 11:46:41.598247 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 29 11:46:41.605149 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 29 11:46:41.607761 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 29 11:46:41.607882 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 29 11:46:41.610130 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 29 11:46:41.610194 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 29 11:46:41.611467 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 29 11:46:41.611504 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 29 11:46:41.613115 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 29 11:46:41.613165 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 29 11:46:41.615760 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 29 11:46:41.615817 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 29 11:46:41.618416 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 29 11:46:41.618469 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 29 11:46:41.621777 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 29 11:46:41.622934 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 29 11:46:41.623007 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 29 11:46:41.624938 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 29 11:46:41.625003 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 29 11:46:41.627122 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 29 11:46:41.627173 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 29 11:46:41.629067 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 29 11:46:41.629116 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 29 11:46:41.631083 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 29 11:46:41.631137 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 11:46:41.641106 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 29 11:46:41.641204 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 29 11:46:41.642910 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 29 11:46:41.645326 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 29 11:46:41.673666 systemd[1]: Switching root.
Oct 29 11:46:41.716510 systemd-journald[345]: Journal stopped
Oct 29 11:46:42.488168 systemd-journald[345]: Received SIGTERM from PID 1 (systemd).
Oct 29 11:46:42.488219 kernel: SELinux: policy capability network_peer_controls=1
Oct 29 11:46:42.488232 kernel: SELinux: policy capability open_perms=1
Oct 29 11:46:42.488242 kernel: SELinux: policy capability extended_socket_class=1
Oct 29 11:46:42.488253 kernel: SELinux: policy capability always_check_network=0
Oct 29 11:46:42.488265 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 29 11:46:42.488290 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 29 11:46:42.488305 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 29 11:46:42.488315 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 29 11:46:42.488324 kernel: SELinux: policy capability userspace_initial_context=0
Oct 29 11:46:42.488336 kernel: audit: type=1403 audit(1761738401.908:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 29 11:46:42.488350 systemd[1]: Successfully loaded SELinux policy in 61.189ms.
Oct 29 11:46:42.488365 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.448ms.
Oct 29 11:46:42.488377 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 29 11:46:42.488389 systemd[1]: Detected virtualization kvm.
Oct 29 11:46:42.488399 systemd[1]: Detected architecture arm64.
Oct 29 11:46:42.488409 systemd[1]: Detected first boot.
Oct 29 11:46:42.488421 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 29 11:46:42.488432 zram_generator::config[1148]: No configuration found.
Oct 29 11:46:42.488443 kernel: NET: Registered PF_VSOCK protocol family
Oct 29 11:46:42.488453 systemd[1]: Populated /etc with preset unit settings.
Oct 29 11:46:42.488463 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 29 11:46:42.488473 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 29 11:46:42.488488 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 29 11:46:42.488501 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 29 11:46:42.488512 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 29 11:46:42.488523 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 29 11:46:42.488536 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 29 11:46:42.488551 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 29 11:46:42.488562 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 29 11:46:42.488575 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 29 11:46:42.488586 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 29 11:46:42.488597 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 29 11:46:42.488607 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 29 11:46:42.488618 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 29 11:46:42.488629 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 29 11:46:42.488640 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 29 11:46:42.488652 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 29 11:46:42.488662 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 29 11:46:42.488673 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 29 11:46:42.488684 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 29 11:46:42.488698 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 29 11:46:42.488708 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 29 11:46:42.488718 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 29 11:46:42.488731 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 29 11:46:42.488742 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 11:46:42.488753 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 29 11:46:42.488763 systemd[1]: Reached target slices.target - Slice Units.
Oct 29 11:46:42.488774 systemd[1]: Reached target swap.target - Swaps.
Oct 29 11:46:42.488784 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 29 11:46:42.488795 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 29 11:46:42.488808 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 29 11:46:42.488818 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 29 11:46:42.488828 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 29 11:46:42.488845 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 29 11:46:42.488858 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 29 11:46:42.488869 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 29 11:46:42.488879 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 29 11:46:42.488891 systemd[1]: Mounting media.mount - External Media Directory...
Oct 29 11:46:42.488902 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 29 11:46:42.488912 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 29 11:46:42.488924 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 29 11:46:42.488935 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 29 11:46:42.488976 systemd[1]: Reached target machines.target - Containers.
Oct 29 11:46:42.488989 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 29 11:46:42.489003 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 29 11:46:42.489014 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 29 11:46:42.489025 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 29 11:46:42.489036 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 29 11:46:42.489047 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 29 11:46:42.489058 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 29 11:46:42.489069 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 29 11:46:42.489081 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 29 11:46:42.489092 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 29 11:46:42.489103 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 29 11:46:42.489114 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 29 11:46:42.489125 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 29 11:46:42.489135 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 29 11:46:42.489148 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 11:46:42.489160 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 29 11:46:42.489170 kernel: fuse: init (API version 7.41)
Oct 29 11:46:42.489180 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 29 11:46:42.489191 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 29 11:46:42.489203 kernel: ACPI: bus type drm_connector registered
Oct 29 11:46:42.489213 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 29 11:46:42.489226 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 29 11:46:42.489237 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 29 11:46:42.489248 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 29 11:46:42.489258 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 29 11:46:42.489270 systemd[1]: Mounted media.mount - External Media Directory.
Oct 29 11:46:42.489281 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 29 11:46:42.489311 systemd-journald[1223]: Collecting audit messages is disabled.
Oct 29 11:46:42.489334 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 29 11:46:42.489346 systemd-journald[1223]: Journal started
Oct 29 11:46:42.489368 systemd-journald[1223]: Runtime Journal (/run/log/journal/34bb1ef2795147dda2a8e6f90a073e25) is 6M, max 48.5M, 42.4M free.
Oct 29 11:46:42.261294 systemd[1]: Queued start job for default target multi-user.target.
Oct 29 11:46:42.286829 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 29 11:46:42.287266 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 29 11:46:42.490979 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 29 11:46:42.492800 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 29 11:46:42.494993 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 29 11:46:42.496343 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 29 11:46:42.497816 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 29 11:46:42.498003 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 29 11:46:42.499305 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 11:46:42.499455 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 29 11:46:42.500766 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 29 11:46:42.500939 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 29 11:46:42.503285 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 11:46:42.503445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 29 11:46:42.504938 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 29 11:46:42.505147 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 29 11:46:42.506377 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 11:46:42.506528 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 29 11:46:42.507907 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 29 11:46:42.509477 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 29 11:46:42.511511 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 29 11:46:42.513299 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 29 11:46:42.525432 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 29 11:46:42.526886 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 29 11:46:42.529075 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 29 11:46:42.530896 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 29 11:46:42.532056 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 29 11:46:42.532111 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 29 11:46:42.533869 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 29 11:46:42.535395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 11:46:42.545504 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 29 11:46:42.547831 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 29 11:46:42.549163 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 29 11:46:42.550239 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 29 11:46:42.551354 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 29 11:46:42.552436 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 29 11:46:42.559648 systemd-journald[1223]: Time spent on flushing to /var/log/journal/34bb1ef2795147dda2a8e6f90a073e25 is 16.026ms for 870 entries.
Oct 29 11:46:42.559648 systemd-journald[1223]: System Journal (/var/log/journal/34bb1ef2795147dda2a8e6f90a073e25) is 8M, max 163.5M, 155.5M free.
Oct 29 11:46:42.588387 systemd-journald[1223]: Received client request to flush runtime journal.
Oct 29 11:46:42.588438 kernel: loop1: detected capacity change from 0 to 119400
Oct 29 11:46:42.556065 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 29 11:46:42.557928 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 29 11:46:42.566988 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 29 11:46:42.569213 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 29 11:46:42.570505 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 29 11:46:42.572972 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 29 11:46:42.577302 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 29 11:46:42.580858 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 29 11:46:42.586083 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 29 11:46:42.587512 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Oct 29 11:46:42.587522 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Oct 29 11:46:42.591447 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 29 11:46:42.595094 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 29 11:46:42.599099 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 29 11:46:42.603454 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 29 11:46:42.603977 kernel: loop2: detected capacity change from 0 to 100192
Oct 29 11:46:42.625427 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 29 11:46:42.628959 kernel: loop3: detected capacity change from 0 to 211168
Oct 29 11:46:42.629242 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 29 11:46:42.632162 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 29 11:46:42.638188 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 29 11:46:42.652975 kernel: loop4: detected capacity change from 0 to 119400
Oct 29 11:46:42.656962 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Oct 29 11:46:42.657231 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Oct 29 11:46:42.658974 kernel: loop5: detected capacity change from 0 to 100192
Oct 29 11:46:42.661629 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 29 11:46:42.665983 kernel: loop6: detected capacity change from 0 to 211168
Oct 29 11:46:42.670272 (sd-merge)[1291]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Oct 29 11:46:42.672443 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 29 11:46:42.674263 (sd-merge)[1291]: Merged extensions into '/usr'.
Oct 29 11:46:42.679059 systemd[1]: Reload requested from client PID 1264 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 29 11:46:42.679079 systemd[1]: Reloading...
Oct 29 11:46:42.728327 systemd-resolved[1286]: Positive Trust Anchors:
Oct 29 11:46:42.728589 systemd-resolved[1286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 29 11:46:42.728644 systemd-resolved[1286]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 29 11:46:42.728716 systemd-resolved[1286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 29 11:46:42.734990 systemd-resolved[1286]: Defaulting to hostname 'linux'.
Oct 29 11:46:42.741972 zram_generator::config[1321]: No configuration found.
Oct 29 11:46:42.870566 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 29 11:46:42.870686 systemd[1]: Reloading finished in 191 ms.
Oct 29 11:46:42.896432 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 29 11:46:42.897836 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 29 11:46:42.902926 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 29 11:46:42.922141 systemd[1]: Starting ensure-sysext.service...
Oct 29 11:46:42.924080 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 29 11:46:42.933910 systemd[1]: Reload requested from client PID 1358 ('systemctl') (unit ensure-sysext.service)...
Oct 29 11:46:42.933928 systemd[1]: Reloading...
Oct 29 11:46:42.941851 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 29 11:46:42.941889 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 29 11:46:42.942462 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 29 11:46:42.942742 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 29 11:46:42.943463 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 29 11:46:42.943761 systemd-tmpfiles[1359]: ACLs are not supported, ignoring.
Oct 29 11:46:42.943895 systemd-tmpfiles[1359]: ACLs are not supported, ignoring.
Oct 29 11:46:42.966166 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot.
Oct 29 11:46:42.966298 systemd-tmpfiles[1359]: Skipping /boot
Oct 29 11:46:42.973993 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot.
Oct 29 11:46:42.974076 systemd-tmpfiles[1359]: Skipping /boot
Oct 29 11:46:42.984965 zram_generator::config[1389]: No configuration found.
Oct 29 11:46:43.109970 systemd[1]: Reloading finished in 175 ms.
Oct 29 11:46:43.131522 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 29 11:46:43.158821 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 29 11:46:43.166049 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 29 11:46:43.168250 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 29 11:46:43.174411 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 29 11:46:43.176623 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 29 11:46:43.180253 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 29 11:46:43.182526 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 29 11:46:43.186273 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 29 11:46:43.188265 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 29 11:46:43.191256 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 29 11:46:43.199671 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 29 11:46:43.203072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 11:46:43.203192 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 11:46:43.205518 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 11:46:43.205685 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 29 11:46:43.207433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 11:46:43.207583 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 29 11:46:43.210495 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 11:46:43.210911 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 29 11:46:43.214817 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 29 11:46:43.220660 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 29 11:46:43.222438 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 29 11:46:43.227167 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 29 11:46:43.230198 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 29 11:46:43.231221 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 11:46:43.231322 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 11:46:43.232625 systemd-udevd[1430]: Using default interface naming scheme 'v257'.
Oct 29 11:46:43.234006 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 29 11:46:43.238084 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 29 11:46:43.240199 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 11:46:43.240334 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 29 11:46:43.241983 augenrules[1460]: No rules
Oct 29 11:46:43.242201 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 11:46:43.242359 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 29 11:46:43.244288 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 29 11:46:43.244468 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 29 11:46:43.246268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 11:46:43.246411 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 29 11:46:43.255410 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 29 11:46:43.256589 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 29 11:46:43.259157 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 29 11:46:43.262136 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 29 11:46:43.269653 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 29 11:46:43.273115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 29 11:46:43.275182 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 11:46:43.275305 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 11:46:43.275416 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 29 11:46:43.276334 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 29 11:46:43.290179 systemd[1]: Finished ensure-sysext.service.
Oct 29 11:46:43.294815 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 29 11:46:43.296052 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 29 11:46:43.306199 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 29 11:46:43.311989 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 29 11:46:43.325637 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 11:46:43.326278 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 29 11:46:43.328404 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 11:46:43.331008 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 29 11:46:43.334853 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 11:46:43.335553 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 29 11:46:43.342264 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 29 11:46:43.342324 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 29 11:46:43.346869 augenrules[1471]: /sbin/augenrules: No change
Oct 29 11:46:43.356879 augenrules[1522]: No rules
Oct 29 11:46:43.358310 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 29 11:46:43.366648 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 29 11:46:43.372707 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 29 11:46:43.399388 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 29 11:46:43.404365 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 29 11:46:43.406030 systemd[1]: Reached target time-set.target - System Time Set.
Oct 29 11:46:43.410174 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 29 11:46:43.426617 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 11:46:43.456016 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 29 11:46:43.464603 systemd-networkd[1504]: lo: Link UP
Oct 29 11:46:43.464617 systemd-networkd[1504]: lo: Gained carrier
Oct 29 11:46:43.465853 systemd-networkd[1504]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 29 11:46:43.465861 systemd-networkd[1504]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 29 11:46:43.466781 systemd-networkd[1504]: eth0: Link UP Oct 29 11:46:43.467010 systemd-networkd[1504]: eth0: Gained carrier Oct 29 11:46:43.467025 systemd-networkd[1504]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 11:46:43.467131 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 29 11:46:43.470257 systemd[1]: Reached target network.target - Network. Oct 29 11:46:43.473161 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 29 11:46:43.475774 ldconfig[1427]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 29 11:46:43.478194 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 29 11:46:43.482301 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 29 11:46:43.495562 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 29 11:46:43.502013 systemd-networkd[1504]: eth0: DHCPv4 address 10.0.0.75/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 29 11:46:43.504515 systemd-timesyncd[1505]: Network configuration changed, trying to establish connection. Oct 29 11:46:43.505057 systemd-timesyncd[1505]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 29 11:46:43.505116 systemd-timesyncd[1505]: Initial clock synchronization to Wed 2025-10-29 11:46:43.778659 UTC. Oct 29 11:46:43.510033 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 29 11:46:43.513084 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 11:46:43.515723 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 29 11:46:43.518023 systemd[1]: Reached target sysinit.target - System Initialization. 
Oct 29 11:46:43.519103 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 29 11:46:43.520381 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 29 11:46:43.521692 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 29 11:46:43.522815 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 29 11:46:43.524057 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 29 11:46:43.525305 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 29 11:46:43.525337 systemd[1]: Reached target paths.target - Path Units. Oct 29 11:46:43.526176 systemd[1]: Reached target timers.target - Timer Units. Oct 29 11:46:43.527819 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 29 11:46:43.530067 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 29 11:46:43.532593 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 29 11:46:43.534025 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 29 11:46:43.535163 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 29 11:46:43.538646 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 29 11:46:43.539932 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 29 11:46:43.541558 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 29 11:46:43.542696 systemd[1]: Reached target sockets.target - Socket Units. Oct 29 11:46:43.543630 systemd[1]: Reached target basic.target - Basic System. 
Oct 29 11:46:43.544558 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 29 11:46:43.544585 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 29 11:46:43.545441 systemd[1]: Starting containerd.service - containerd container runtime... Oct 29 11:46:43.547313 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 29 11:46:43.549029 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 29 11:46:43.550878 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 29 11:46:43.552899 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 29 11:46:43.553940 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 29 11:46:43.554820 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 29 11:46:43.558463 jq[1571]: false Oct 29 11:46:43.558903 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 29 11:46:43.561235 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 29 11:46:43.564122 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 29 11:46:43.566074 extend-filesystems[1572]: Found /dev/vda6 Oct 29 11:46:43.568033 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 29 11:46:43.568993 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 29 11:46:43.569385 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Oct 29 11:46:43.570507 extend-filesystems[1572]: Found /dev/vda9 Oct 29 11:46:43.571219 systemd[1]: Starting update-engine.service - Update Engine... Oct 29 11:46:43.572908 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 29 11:46:43.573556 extend-filesystems[1572]: Checking size of /dev/vda9 Oct 29 11:46:43.577366 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 29 11:46:43.578879 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 29 11:46:43.579115 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 29 11:46:43.579360 systemd[1]: motdgen.service: Deactivated successfully. Oct 29 11:46:43.579525 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 29 11:46:43.583382 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 29 11:46:43.585006 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Oct 29 11:46:43.585514 jq[1589]: true Oct 29 11:46:43.589913 extend-filesystems[1572]: Resized partition /dev/vda9 Oct 29 11:46:43.595989 extend-filesystems[1610]: resize2fs 1.47.3 (8-Jul-2025) Oct 29 11:46:43.598127 update_engine[1587]: I20251029 11:46:43.594478 1587 main.cc:92] Flatcar Update Engine starting Oct 29 11:46:43.600351 tar[1596]: linux-arm64/LICENSE Oct 29 11:46:43.600351 tar[1596]: linux-arm64/helm Oct 29 11:46:43.602973 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 29 11:46:43.607547 jq[1598]: true Oct 29 11:46:43.634415 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 29 11:46:43.649572 extend-filesystems[1610]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 29 11:46:43.649572 extend-filesystems[1610]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 29 11:46:43.649572 extend-filesystems[1610]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 29 11:46:43.655154 dbus-daemon[1569]: [system] SELinux support is enabled Oct 29 11:46:43.655564 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 29 11:46:43.656790 extend-filesystems[1572]: Resized filesystem in /dev/vda9 Oct 29 11:46:43.659863 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 29 11:46:43.662002 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 29 11:46:43.667774 bash[1635]: Updated "/home/core/.ssh/authorized_keys" Oct 29 11:46:43.670026 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 29 11:46:43.671562 update_engine[1587]: I20251029 11:46:43.671514 1587 update_check_scheduler.cc:74] Next update check in 4m57s Oct 29 11:46:43.673449 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Oct 29 11:46:43.673550 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 29 11:46:43.673576 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 29 11:46:43.675140 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 29 11:46:43.675231 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 29 11:46:43.677767 systemd[1]: Started update-engine.service - Update Engine. Oct 29 11:46:43.679680 systemd-logind[1586]: Watching system buttons on /dev/input/event0 (Power Button) Oct 29 11:46:43.681066 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 29 11:46:43.681196 systemd-logind[1586]: New seat seat0. Oct 29 11:46:43.685777 systemd[1]: Started systemd-logind.service - User Login Management. 
Oct 29 11:46:43.737314 locksmithd[1638]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 29 11:46:43.772832 containerd[1601]: time="2025-10-29T11:46:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 29 11:46:43.774103 containerd[1601]: time="2025-10-29T11:46:43.773344920Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 29 11:46:43.784887 containerd[1601]: time="2025-10-29T11:46:43.784828600Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.68µs" Oct 29 11:46:43.784887 containerd[1601]: time="2025-10-29T11:46:43.784879560Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 29 11:46:43.784975 containerd[1601]: time="2025-10-29T11:46:43.784902280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 29 11:46:43.785366 containerd[1601]: time="2025-10-29T11:46:43.785210640Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 29 11:46:43.785366 containerd[1601]: time="2025-10-29T11:46:43.785296360Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 29 11:46:43.785366 containerd[1601]: time="2025-10-29T11:46:43.785330200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 11:46:43.785449 containerd[1601]: time="2025-10-29T11:46:43.785384440Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 11:46:43.785449 containerd[1601]: time="2025-10-29T11:46:43.785395520Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 11:46:43.785706 containerd[1601]: time="2025-10-29T11:46:43.785664680Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 11:46:43.785706 containerd[1601]: time="2025-10-29T11:46:43.785691360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 11:46:43.785706 containerd[1601]: time="2025-10-29T11:46:43.785704360Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 11:46:43.785770 containerd[1601]: time="2025-10-29T11:46:43.785712160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 29 11:46:43.785882 containerd[1601]: time="2025-10-29T11:46:43.785862120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 29 11:46:43.786330 containerd[1601]: time="2025-10-29T11:46:43.786294840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 29 11:46:43.786356 containerd[1601]: time="2025-10-29T11:46:43.786336920Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 29 11:46:43.786411 containerd[1601]: time="2025-10-29T11:46:43.786393880Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 29 11:46:43.786461 containerd[1601]: time="2025-10-29T11:46:43.786431320Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 29 11:46:43.786677 containerd[1601]: time="2025-10-29T11:46:43.786657120Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 29 11:46:43.786805 containerd[1601]: time="2025-10-29T11:46:43.786786240Z" level=info msg="metadata content store policy set" policy=shared Oct 29 11:46:43.790505 containerd[1601]: time="2025-10-29T11:46:43.790473120Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 29 11:46:43.790555 containerd[1601]: time="2025-10-29T11:46:43.790527960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 29 11:46:43.790555 containerd[1601]: time="2025-10-29T11:46:43.790549400Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 29 11:46:43.790610 containerd[1601]: time="2025-10-29T11:46:43.790561520Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 29 11:46:43.790610 containerd[1601]: time="2025-10-29T11:46:43.790573360Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 29 11:46:43.790610 containerd[1601]: time="2025-10-29T11:46:43.790584080Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 29 11:46:43.790610 containerd[1601]: time="2025-10-29T11:46:43.790600920Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 29 11:46:43.790674 containerd[1601]: time="2025-10-29T11:46:43.790612240Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 29 11:46:43.790674 containerd[1601]: time="2025-10-29T11:46:43.790622240Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 29 11:46:43.790674 containerd[1601]: time="2025-10-29T11:46:43.790631720Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 29 11:46:43.790674 containerd[1601]: time="2025-10-29T11:46:43.790641040Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 29 11:46:43.790674 containerd[1601]: time="2025-10-29T11:46:43.790652920Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 29 11:46:43.790777 containerd[1601]: time="2025-10-29T11:46:43.790755920Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 29 11:46:43.790804 containerd[1601]: time="2025-10-29T11:46:43.790782560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 29 11:46:43.790804 containerd[1601]: time="2025-10-29T11:46:43.790799000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 29 11:46:43.790886 containerd[1601]: time="2025-10-29T11:46:43.790811400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 29 11:46:43.790886 containerd[1601]: time="2025-10-29T11:46:43.790825840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 29 11:46:43.790886 containerd[1601]: time="2025-10-29T11:46:43.790843160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 29 11:46:43.790886 containerd[1601]: time="2025-10-29T11:46:43.790856600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 29 11:46:43.790886 containerd[1601]: time="2025-10-29T11:46:43.790866720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 29 
11:46:43.790886 containerd[1601]: time="2025-10-29T11:46:43.790877560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 29 11:46:43.790886 containerd[1601]: time="2025-10-29T11:46:43.790887680Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 29 11:46:43.791042 containerd[1601]: time="2025-10-29T11:46:43.790897680Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 29 11:46:43.791163 containerd[1601]: time="2025-10-29T11:46:43.791145920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 29 11:46:43.791196 containerd[1601]: time="2025-10-29T11:46:43.791165880Z" level=info msg="Start snapshots syncer" Oct 29 11:46:43.791196 containerd[1601]: time="2025-10-29T11:46:43.791191600Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 29 11:46:43.791423 containerd[1601]: time="2025-10-29T11:46:43.791387840Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 29 11:46:43.791519 containerd[1601]: time="2025-10-29T11:46:43.791434200Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 29 11:46:43.791519 containerd[1601]: time="2025-10-29T11:46:43.791487440Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 29 11:46:43.791600 containerd[1601]: time="2025-10-29T11:46:43.791580600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 29 11:46:43.791626 containerd[1601]: time="2025-10-29T11:46:43.791606840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 29 11:46:43.791626 containerd[1601]: time="2025-10-29T11:46:43.791618640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 29 11:46:43.791706 containerd[1601]: time="2025-10-29T11:46:43.791629640Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 29 11:46:43.791706 containerd[1601]: time="2025-10-29T11:46:43.791642080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 29 11:46:43.791706 containerd[1601]: time="2025-10-29T11:46:43.791651880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 29 11:46:43.791706 containerd[1601]: time="2025-10-29T11:46:43.791661680Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 29 11:46:43.791706 containerd[1601]: time="2025-10-29T11:46:43.791690000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 29 11:46:43.791706 containerd[1601]: time="2025-10-29T11:46:43.791701120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 29 11:46:43.791874 containerd[1601]: time="2025-10-29T11:46:43.791712320Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 29 11:46:43.791874 containerd[1601]: time="2025-10-29T11:46:43.791747760Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 11:46:43.791874 containerd[1601]: time="2025-10-29T11:46:43.791761640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 11:46:43.791874 containerd[1601]: time="2025-10-29T11:46:43.791770880Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 11:46:43.791874 containerd[1601]: time="2025-10-29T11:46:43.791779760Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 11:46:43.791874 containerd[1601]: time="2025-10-29T11:46:43.791786920Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 29 11:46:43.791874 containerd[1601]: time="2025-10-29T11:46:43.791796120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 29 11:46:43.791874 containerd[1601]: time="2025-10-29T11:46:43.791805920Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 29 11:46:43.792027 containerd[1601]: time="2025-10-29T11:46:43.791893800Z" level=info msg="runtime interface created" Oct 29 11:46:43.792027 containerd[1601]: time="2025-10-29T11:46:43.791900760Z" level=info msg="created NRI interface" Oct 29 11:46:43.792027 containerd[1601]: time="2025-10-29T11:46:43.791911440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 29 11:46:43.792027 containerd[1601]: time="2025-10-29T11:46:43.791922160Z" level=info msg="Connect containerd service" Oct 29 11:46:43.792111 containerd[1601]: time="2025-10-29T11:46:43.792086200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 29 11:46:43.792990 
containerd[1601]: time="2025-10-29T11:46:43.792963440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 11:46:43.869034 containerd[1601]: time="2025-10-29T11:46:43.868981040Z" level=info msg="Start subscribing containerd event" Oct 29 11:46:43.869368 containerd[1601]: time="2025-10-29T11:46:43.869342200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 29 11:46:43.869368 containerd[1601]: time="2025-10-29T11:46:43.869406960Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 29 11:46:43.869851 containerd[1601]: time="2025-10-29T11:46:43.869509400Z" level=info msg="Start recovering state" Oct 29 11:46:43.869851 containerd[1601]: time="2025-10-29T11:46:43.869603920Z" level=info msg="Start event monitor" Oct 29 11:46:43.869851 containerd[1601]: time="2025-10-29T11:46:43.869617120Z" level=info msg="Start cni network conf syncer for default" Oct 29 11:46:43.869851 containerd[1601]: time="2025-10-29T11:46:43.869623760Z" level=info msg="Start streaming server" Oct 29 11:46:43.869851 containerd[1601]: time="2025-10-29T11:46:43.869715120Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 29 11:46:43.869851 containerd[1601]: time="2025-10-29T11:46:43.869722000Z" level=info msg="runtime interface starting up..." Oct 29 11:46:43.869851 containerd[1601]: time="2025-10-29T11:46:43.869727840Z" level=info msg="starting plugins..." Oct 29 11:46:43.869851 containerd[1601]: time="2025-10-29T11:46:43.869741160Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 29 11:46:43.870450 containerd[1601]: time="2025-10-29T11:46:43.870431440Z" level=info msg="containerd successfully booted in 0.097962s" Oct 29 11:46:43.870494 systemd[1]: Started containerd.service - containerd container runtime. 
Oct 29 11:46:43.930546 tar[1596]: linux-arm64/README.md Oct 29 11:46:43.948070 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 29 11:46:44.037904 sshd_keygen[1600]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 29 11:46:44.059047 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 29 11:46:44.061629 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 29 11:46:44.079486 systemd[1]: issuegen.service: Deactivated successfully. Oct 29 11:46:44.081043 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 29 11:46:44.083480 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 29 11:46:44.116599 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 29 11:46:44.119264 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 29 11:46:44.121313 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 29 11:46:44.122614 systemd[1]: Reached target getty.target - Login Prompts. Oct 29 11:46:44.620744 systemd-networkd[1504]: eth0: Gained IPv6LL Oct 29 11:46:44.623448 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 29 11:46:44.625172 systemd[1]: Reached target network-online.target - Network is Online. Oct 29 11:46:44.627572 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 29 11:46:44.630018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 11:46:44.632208 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 29 11:46:44.659147 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 29 11:46:44.660768 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 29 11:46:44.660939 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Oct 29 11:46:44.662946 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 29 11:46:45.198956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 11:46:45.200500 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 29 11:46:45.202492 systemd[1]: Startup finished in 1.137s (kernel) + 4.826s (initrd) + 3.355s (userspace) = 9.319s. Oct 29 11:46:45.203440 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 11:46:45.553334 kubelet[1706]: E1029 11:46:45.553260 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 11:46:45.557153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 11:46:45.557284 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 11:46:45.558077 systemd[1]: kubelet.service: Consumed 746ms CPU time, 257.7M memory peak. Oct 29 11:46:48.632312 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 29 11:46:48.633648 systemd[1]: Started sshd@0-10.0.0.75:22-10.0.0.1:35660.service - OpenSSH per-connection server daemon (10.0.0.1:35660). Oct 29 11:46:48.704970 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 35660 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:46:48.706455 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:46:48.712207 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 29 11:46:48.713060 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Oct 29 11:46:48.718043 systemd-logind[1586]: New session 1 of user core. Oct 29 11:46:48.740816 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 29 11:46:48.745195 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 29 11:46:48.764600 (systemd)[1724]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 29 11:46:48.766703 systemd-logind[1586]: New session c1 of user core. Oct 29 11:46:48.865249 systemd[1724]: Queued start job for default target default.target. Oct 29 11:46:48.877941 systemd[1724]: Created slice app.slice - User Application Slice. Oct 29 11:46:48.877997 systemd[1724]: Reached target paths.target - Paths. Oct 29 11:46:48.878040 systemd[1724]: Reached target timers.target - Timers. Oct 29 11:46:48.879281 systemd[1724]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 29 11:46:48.888559 systemd[1724]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 29 11:46:48.888629 systemd[1724]: Reached target sockets.target - Sockets. Oct 29 11:46:48.888675 systemd[1724]: Reached target basic.target - Basic System. Oct 29 11:46:48.888704 systemd[1724]: Reached target default.target - Main User Target. Oct 29 11:46:48.888730 systemd[1724]: Startup finished in 116ms. Oct 29 11:46:48.888881 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 29 11:46:48.890510 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 29 11:46:48.899566 systemd[1]: Started sshd@1-10.0.0.75:22-10.0.0.1:35676.service - OpenSSH per-connection server daemon (10.0.0.1:35676). Oct 29 11:46:48.945956 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 35676 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:46:48.947126 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:46:48.951631 systemd-logind[1586]: New session 2 of user core. 
Oct 29 11:46:48.958101 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 29 11:46:48.968062 sshd[1738]: Connection closed by 10.0.0.1 port 35676 Oct 29 11:46:48.968350 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Oct 29 11:46:48.985044 systemd[1]: sshd@1-10.0.0.75:22-10.0.0.1:35676.service: Deactivated successfully. Oct 29 11:46:48.986473 systemd[1]: session-2.scope: Deactivated successfully. Oct 29 11:46:48.987129 systemd-logind[1586]: Session 2 logged out. Waiting for processes to exit. Oct 29 11:46:48.989298 systemd[1]: Started sshd@2-10.0.0.75:22-10.0.0.1:35688.service - OpenSSH per-connection server daemon (10.0.0.1:35688). Oct 29 11:46:48.990233 systemd-logind[1586]: Removed session 2. Oct 29 11:46:49.047814 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 35688 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:46:49.049188 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:46:49.053098 systemd-logind[1586]: New session 3 of user core. Oct 29 11:46:49.071120 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 29 11:46:49.077236 sshd[1747]: Connection closed by 10.0.0.1 port 35688 Oct 29 11:46:49.077511 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Oct 29 11:46:49.088939 systemd[1]: sshd@2-10.0.0.75:22-10.0.0.1:35688.service: Deactivated successfully. Oct 29 11:46:49.090405 systemd[1]: session-3.scope: Deactivated successfully. Oct 29 11:46:49.092637 systemd-logind[1586]: Session 3 logged out. Waiting for processes to exit. Oct 29 11:46:49.094657 systemd[1]: Started sshd@3-10.0.0.75:22-10.0.0.1:55540.service - OpenSSH per-connection server daemon (10.0.0.1:55540). Oct 29 11:46:49.095679 systemd-logind[1586]: Removed session 3. 
Oct 29 11:46:49.148147 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 55540 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:46:49.149294 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:46:49.153281 systemd-logind[1586]: New session 4 of user core. Oct 29 11:46:49.163779 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 29 11:46:49.174724 sshd[1756]: Connection closed by 10.0.0.1 port 55540 Oct 29 11:46:49.175160 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Oct 29 11:46:49.178708 systemd[1]: sshd@3-10.0.0.75:22-10.0.0.1:55540.service: Deactivated successfully. Oct 29 11:46:49.181149 systemd[1]: session-4.scope: Deactivated successfully. Oct 29 11:46:49.181830 systemd-logind[1586]: Session 4 logged out. Waiting for processes to exit. Oct 29 11:46:49.183903 systemd[1]: Started sshd@4-10.0.0.75:22-10.0.0.1:55544.service - OpenSSH per-connection server daemon (10.0.0.1:55544). Oct 29 11:46:49.185072 systemd-logind[1586]: Removed session 4. Oct 29 11:46:49.234843 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 55544 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:46:49.236109 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:46:49.239858 systemd-logind[1586]: New session 5 of user core. Oct 29 11:46:49.248121 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 29 11:46:49.264172 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 29 11:46:49.264433 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 11:46:49.277875 sudo[1766]: pam_unix(sudo:session): session closed for user root Oct 29 11:46:49.279609 sshd[1765]: Connection closed by 10.0.0.1 port 55544 Oct 29 11:46:49.280096 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Oct 29 11:46:49.289025 systemd[1]: sshd@4-10.0.0.75:22-10.0.0.1:55544.service: Deactivated successfully. Oct 29 11:46:49.291359 systemd[1]: session-5.scope: Deactivated successfully. Oct 29 11:46:49.292297 systemd-logind[1586]: Session 5 logged out. Waiting for processes to exit. Oct 29 11:46:49.294579 systemd[1]: Started sshd@5-10.0.0.75:22-10.0.0.1:55558.service - OpenSSH per-connection server daemon (10.0.0.1:55558). Oct 29 11:46:49.295491 systemd-logind[1586]: Removed session 5. Oct 29 11:46:49.353899 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 55558 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:46:49.355085 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:46:49.358834 systemd-logind[1586]: New session 6 of user core. Oct 29 11:46:49.367177 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 29 11:46:49.378522 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 29 11:46:49.378775 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 11:46:49.383457 sudo[1777]: pam_unix(sudo:session): session closed for user root Oct 29 11:46:49.389217 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 29 11:46:49.389491 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 11:46:49.398682 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 11:46:49.446118 augenrules[1799]: No rules Oct 29 11:46:49.447487 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 11:46:49.449046 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 11:46:49.450388 sudo[1776]: pam_unix(sudo:session): session closed for user root Oct 29 11:46:49.451947 sshd[1775]: Connection closed by 10.0.0.1 port 55558 Oct 29 11:46:49.452267 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Oct 29 11:46:49.467079 systemd[1]: sshd@5-10.0.0.75:22-10.0.0.1:55558.service: Deactivated successfully. Oct 29 11:46:49.469358 systemd[1]: session-6.scope: Deactivated successfully. Oct 29 11:46:49.470154 systemd-logind[1586]: Session 6 logged out. Waiting for processes to exit. Oct 29 11:46:49.472409 systemd[1]: Started sshd@6-10.0.0.75:22-10.0.0.1:55570.service - OpenSSH per-connection server daemon (10.0.0.1:55570). Oct 29 11:46:49.473164 systemd-logind[1586]: Removed session 6. Oct 29 11:46:49.534278 sshd[1808]: Accepted publickey for core from 10.0.0.1 port 55570 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:46:49.535540 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:46:49.540066 systemd-logind[1586]: New session 7 of user core. 
Oct 29 11:46:49.547156 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 29 11:46:49.558891 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 29 11:46:49.559607 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 11:46:49.833459 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 29 11:46:49.845263 (dockerd)[1832]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 29 11:46:50.044036 dockerd[1832]: time="2025-10-29T11:46:50.043971774Z" level=info msg="Starting up" Oct 29 11:46:50.044855 dockerd[1832]: time="2025-10-29T11:46:50.044833118Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 29 11:46:50.054592 dockerd[1832]: time="2025-10-29T11:46:50.054556701Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 29 11:46:50.157100 dockerd[1832]: time="2025-10-29T11:46:50.157000682Z" level=info msg="Loading containers: start." Oct 29 11:46:50.164991 kernel: Initializing XFRM netlink socket Oct 29 11:46:50.351487 systemd-networkd[1504]: docker0: Link UP Oct 29 11:46:50.354910 dockerd[1832]: time="2025-10-29T11:46:50.354388032Z" level=info msg="Loading containers: done." Oct 29 11:46:50.365774 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck907124105-merged.mount: Deactivated successfully. 
Oct 29 11:46:50.368952 dockerd[1832]: time="2025-10-29T11:46:50.368598376Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 29 11:46:50.368952 dockerd[1832]: time="2025-10-29T11:46:50.368676031Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 29 11:46:50.368952 dockerd[1832]: time="2025-10-29T11:46:50.368813550Z" level=info msg="Initializing buildkit" Oct 29 11:46:50.389477 dockerd[1832]: time="2025-10-29T11:46:50.389447582Z" level=info msg="Completed buildkit initialization" Oct 29 11:46:50.394191 dockerd[1832]: time="2025-10-29T11:46:50.394159426Z" level=info msg="Daemon has completed initialization" Oct 29 11:46:50.394389 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 29 11:46:50.394491 dockerd[1832]: time="2025-10-29T11:46:50.394298245Z" level=info msg="API listen on /run/docker.sock" Oct 29 11:46:51.199319 containerd[1601]: time="2025-10-29T11:46:51.199280022Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 29 11:46:51.885426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2405025284.mount: Deactivated successfully. 
Oct 29 11:46:52.887971 containerd[1601]: time="2025-10-29T11:46:52.887902412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:46:52.888555 containerd[1601]: time="2025-10-29T11:46:52.888503191Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Oct 29 11:46:52.889504 containerd[1601]: time="2025-10-29T11:46:52.889468081Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:46:52.892025 containerd[1601]: time="2025-10-29T11:46:52.891991517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:46:52.893456 containerd[1601]: time="2025-10-29T11:46:52.893423109Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.694101713s" Oct 29 11:46:52.893492 containerd[1601]: time="2025-10-29T11:46:52.893461879Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Oct 29 11:46:52.894888 containerd[1601]: time="2025-10-29T11:46:52.894692618Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 29 11:46:54.032544 containerd[1601]: time="2025-10-29T11:46:54.032110677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:46:54.032916 containerd[1601]: time="2025-10-29T11:46:54.032893096Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Oct 29 11:46:54.033559 containerd[1601]: time="2025-10-29T11:46:54.033533124Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:46:54.036180 containerd[1601]: time="2025-10-29T11:46:54.036146109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:46:54.037894 containerd[1601]: time="2025-10-29T11:46:54.037776894Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.143056455s" Oct 29 11:46:54.037894 containerd[1601]: time="2025-10-29T11:46:54.037809465Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Oct 29 11:46:54.038162 containerd[1601]: time="2025-10-29T11:46:54.038133636Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 29 11:46:55.265894 containerd[1601]: time="2025-10-29T11:46:55.265079306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:46:55.265894 containerd[1601]: time="2025-10-29T11:46:55.265656535Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Oct 29 11:46:55.266491 containerd[1601]: time="2025-10-29T11:46:55.266460255Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:46:55.269218 containerd[1601]: time="2025-10-29T11:46:55.269182161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:46:55.272235 containerd[1601]: time="2025-10-29T11:46:55.272195302Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.234027209s" Oct 29 11:46:55.272235 containerd[1601]: time="2025-10-29T11:46:55.272237149Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Oct 29 11:46:55.273514 containerd[1601]: time="2025-10-29T11:46:55.273467360Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 29 11:46:55.807989 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 29 11:46:55.809403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 11:46:55.998448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 29 11:46:56.002774 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 11:46:56.089018 kubelet[2127]: E1029 11:46:56.088918 2127 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 11:46:56.091971 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 11:46:56.092092 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 11:46:56.092557 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.6M memory peak. Oct 29 11:46:56.331692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount167193539.mount: Deactivated successfully. Oct 29 11:46:56.745479 containerd[1601]: time="2025-10-29T11:46:56.745003725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:46:56.745801 containerd[1601]: time="2025-10-29T11:46:56.745759698Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Oct 29 11:46:56.746468 containerd[1601]: time="2025-10-29T11:46:56.746438141Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:46:56.748477 containerd[1601]: time="2025-10-29T11:46:56.748425822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:46:56.749098 containerd[1601]: time="2025-10-29T11:46:56.749067372Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.475561842s" Oct 29 11:46:56.749158 containerd[1601]: time="2025-10-29T11:46:56.749098626Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Oct 29 11:46:56.749750 containerd[1601]: time="2025-10-29T11:46:56.749728496Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 29 11:46:57.266058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3787025165.mount: Deactivated successfully. Oct 29 11:46:58.006537 containerd[1601]: time="2025-10-29T11:46:58.006487602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:46:58.007922 containerd[1601]: time="2025-10-29T11:46:58.007684080Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Oct 29 11:46:58.008745 containerd[1601]: time="2025-10-29T11:46:58.008713000Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:46:58.011759 containerd[1601]: time="2025-10-29T11:46:58.011730313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:46:58.013043 containerd[1601]: time="2025-10-29T11:46:58.013005042Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.2632474s" Oct 29 11:46:58.013043 containerd[1601]: time="2025-10-29T11:46:58.013038136Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Oct 29 11:46:58.013665 containerd[1601]: time="2025-10-29T11:46:58.013477642Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 29 11:46:58.411569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1460996321.mount: Deactivated successfully. Oct 29 11:46:58.416020 containerd[1601]: time="2025-10-29T11:46:58.415983368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 11:46:58.416555 containerd[1601]: time="2025-10-29T11:46:58.416517088Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 29 11:46:58.417509 containerd[1601]: time="2025-10-29T11:46:58.417480182Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 11:46:58.419498 containerd[1601]: time="2025-10-29T11:46:58.419453135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 11:46:58.420179 containerd[1601]: time="2025-10-29T11:46:58.419966670Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 406.463253ms" Oct 29 11:46:58.420179 containerd[1601]: time="2025-10-29T11:46:58.419993732Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 29 11:46:58.420412 containerd[1601]: time="2025-10-29T11:46:58.420384865Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 29 11:46:58.879353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3542564896.mount: Deactivated successfully. Oct 29 11:47:00.664467 containerd[1601]: time="2025-10-29T11:47:00.664418241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:00.665477 containerd[1601]: time="2025-10-29T11:47:00.665445734Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Oct 29 11:47:00.666670 containerd[1601]: time="2025-10-29T11:47:00.666242579Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:00.668761 containerd[1601]: time="2025-10-29T11:47:00.668726370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:00.670553 containerd[1601]: time="2025-10-29T11:47:00.670514001Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag 
\"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.250101601s" Oct 29 11:47:00.670553 containerd[1601]: time="2025-10-29T11:47:00.670549263Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Oct 29 11:47:06.298268 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 29 11:47:06.300203 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 11:47:06.461247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 11:47:06.464643 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 11:47:06.495424 kubelet[2287]: E1029 11:47:06.495370 2287 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 11:47:06.497853 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 11:47:06.497994 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 11:47:06.500015 systemd[1]: kubelet.service: Consumed 130ms CPU time, 106.7M memory peak. Oct 29 11:47:06.530036 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 11:47:06.530182 systemd[1]: kubelet.service: Consumed 130ms CPU time, 106.7M memory peak. Oct 29 11:47:06.531884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 11:47:06.550264 systemd[1]: Reload requested from client PID 2300 ('systemctl') (unit session-7.scope)... 
Oct 29 11:47:06.550279 systemd[1]: Reloading... Oct 29 11:47:06.624988 zram_generator::config[2345]: No configuration found. Oct 29 11:47:06.974384 systemd[1]: Reloading finished in 423 ms. Oct 29 11:47:07.030014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 11:47:07.031613 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 11:47:07.034518 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 11:47:07.034722 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 11:47:07.034761 systemd[1]: kubelet.service: Consumed 92ms CPU time, 95.1M memory peak. Oct 29 11:47:07.036109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 11:47:07.182647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 11:47:07.186636 (kubelet)[2392]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 11:47:07.216777 kubelet[2392]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 11:47:07.216777 kubelet[2392]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 11:47:07.216777 kubelet[2392]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 29 11:47:07.217081 kubelet[2392]: I1029 11:47:07.216812 2392 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 11:47:08.217682 kubelet[2392]: I1029 11:47:08.217633 2392 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 29 11:47:08.217682 kubelet[2392]: I1029 11:47:08.217667 2392 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 11:47:08.218041 kubelet[2392]: I1029 11:47:08.217879 2392 server.go:956] "Client rotation is on, will bootstrap in background" Oct 29 11:47:08.240085 kubelet[2392]: E1029 11:47:08.240043 2392 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 29 11:47:08.242716 kubelet[2392]: I1029 11:47:08.242627 2392 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 11:47:08.250631 kubelet[2392]: I1029 11:47:08.250606 2392 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 11:47:08.253175 kubelet[2392]: I1029 11:47:08.253153 2392 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 11:47:08.253548 kubelet[2392]: I1029 11:47:08.253520 2392 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 11:47:08.253744 kubelet[2392]: I1029 11:47:08.253604 2392 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 11:47:08.253940 kubelet[2392]: I1029 11:47:08.253925 2392 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 11:47:08.254014 
kubelet[2392]: I1029 11:47:08.254004 2392 container_manager_linux.go:303] "Creating device plugin manager" Oct 29 11:47:08.254760 kubelet[2392]: I1029 11:47:08.254739 2392 state_mem.go:36] "Initialized new in-memory state store" Oct 29 11:47:08.257283 kubelet[2392]: I1029 11:47:08.257260 2392 kubelet.go:480] "Attempting to sync node with API server" Oct 29 11:47:08.257360 kubelet[2392]: I1029 11:47:08.257349 2392 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 11:47:08.259016 kubelet[2392]: I1029 11:47:08.258997 2392 kubelet.go:386] "Adding apiserver pod source" Oct 29 11:47:08.260884 kubelet[2392]: I1029 11:47:08.260863 2392 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 11:47:08.261982 kubelet[2392]: I1029 11:47:08.261938 2392 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 11:47:08.262169 kubelet[2392]: E1029 11:47:08.262127 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 29 11:47:08.262569 kubelet[2392]: E1029 11:47:08.262542 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 29 11:47:08.262636 kubelet[2392]: I1029 11:47:08.262617 2392 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 29 11:47:08.262753 kubelet[2392]: W1029 
11:47:08.262741 2392 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 29 11:47:08.265114 kubelet[2392]: I1029 11:47:08.265092 2392 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 11:47:08.265173 kubelet[2392]: I1029 11:47:08.265143 2392 server.go:1289] "Started kubelet" Oct 29 11:47:08.265352 kubelet[2392]: I1029 11:47:08.265260 2392 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 11:47:08.266356 kubelet[2392]: I1029 11:47:08.266335 2392 server.go:317] "Adding debug handlers to kubelet server" Oct 29 11:47:08.266666 kubelet[2392]: I1029 11:47:08.266592 2392 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 11:47:08.267006 kubelet[2392]: I1029 11:47:08.266979 2392 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 11:47:08.270917 kubelet[2392]: I1029 11:47:08.270786 2392 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 11:47:08.271053 kubelet[2392]: I1029 11:47:08.271028 2392 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 11:47:08.272161 kubelet[2392]: E1029 11:47:08.271059 2392 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872f3c433e59d53 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-29 11:47:08.265110867 +0000 UTC m=+1.075486604,LastTimestamp:2025-10-29 
11:47:08.265110867 +0000 UTC m=+1.075486604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 29 11:47:08.272161 kubelet[2392]: E1029 11:47:08.272189 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 11:47:08.272161 kubelet[2392]: I1029 11:47:08.272267 2392 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 11:47:08.272586 kubelet[2392]: I1029 11:47:08.272471 2392 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 11:47:08.272586 kubelet[2392]: I1029 11:47:08.272526 2392 reconciler.go:26] "Reconciler: start to sync state" Oct 29 11:47:08.273174 kubelet[2392]: E1029 11:47:08.273140 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="200ms" Oct 29 11:47:08.273174 kubelet[2392]: E1029 11:47:08.273230 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 29 11:47:08.274498 kubelet[2392]: I1029 11:47:08.274256 2392 factory.go:223] Registration of the containerd container factory successfully Oct 29 11:47:08.274498 kubelet[2392]: I1029 11:47:08.274285 2392 factory.go:223] Registration of the systemd container factory successfully Oct 29 11:47:08.274498 kubelet[2392]: I1029 11:47:08.274369 2392 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 
11:47:08.278989 kubelet[2392]: E1029 11:47:08.278562 2392 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 11:47:08.287686 kubelet[2392]: I1029 11:47:08.287663 2392 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 11:47:08.287686 kubelet[2392]: I1029 11:47:08.287699 2392 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 11:47:08.287833 kubelet[2392]: I1029 11:47:08.287741 2392 state_mem.go:36] "Initialized new in-memory state store" Oct 29 11:47:08.291995 kubelet[2392]: I1029 11:47:08.291958 2392 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 29 11:47:08.365719 kubelet[2392]: I1029 11:47:08.293165 2392 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 29 11:47:08.365719 kubelet[2392]: I1029 11:47:08.293189 2392 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 29 11:47:08.365719 kubelet[2392]: I1029 11:47:08.293211 2392 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 29 11:47:08.365719 kubelet[2392]: I1029 11:47:08.293218 2392 kubelet.go:2436] "Starting kubelet main sync loop" Oct 29 11:47:08.365719 kubelet[2392]: E1029 11:47:08.293260 2392 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 11:47:08.365719 kubelet[2392]: E1029 11:47:08.293783 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 29 11:47:08.365719 kubelet[2392]: I1029 11:47:08.365696 2392 policy_none.go:49] "None policy: Start" Oct 29 11:47:08.365719 kubelet[2392]: I1029 11:47:08.365726 2392 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 11:47:08.365918 kubelet[2392]: I1029 11:47:08.365741 2392 state_mem.go:35] "Initializing new in-memory state store" Oct 29 11:47:08.370980 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 29 11:47:08.372782 kubelet[2392]: E1029 11:47:08.372740 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 11:47:08.381397 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 29 11:47:08.383938 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 29 11:47:08.393811 kubelet[2392]: E1029 11:47:08.393778 2392 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 29 11:47:08.401741 kubelet[2392]: E1029 11:47:08.401703 2392 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 11:47:08.401930 kubelet[2392]: I1029 11:47:08.401909 2392 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 11:47:08.402013 kubelet[2392]: I1029 11:47:08.401929 2392 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 11:47:08.402359 kubelet[2392]: I1029 11:47:08.402167 2392 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 11:47:08.402987 kubelet[2392]: E1029 11:47:08.402965 2392 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 29 11:47:08.403054 kubelet[2392]: E1029 11:47:08.403001 2392 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 29 11:47:08.474642 kubelet[2392]: E1029 11:47:08.474522 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="400ms" Oct 29 11:47:08.503961 kubelet[2392]: I1029 11:47:08.503908 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 11:47:08.504393 kubelet[2392]: E1029 11:47:08.504359 2392 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Oct 29 11:47:08.603935 systemd[1]: Created slice 
kubepods-burstable-pod71d0cf000e2174a1f6268ecdc0940e1b.slice - libcontainer container kubepods-burstable-pod71d0cf000e2174a1f6268ecdc0940e1b.slice. Oct 29 11:47:08.631455 kubelet[2392]: E1029 11:47:08.631410 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 11:47:08.635143 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Oct 29 11:47:08.636739 kubelet[2392]: E1029 11:47:08.636709 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 11:47:08.639227 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. Oct 29 11:47:08.640640 kubelet[2392]: E1029 11:47:08.640610 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 11:47:08.673915 kubelet[2392]: I1029 11:47:08.673889 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d0cf000e2174a1f6268ecdc0940e1b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"71d0cf000e2174a1f6268ecdc0940e1b\") " pod="kube-system/kube-apiserver-localhost" Oct 29 11:47:08.674015 kubelet[2392]: I1029 11:47:08.673923 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d0cf000e2174a1f6268ecdc0940e1b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"71d0cf000e2174a1f6268ecdc0940e1b\") " pod="kube-system/kube-apiserver-localhost" Oct 29 
11:47:08.674015 kubelet[2392]: I1029 11:47:08.673972 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 11:47:08.674015 kubelet[2392]: I1029 11:47:08.673987 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 11:47:08.674015 kubelet[2392]: I1029 11:47:08.674003 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 11:47:08.674015 kubelet[2392]: I1029 11:47:08.674016 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 11:47:08.674167 kubelet[2392]: I1029 11:47:08.674032 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d0cf000e2174a1f6268ecdc0940e1b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"71d0cf000e2174a1f6268ecdc0940e1b\") " 
pod="kube-system/kube-apiserver-localhost" Oct 29 11:47:08.674167 kubelet[2392]: I1029 11:47:08.674048 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 11:47:08.674167 kubelet[2392]: I1029 11:47:08.674063 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 29 11:47:08.705970 kubelet[2392]: I1029 11:47:08.705930 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 11:47:08.706250 kubelet[2392]: E1029 11:47:08.706211 2392 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Oct 29 11:47:08.876048 kubelet[2392]: E1029 11:47:08.876002 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="800ms" Oct 29 11:47:08.932708 kubelet[2392]: E1029 11:47:08.932649 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:08.933352 containerd[1601]: time="2025-10-29T11:47:08.933304930Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:71d0cf000e2174a1f6268ecdc0940e1b,Namespace:kube-system,Attempt:0,}" Oct 29 11:47:08.937545 kubelet[2392]: E1029 11:47:08.937508 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:08.937976 containerd[1601]: time="2025-10-29T11:47:08.937923010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Oct 29 11:47:08.941481 kubelet[2392]: E1029 11:47:08.941452 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:08.942097 containerd[1601]: time="2025-10-29T11:47:08.941980873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Oct 29 11:47:08.958112 containerd[1601]: time="2025-10-29T11:47:08.958069930Z" level=info msg="connecting to shim 522ec46f9041d09f5ee39e905540602edf314ac12fd82b717e8e889c31b2ea0c" address="unix:///run/containerd/s/0e6f9b25fc2a44dfd84934d9c32b53c16d98181aa4b6efb30a51094847ac3596" namespace=k8s.io protocol=ttrpc version=3 Oct 29 11:47:08.967937 containerd[1601]: time="2025-10-29T11:47:08.967892904Z" level=info msg="connecting to shim 26bef5c3daa16cf28e47a2c8a88c1131fad75bc1822e229840398646c0301ec9" address="unix:///run/containerd/s/a52d71a693d12b724f0db8cd75895ae95a831491301f8db05a1a93028f20b307" namespace=k8s.io protocol=ttrpc version=3 Oct 29 11:47:08.982928 containerd[1601]: time="2025-10-29T11:47:08.982881315Z" level=info msg="connecting to shim a9d26be0d760a7aa6965998a8a9f8164c4fc506b60995fef4174229ee37472bd" address="unix:///run/containerd/s/048958d39ac3994d7fcf8b10830fe7b02a1c919e4c8a53f7a8c29f0e5265eae0" 
namespace=k8s.io protocol=ttrpc version=3 Oct 29 11:47:08.991149 systemd[1]: Started cri-containerd-522ec46f9041d09f5ee39e905540602edf314ac12fd82b717e8e889c31b2ea0c.scope - libcontainer container 522ec46f9041d09f5ee39e905540602edf314ac12fd82b717e8e889c31b2ea0c. Oct 29 11:47:08.994489 systemd[1]: Started cri-containerd-26bef5c3daa16cf28e47a2c8a88c1131fad75bc1822e229840398646c0301ec9.scope - libcontainer container 26bef5c3daa16cf28e47a2c8a88c1131fad75bc1822e229840398646c0301ec9. Oct 29 11:47:09.014148 systemd[1]: Started cri-containerd-a9d26be0d760a7aa6965998a8a9f8164c4fc506b60995fef4174229ee37472bd.scope - libcontainer container a9d26be0d760a7aa6965998a8a9f8164c4fc506b60995fef4174229ee37472bd. Oct 29 11:47:09.029288 containerd[1601]: time="2025-10-29T11:47:09.029234460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:71d0cf000e2174a1f6268ecdc0940e1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"522ec46f9041d09f5ee39e905540602edf314ac12fd82b717e8e889c31b2ea0c\"" Oct 29 11:47:09.030537 kubelet[2392]: E1029 11:47:09.030513 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:09.034407 containerd[1601]: time="2025-10-29T11:47:09.034364201Z" level=info msg="CreateContainer within sandbox \"522ec46f9041d09f5ee39e905540602edf314ac12fd82b717e8e889c31b2ea0c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 29 11:47:09.043970 containerd[1601]: time="2025-10-29T11:47:09.043863963Z" level=info msg="Container b9ab8a75711a4edb8a16a1637f0729e4bb9731f599d7a947f211921041d25491: CDI devices from CRI Config.CDIDevices: []" Oct 29 11:47:09.051842 containerd[1601]: time="2025-10-29T11:47:09.051786011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"26bef5c3daa16cf28e47a2c8a88c1131fad75bc1822e229840398646c0301ec9\"" Oct 29 11:47:09.052798 containerd[1601]: time="2025-10-29T11:47:09.052744654Z" level=info msg="CreateContainer within sandbox \"522ec46f9041d09f5ee39e905540602edf314ac12fd82b717e8e889c31b2ea0c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b9ab8a75711a4edb8a16a1637f0729e4bb9731f599d7a947f211921041d25491\"" Oct 29 11:47:09.053526 kubelet[2392]: E1029 11:47:09.053493 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:09.055048 containerd[1601]: time="2025-10-29T11:47:09.055012605Z" level=info msg="StartContainer for \"b9ab8a75711a4edb8a16a1637f0729e4bb9731f599d7a947f211921041d25491\"" Oct 29 11:47:09.056843 containerd[1601]: time="2025-10-29T11:47:09.056806620Z" level=info msg="connecting to shim b9ab8a75711a4edb8a16a1637f0729e4bb9731f599d7a947f211921041d25491" address="unix:///run/containerd/s/0e6f9b25fc2a44dfd84934d9c32b53c16d98181aa4b6efb30a51094847ac3596" protocol=ttrpc version=3 Oct 29 11:47:09.064327 containerd[1601]: time="2025-10-29T11:47:09.064297185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9d26be0d760a7aa6965998a8a9f8164c4fc506b60995fef4174229ee37472bd\"" Oct 29 11:47:09.064967 containerd[1601]: time="2025-10-29T11:47:09.064776887Z" level=info msg="CreateContainer within sandbox \"26bef5c3daa16cf28e47a2c8a88c1131fad75bc1822e229840398646c0301ec9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 29 11:47:09.065049 kubelet[2392]: E1029 11:47:09.064961 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:09.070264 
containerd[1601]: time="2025-10-29T11:47:09.070227418Z" level=info msg="CreateContainer within sandbox \"a9d26be0d760a7aa6965998a8a9f8164c4fc506b60995fef4174229ee37472bd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 29 11:47:09.081229 containerd[1601]: time="2025-10-29T11:47:09.081192396Z" level=info msg="Container 1ac1ee73c2efa0ef5728af5ba52c4b216049c994126f0affcbbb579c66d078ae: CDI devices from CRI Config.CDIDevices: []" Oct 29 11:47:09.082215 systemd[1]: Started cri-containerd-b9ab8a75711a4edb8a16a1637f0729e4bb9731f599d7a947f211921041d25491.scope - libcontainer container b9ab8a75711a4edb8a16a1637f0729e4bb9731f599d7a947f211921041d25491. Oct 29 11:47:09.084197 containerd[1601]: time="2025-10-29T11:47:09.083563752Z" level=info msg="Container 05722d6d3945a6c4d2416d461fb928a680f1fdc821b5ee03ae365553ea68ea64: CDI devices from CRI Config.CDIDevices: []" Oct 29 11:47:09.086612 containerd[1601]: time="2025-10-29T11:47:09.086573603Z" level=info msg="CreateContainer within sandbox \"26bef5c3daa16cf28e47a2c8a88c1131fad75bc1822e229840398646c0301ec9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1ac1ee73c2efa0ef5728af5ba52c4b216049c994126f0affcbbb579c66d078ae\"" Oct 29 11:47:09.087179 containerd[1601]: time="2025-10-29T11:47:09.087147579Z" level=info msg="StartContainer for \"1ac1ee73c2efa0ef5728af5ba52c4b216049c994126f0affcbbb579c66d078ae\"" Oct 29 11:47:09.088407 containerd[1601]: time="2025-10-29T11:47:09.088380114Z" level=info msg="connecting to shim 1ac1ee73c2efa0ef5728af5ba52c4b216049c994126f0affcbbb579c66d078ae" address="unix:///run/containerd/s/a52d71a693d12b724f0db8cd75895ae95a831491301f8db05a1a93028f20b307" protocol=ttrpc version=3 Oct 29 11:47:09.093418 containerd[1601]: time="2025-10-29T11:47:09.093384624Z" level=info msg="CreateContainer within sandbox \"a9d26be0d760a7aa6965998a8a9f8164c4fc506b60995fef4174229ee37472bd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"05722d6d3945a6c4d2416d461fb928a680f1fdc821b5ee03ae365553ea68ea64\"" Oct 29 11:47:09.093914 containerd[1601]: time="2025-10-29T11:47:09.093891719Z" level=info msg="StartContainer for \"05722d6d3945a6c4d2416d461fb928a680f1fdc821b5ee03ae365553ea68ea64\"" Oct 29 11:47:09.096278 containerd[1601]: time="2025-10-29T11:47:09.096250299Z" level=info msg="connecting to shim 05722d6d3945a6c4d2416d461fb928a680f1fdc821b5ee03ae365553ea68ea64" address="unix:///run/containerd/s/048958d39ac3994d7fcf8b10830fe7b02a1c919e4c8a53f7a8c29f0e5265eae0" protocol=ttrpc version=3 Oct 29 11:47:09.107098 systemd[1]: Started cri-containerd-1ac1ee73c2efa0ef5728af5ba52c4b216049c994126f0affcbbb579c66d078ae.scope - libcontainer container 1ac1ee73c2efa0ef5728af5ba52c4b216049c994126f0affcbbb579c66d078ae. Oct 29 11:47:09.108614 kubelet[2392]: I1029 11:47:09.108208 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 11:47:09.109242 kubelet[2392]: E1029 11:47:09.109199 2392 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Oct 29 11:47:09.121192 systemd[1]: Started cri-containerd-05722d6d3945a6c4d2416d461fb928a680f1fdc821b5ee03ae365553ea68ea64.scope - libcontainer container 05722d6d3945a6c4d2416d461fb928a680f1fdc821b5ee03ae365553ea68ea64. 
Oct 29 11:47:09.125190 kubelet[2392]: E1029 11:47:09.125142 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 29 11:47:09.131430 containerd[1601]: time="2025-10-29T11:47:09.131211301Z" level=info msg="StartContainer for \"b9ab8a75711a4edb8a16a1637f0729e4bb9731f599d7a947f211921041d25491\" returns successfully" Oct 29 11:47:09.166476 containerd[1601]: time="2025-10-29T11:47:09.166437264Z" level=info msg="StartContainer for \"1ac1ee73c2efa0ef5728af5ba52c4b216049c994126f0affcbbb579c66d078ae\" returns successfully" Oct 29 11:47:09.170771 containerd[1601]: time="2025-10-29T11:47:09.170744368Z" level=info msg="StartContainer for \"05722d6d3945a6c4d2416d461fb928a680f1fdc821b5ee03ae365553ea68ea64\" returns successfully" Oct 29 11:47:09.299223 kubelet[2392]: E1029 11:47:09.299190 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 11:47:09.299541 kubelet[2392]: E1029 11:47:09.299328 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:09.301800 kubelet[2392]: E1029 11:47:09.301780 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 11:47:09.302379 kubelet[2392]: E1029 11:47:09.301888 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:09.303480 kubelet[2392]: E1029 11:47:09.303460 2392 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 11:47:09.303576 kubelet[2392]: E1029 11:47:09.303559 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:09.910778 kubelet[2392]: I1029 11:47:09.910742 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 11:47:10.308955 kubelet[2392]: E1029 11:47:10.306471 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 11:47:10.308955 kubelet[2392]: E1029 11:47:10.306595 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:10.308955 kubelet[2392]: E1029 11:47:10.306829 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 11:47:10.308955 kubelet[2392]: E1029 11:47:10.306916 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:10.308955 kubelet[2392]: E1029 11:47:10.306933 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 11:47:10.308955 kubelet[2392]: E1029 11:47:10.307045 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:11.397012 kubelet[2392]: E1029 11:47:11.396973 2392 kubelet.go:3305] "No need to create a mirror pod, 
since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 11:47:11.397333 kubelet[2392]: E1029 11:47:11.397104 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:11.898066 kubelet[2392]: E1029 11:47:11.898026 2392 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 29 11:47:11.941340 kubelet[2392]: I1029 11:47:11.941304 2392 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 11:47:11.941742 kubelet[2392]: E1029 11:47:11.941686 2392 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 29 11:47:11.973171 kubelet[2392]: I1029 11:47:11.973135 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 11:47:11.981002 kubelet[2392]: E1029 11:47:11.980968 2392 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 29 11:47:11.981002 kubelet[2392]: I1029 11:47:11.980999 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 11:47:11.982550 kubelet[2392]: E1029 11:47:11.982526 2392 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 29 11:47:11.982550 kubelet[2392]: I1029 11:47:11.982550 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 11:47:11.984093 kubelet[2392]: E1029 11:47:11.984057 2392 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 29 11:47:12.264618 kubelet[2392]: I1029 11:47:12.264507 2392 apiserver.go:52] "Watching apiserver" Oct 29 11:47:12.272555 kubelet[2392]: I1029 11:47:12.272529 2392 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 11:47:14.137378 systemd[1]: Reload requested from client PID 2677 ('systemctl') (unit session-7.scope)... Oct 29 11:47:14.137396 systemd[1]: Reloading... Oct 29 11:47:14.201012 zram_generator::config[2721]: No configuration found. Oct 29 11:47:14.364148 systemd[1]: Reloading finished in 226 ms. Oct 29 11:47:14.403550 kubelet[2392]: I1029 11:47:14.403451 2392 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 11:47:14.403656 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 11:47:14.415713 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 11:47:14.416019 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 11:47:14.416074 systemd[1]: kubelet.service: Consumed 1.457s CPU time, 128.9M memory peak. Oct 29 11:47:14.417643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 11:47:14.564123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 11:47:14.580299 (kubelet)[2763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 11:47:14.616412 kubelet[2763]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 29 11:47:14.616412 kubelet[2763]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 11:47:14.616412 kubelet[2763]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 11:47:14.616734 kubelet[2763]: I1029 11:47:14.616445 2763 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 11:47:14.621975 kubelet[2763]: I1029 11:47:14.621715 2763 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 29 11:47:14.621975 kubelet[2763]: I1029 11:47:14.621745 2763 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 11:47:14.622148 kubelet[2763]: I1029 11:47:14.622133 2763 server.go:956] "Client rotation is on, will bootstrap in background" Oct 29 11:47:14.623780 kubelet[2763]: I1029 11:47:14.623750 2763 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 29 11:47:14.626816 kubelet[2763]: I1029 11:47:14.626786 2763 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 11:47:14.630674 kubelet[2763]: I1029 11:47:14.630653 2763 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 11:47:14.633299 kubelet[2763]: I1029 11:47:14.633262 2763 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 11:47:14.633505 kubelet[2763]: I1029 11:47:14.633481 2763 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 11:47:14.633645 kubelet[2763]: I1029 11:47:14.633503 2763 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 11:47:14.633717 kubelet[2763]: I1029 11:47:14.633656 2763 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 11:47:14.633717 
kubelet[2763]: I1029 11:47:14.633664 2763 container_manager_linux.go:303] "Creating device plugin manager" Oct 29 11:47:14.633717 kubelet[2763]: I1029 11:47:14.633702 2763 state_mem.go:36] "Initialized new in-memory state store" Oct 29 11:47:14.633856 kubelet[2763]: I1029 11:47:14.633845 2763 kubelet.go:480] "Attempting to sync node with API server" Oct 29 11:47:14.633884 kubelet[2763]: I1029 11:47:14.633859 2763 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 11:47:14.633912 kubelet[2763]: I1029 11:47:14.633900 2763 kubelet.go:386] "Adding apiserver pod source" Oct 29 11:47:14.633940 kubelet[2763]: I1029 11:47:14.633914 2763 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 11:47:14.634903 kubelet[2763]: I1029 11:47:14.634706 2763 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 11:47:14.635347 kubelet[2763]: I1029 11:47:14.635323 2763 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 29 11:47:14.637751 kubelet[2763]: I1029 11:47:14.637716 2763 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 11:47:14.637751 kubelet[2763]: I1029 11:47:14.637752 2763 server.go:1289] "Started kubelet" Oct 29 11:47:14.639603 kubelet[2763]: I1029 11:47:14.639523 2763 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 11:47:14.639887 kubelet[2763]: I1029 11:47:14.639856 2763 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 11:47:14.640170 kubelet[2763]: I1029 11:47:14.640137 2763 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 11:47:14.645752 kubelet[2763]: I1029 11:47:14.645304 2763 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 11:47:14.647009 
kubelet[2763]: I1029 11:47:14.646398 2763 server.go:317] "Adding debug handlers to kubelet server" Oct 29 11:47:14.651176 kubelet[2763]: I1029 11:47:14.651106 2763 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 11:47:14.651821 kubelet[2763]: I1029 11:47:14.651441 2763 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 11:47:14.651821 kubelet[2763]: E1029 11:47:14.651596 2763 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 11:47:14.652705 kubelet[2763]: I1029 11:47:14.652684 2763 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 11:47:14.652824 kubelet[2763]: I1029 11:47:14.652811 2763 reconciler.go:26] "Reconciler: start to sync state" Oct 29 11:47:14.657681 kubelet[2763]: I1029 11:47:14.657062 2763 factory.go:223] Registration of the systemd container factory successfully Oct 29 11:47:14.657681 kubelet[2763]: I1029 11:47:14.657164 2763 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 11:47:14.658401 kubelet[2763]: I1029 11:47:14.658368 2763 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 29 11:47:14.658971 kubelet[2763]: E1029 11:47:14.658932 2763 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 11:47:14.659280 kubelet[2763]: I1029 11:47:14.659242 2763 factory.go:223] Registration of the containerd container factory successfully Oct 29 11:47:14.670004 kubelet[2763]: I1029 11:47:14.669977 2763 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Oct 29 11:47:14.670004 kubelet[2763]: I1029 11:47:14.670002 2763 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 29 11:47:14.670114 kubelet[2763]: I1029 11:47:14.670030 2763 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 29 11:47:14.670114 kubelet[2763]: I1029 11:47:14.670039 2763 kubelet.go:2436] "Starting kubelet main sync loop" Oct 29 11:47:14.670114 kubelet[2763]: E1029 11:47:14.670085 2763 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 11:47:14.699089 kubelet[2763]: I1029 11:47:14.699065 2763 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 11:47:14.699089 kubelet[2763]: I1029 11:47:14.699083 2763 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 11:47:14.699228 kubelet[2763]: I1029 11:47:14.699105 2763 state_mem.go:36] "Initialized new in-memory state store" Oct 29 11:47:14.699228 kubelet[2763]: I1029 11:47:14.699214 2763 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 29 11:47:14.699267 kubelet[2763]: I1029 11:47:14.699225 2763 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 29 11:47:14.699267 kubelet[2763]: I1029 11:47:14.699240 2763 policy_none.go:49] "None policy: Start" Oct 29 11:47:14.699267 kubelet[2763]: I1029 11:47:14.699250 2763 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 11:47:14.699267 kubelet[2763]: I1029 11:47:14.699258 2763 state_mem.go:35] "Initializing new in-memory state store" Oct 29 11:47:14.699342 kubelet[2763]: I1029 11:47:14.699328 2763 state_mem.go:75] "Updated machine memory state" Oct 29 11:47:14.705219 kubelet[2763]: E1029 11:47:14.704708 2763 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 11:47:14.705219 kubelet[2763]: I1029 
11:47:14.704872 2763 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 11:47:14.705219 kubelet[2763]: I1029 11:47:14.704883 2763 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 11:47:14.705568 kubelet[2763]: I1029 11:47:14.705408 2763 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 11:47:14.707054 kubelet[2763]: E1029 11:47:14.707018 2763 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 29 11:47:14.771661 kubelet[2763]: I1029 11:47:14.771610 2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 11:47:14.771778 kubelet[2763]: I1029 11:47:14.771636 2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 11:47:14.772669 kubelet[2763]: I1029 11:47:14.772648 2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 11:47:14.810153 kubelet[2763]: I1029 11:47:14.810129 2763 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 11:47:14.816905 kubelet[2763]: I1029 11:47:14.816883 2763 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 29 11:47:14.817053 kubelet[2763]: I1029 11:47:14.817042 2763 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 11:47:14.954135 kubelet[2763]: I1029 11:47:14.953824 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d0cf000e2174a1f6268ecdc0940e1b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"71d0cf000e2174a1f6268ecdc0940e1b\") " pod="kube-system/kube-apiserver-localhost" Oct 29 11:47:14.954135 kubelet[2763]: I1029 11:47:14.953896 2763 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 11:47:14.954135 kubelet[2763]: I1029 11:47:14.953964 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 11:47:14.954135 kubelet[2763]: I1029 11:47:14.954009 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 11:47:14.954135 kubelet[2763]: I1029 11:47:14.954063 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 11:47:14.954816 kubelet[2763]: I1029 11:47:14.954189 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 29 11:47:14.955034 kubelet[2763]: I1029 11:47:14.954923 2763 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d0cf000e2174a1f6268ecdc0940e1b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"71d0cf000e2174a1f6268ecdc0940e1b\") " pod="kube-system/kube-apiserver-localhost" Oct 29 11:47:14.955218 kubelet[2763]: I1029 11:47:14.955164 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d0cf000e2174a1f6268ecdc0940e1b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"71d0cf000e2174a1f6268ecdc0940e1b\") " pod="kube-system/kube-apiserver-localhost" Oct 29 11:47:14.955341 kubelet[2763]: I1029 11:47:14.955317 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 11:47:15.077749 kubelet[2763]: E1029 11:47:15.077595 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:15.077749 kubelet[2763]: E1029 11:47:15.077677 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:15.078015 kubelet[2763]: E1029 11:47:15.077922 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:15.635224 kubelet[2763]: I1029 11:47:15.634587 2763 apiserver.go:52] "Watching apiserver" Oct 29 11:47:15.653480 kubelet[2763]: I1029 
11:47:15.653455 2763 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 11:47:15.690484 kubelet[2763]: I1029 11:47:15.690452 2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 11:47:15.690747 kubelet[2763]: I1029 11:47:15.690716 2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 11:47:15.691084 kubelet[2763]: I1029 11:47:15.691019 2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 11:47:15.696883 kubelet[2763]: E1029 11:47:15.696822 2763 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 29 11:47:15.697015 kubelet[2763]: E1029 11:47:15.696995 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:15.697458 kubelet[2763]: E1029 11:47:15.697262 2763 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 29 11:47:15.697458 kubelet[2763]: E1029 11:47:15.697402 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:15.700419 kubelet[2763]: E1029 11:47:15.700388 2763 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 29 11:47:15.700678 kubelet[2763]: E1029 11:47:15.700580 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 
11:47:15.728054 kubelet[2763]: I1029 11:47:15.727987 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7279711720000002 podStartE2EDuration="1.727971172s" podCreationTimestamp="2025-10-29 11:47:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 11:47:15.717502981 +0000 UTC m=+1.133680636" watchObservedRunningTime="2025-10-29 11:47:15.727971172 +0000 UTC m=+1.144148827" Oct 29 11:47:15.736313 kubelet[2763]: I1029 11:47:15.736271 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.736257583 podStartE2EDuration="1.736257583s" podCreationTimestamp="2025-10-29 11:47:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 11:47:15.728118543 +0000 UTC m=+1.144296198" watchObservedRunningTime="2025-10-29 11:47:15.736257583 +0000 UTC m=+1.152435238" Oct 29 11:47:15.745728 kubelet[2763]: I1029 11:47:15.745679 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.745666644 podStartE2EDuration="1.745666644s" podCreationTimestamp="2025-10-29 11:47:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 11:47:15.737284694 +0000 UTC m=+1.153462349" watchObservedRunningTime="2025-10-29 11:47:15.745666644 +0000 UTC m=+1.161844299" Oct 29 11:47:16.692924 kubelet[2763]: E1029 11:47:16.692889 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:16.693318 kubelet[2763]: E1029 11:47:16.692993 2763 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:16.693318 kubelet[2763]: E1029 11:47:16.693106 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:20.081689 kubelet[2763]: E1029 11:47:20.081609 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:20.454289 kubelet[2763]: I1029 11:47:20.454144 2763 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 29 11:47:20.454688 containerd[1601]: time="2025-10-29T11:47:20.454656704Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 29 11:47:20.455235 kubelet[2763]: I1029 11:47:20.454818 2763 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 29 11:47:20.651952 kubelet[2763]: E1029 11:47:20.651904 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:20.698202 kubelet[2763]: E1029 11:47:20.698166 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:20.699103 kubelet[2763]: E1029 11:47:20.699014 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:21.497715 systemd[1]: Created slice kubepods-besteffort-pod2395d8b3_2fcf_4da8_a88a_34407aae41c4.slice - libcontainer container 
kubepods-besteffort-pod2395d8b3_2fcf_4da8_a88a_34407aae41c4.slice. Oct 29 11:47:21.607324 kubelet[2763]: I1029 11:47:21.607276 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2395d8b3-2fcf-4da8-a88a-34407aae41c4-lib-modules\") pod \"kube-proxy-l99b6\" (UID: \"2395d8b3-2fcf-4da8-a88a-34407aae41c4\") " pod="kube-system/kube-proxy-l99b6" Oct 29 11:47:21.607324 kubelet[2763]: I1029 11:47:21.607312 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkfx5\" (UniqueName: \"kubernetes.io/projected/2395d8b3-2fcf-4da8-a88a-34407aae41c4-kube-api-access-vkfx5\") pod \"kube-proxy-l99b6\" (UID: \"2395d8b3-2fcf-4da8-a88a-34407aae41c4\") " pod="kube-system/kube-proxy-l99b6" Oct 29 11:47:21.607670 kubelet[2763]: I1029 11:47:21.607336 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2395d8b3-2fcf-4da8-a88a-34407aae41c4-kube-proxy\") pod \"kube-proxy-l99b6\" (UID: \"2395d8b3-2fcf-4da8-a88a-34407aae41c4\") " pod="kube-system/kube-proxy-l99b6" Oct 29 11:47:21.607670 kubelet[2763]: I1029 11:47:21.607379 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2395d8b3-2fcf-4da8-a88a-34407aae41c4-xtables-lock\") pod \"kube-proxy-l99b6\" (UID: \"2395d8b3-2fcf-4da8-a88a-34407aae41c4\") " pod="kube-system/kube-proxy-l99b6" Oct 29 11:47:21.700438 kubelet[2763]: E1029 11:47:21.700406 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:21.713681 systemd[1]: Created slice kubepods-besteffort-pod8cb8d866_f3c7_44a6_88cd_d81256dcaa24.slice - libcontainer container 
kubepods-besteffort-pod8cb8d866_f3c7_44a6_88cd_d81256dcaa24.slice. Oct 29 11:47:21.808392 kubelet[2763]: I1029 11:47:21.808341 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8cb8d866-f3c7-44a6-88cd-d81256dcaa24-var-lib-calico\") pod \"tigera-operator-7dcd859c48-rqs2x\" (UID: \"8cb8d866-f3c7-44a6-88cd-d81256dcaa24\") " pod="tigera-operator/tigera-operator-7dcd859c48-rqs2x" Oct 29 11:47:21.808522 kubelet[2763]: I1029 11:47:21.808404 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22bfb\" (UniqueName: \"kubernetes.io/projected/8cb8d866-f3c7-44a6-88cd-d81256dcaa24-kube-api-access-22bfb\") pod \"tigera-operator-7dcd859c48-rqs2x\" (UID: \"8cb8d866-f3c7-44a6-88cd-d81256dcaa24\") " pod="tigera-operator/tigera-operator-7dcd859c48-rqs2x" Oct 29 11:47:21.808522 kubelet[2763]: E1029 11:47:21.808366 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:21.808940 containerd[1601]: time="2025-10-29T11:47:21.808901077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l99b6,Uid:2395d8b3-2fcf-4da8-a88a-34407aae41c4,Namespace:kube-system,Attempt:0,}" Oct 29 11:47:21.825483 containerd[1601]: time="2025-10-29T11:47:21.825446514Z" level=info msg="connecting to shim c95054e99b82bcbec17c9c4a5d8888f63a7c3d7bb913135498717603604f7cf8" address="unix:///run/containerd/s/affe0fb4b620134702ae9aff48e7f86a375856b4e2a754aed67aec2d6e81e531" namespace=k8s.io protocol=ttrpc version=3 Oct 29 11:47:21.856104 systemd[1]: Started cri-containerd-c95054e99b82bcbec17c9c4a5d8888f63a7c3d7bb913135498717603604f7cf8.scope - libcontainer container c95054e99b82bcbec17c9c4a5d8888f63a7c3d7bb913135498717603604f7cf8. 
Oct 29 11:47:21.880973 containerd[1601]: time="2025-10-29T11:47:21.880927592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l99b6,Uid:2395d8b3-2fcf-4da8-a88a-34407aae41c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c95054e99b82bcbec17c9c4a5d8888f63a7c3d7bb913135498717603604f7cf8\"" Oct 29 11:47:21.881701 kubelet[2763]: E1029 11:47:21.881677 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:21.886035 containerd[1601]: time="2025-10-29T11:47:21.885995513Z" level=info msg="CreateContainer within sandbox \"c95054e99b82bcbec17c9c4a5d8888f63a7c3d7bb913135498717603604f7cf8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 29 11:47:21.893842 containerd[1601]: time="2025-10-29T11:47:21.893813376Z" level=info msg="Container 08a21c2e84894867c73bbec96f1301374c56420a2d75eb35a9b0b50d3cb9e74e: CDI devices from CRI Config.CDIDevices: []" Oct 29 11:47:21.902210 containerd[1601]: time="2025-10-29T11:47:21.902174416Z" level=info msg="CreateContainer within sandbox \"c95054e99b82bcbec17c9c4a5d8888f63a7c3d7bb913135498717603604f7cf8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"08a21c2e84894867c73bbec96f1301374c56420a2d75eb35a9b0b50d3cb9e74e\"" Oct 29 11:47:21.903005 containerd[1601]: time="2025-10-29T11:47:21.902930374Z" level=info msg="StartContainer for \"08a21c2e84894867c73bbec96f1301374c56420a2d75eb35a9b0b50d3cb9e74e\"" Oct 29 11:47:21.904567 containerd[1601]: time="2025-10-29T11:47:21.904542257Z" level=info msg="connecting to shim 08a21c2e84894867c73bbec96f1301374c56420a2d75eb35a9b0b50d3cb9e74e" address="unix:///run/containerd/s/affe0fb4b620134702ae9aff48e7f86a375856b4e2a754aed67aec2d6e81e531" protocol=ttrpc version=3 Oct 29 11:47:21.939199 systemd[1]: Started cri-containerd-08a21c2e84894867c73bbec96f1301374c56420a2d75eb35a9b0b50d3cb9e74e.scope - libcontainer 
container 08a21c2e84894867c73bbec96f1301374c56420a2d75eb35a9b0b50d3cb9e74e. Oct 29 11:47:21.981520 containerd[1601]: time="2025-10-29T11:47:21.981489022Z" level=info msg="StartContainer for \"08a21c2e84894867c73bbec96f1301374c56420a2d75eb35a9b0b50d3cb9e74e\" returns successfully" Oct 29 11:47:22.017989 containerd[1601]: time="2025-10-29T11:47:22.017836012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rqs2x,Uid:8cb8d866-f3c7-44a6-88cd-d81256dcaa24,Namespace:tigera-operator,Attempt:0,}" Oct 29 11:47:22.034645 containerd[1601]: time="2025-10-29T11:47:22.034600169Z" level=info msg="connecting to shim 09d6d5c786bcab007e08cb845fac8fc8be4ee96f78b2d9904c573b3ffae648ea" address="unix:///run/containerd/s/b4a8f41bb63554fa3567e58d345acc05a4af6340f34571c7fae9f9ae47cce38f" namespace=k8s.io protocol=ttrpc version=3 Oct 29 11:47:22.056119 systemd[1]: Started cri-containerd-09d6d5c786bcab007e08cb845fac8fc8be4ee96f78b2d9904c573b3ffae648ea.scope - libcontainer container 09d6d5c786bcab007e08cb845fac8fc8be4ee96f78b2d9904c573b3ffae648ea. 
Oct 29 11:47:22.091427 containerd[1601]: time="2025-10-29T11:47:22.091094181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rqs2x,Uid:8cb8d866-f3c7-44a6-88cd-d81256dcaa24,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"09d6d5c786bcab007e08cb845fac8fc8be4ee96f78b2d9904c573b3ffae648ea\"" Oct 29 11:47:22.094293 containerd[1601]: time="2025-10-29T11:47:22.094052868Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 29 11:47:22.705633 kubelet[2763]: E1029 11:47:22.705596 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:22.719657 kubelet[2763]: I1029 11:47:22.719577 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l99b6" podStartSLOduration=1.719563433 podStartE2EDuration="1.719563433s" podCreationTimestamp="2025-10-29 11:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 11:47:22.719183302 +0000 UTC m=+8.135360997" watchObservedRunningTime="2025-10-29 11:47:22.719563433 +0000 UTC m=+8.135741048" Oct 29 11:47:23.410157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1171246805.mount: Deactivated successfully. 
Oct 29 11:47:25.386233 kubelet[2763]: E1029 11:47:25.386150 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:25.898397 containerd[1601]: time="2025-10-29T11:47:25.898355039Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:25.899480 containerd[1601]: time="2025-10-29T11:47:25.899451218Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Oct 29 11:47:25.900434 containerd[1601]: time="2025-10-29T11:47:25.900387175Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:25.902390 containerd[1601]: time="2025-10-29T11:47:25.902347363Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:25.903042 containerd[1601]: time="2025-10-29T11:47:25.903019820Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.808931977s" Oct 29 11:47:25.903127 containerd[1601]: time="2025-10-29T11:47:25.903110255Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Oct 29 11:47:25.909006 containerd[1601]: time="2025-10-29T11:47:25.908732641Z" level=info msg="CreateContainer within sandbox 
\"09d6d5c786bcab007e08cb845fac8fc8be4ee96f78b2d9904c573b3ffae648ea\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 29 11:47:25.916168 containerd[1601]: time="2025-10-29T11:47:25.915639358Z" level=info msg="Container 83f0c2484803284107aa8b4e61f008d46cffd9b7af7d177f3506d8295e4391d8: CDI devices from CRI Config.CDIDevices: []" Oct 29 11:47:25.918769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1992814646.mount: Deactivated successfully. Oct 29 11:47:25.922236 containerd[1601]: time="2025-10-29T11:47:25.922198942Z" level=info msg="CreateContainer within sandbox \"09d6d5c786bcab007e08cb845fac8fc8be4ee96f78b2d9904c573b3ffae648ea\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"83f0c2484803284107aa8b4e61f008d46cffd9b7af7d177f3506d8295e4391d8\"" Oct 29 11:47:25.922839 containerd[1601]: time="2025-10-29T11:47:25.922819059Z" level=info msg="StartContainer for \"83f0c2484803284107aa8b4e61f008d46cffd9b7af7d177f3506d8295e4391d8\"" Oct 29 11:47:25.923788 containerd[1601]: time="2025-10-29T11:47:25.923758017Z" level=info msg="connecting to shim 83f0c2484803284107aa8b4e61f008d46cffd9b7af7d177f3506d8295e4391d8" address="unix:///run/containerd/s/b4a8f41bb63554fa3567e58d345acc05a4af6340f34571c7fae9f9ae47cce38f" protocol=ttrpc version=3 Oct 29 11:47:25.974120 systemd[1]: Started cri-containerd-83f0c2484803284107aa8b4e61f008d46cffd9b7af7d177f3506d8295e4391d8.scope - libcontainer container 83f0c2484803284107aa8b4e61f008d46cffd9b7af7d177f3506d8295e4391d8. Oct 29 11:47:26.000982 containerd[1601]: time="2025-10-29T11:47:26.000928679Z" level=info msg="StartContainer for \"83f0c2484803284107aa8b4e61f008d46cffd9b7af7d177f3506d8295e4391d8\" returns successfully" Oct 29 11:47:28.885081 update_engine[1587]: I20251029 11:47:28.884989 1587 update_attempter.cc:509] Updating boot flags... 
Oct 29 11:47:31.428026 sudo[1812]: pam_unix(sudo:session): session closed for user root
Oct 29 11:47:31.431040 sshd[1811]: Connection closed by 10.0.0.1 port 55570
Oct 29 11:47:31.431518 sshd-session[1808]: pam_unix(sshd:session): session closed for user core
Oct 29 11:47:31.436496 systemd[1]: sshd@6-10.0.0.75:22-10.0.0.1:55570.service: Deactivated successfully.
Oct 29 11:47:31.441197 systemd[1]: session-7.scope: Deactivated successfully.
Oct 29 11:47:31.441463 systemd[1]: session-7.scope: Consumed 7.382s CPU time, 208.8M memory peak.
Oct 29 11:47:31.443230 systemd-logind[1586]: Session 7 logged out. Waiting for processes to exit.
Oct 29 11:47:31.444703 systemd-logind[1586]: Removed session 7.
Oct 29 11:47:39.034502 kubelet[2763]: I1029 11:47:39.033990 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-rqs2x" podStartSLOduration=14.221669239 podStartE2EDuration="18.033973884s" podCreationTimestamp="2025-10-29 11:47:21 +0000 UTC" firstStartedPulling="2025-10-29 11:47:22.093771542 +0000 UTC m=+7.509949157" lastFinishedPulling="2025-10-29 11:47:25.906076187 +0000 UTC m=+11.322253802" observedRunningTime="2025-10-29 11:47:26.724271852 +0000 UTC m=+12.140449507" watchObservedRunningTime="2025-10-29 11:47:39.033973884 +0000 UTC m=+24.450151539"
Oct 29 11:47:39.050991 systemd[1]: Created slice kubepods-besteffort-pod238a293b_27dd_4c71_8684_2780dbb57e8c.slice - libcontainer container kubepods-besteffort-pod238a293b_27dd_4c71_8684_2780dbb57e8c.slice.
Oct 29 11:47:39.131402 kubelet[2763]: I1029 11:47:39.131346 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/238a293b-27dd-4c71-8684-2780dbb57e8c-typha-certs\") pod \"calico-typha-7df98b6bcd-6k2wr\" (UID: \"238a293b-27dd-4c71-8684-2780dbb57e8c\") " pod="calico-system/calico-typha-7df98b6bcd-6k2wr"
Oct 29 11:47:39.131402 kubelet[2763]: I1029 11:47:39.131414 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlh5n\" (UniqueName: \"kubernetes.io/projected/238a293b-27dd-4c71-8684-2780dbb57e8c-kube-api-access-wlh5n\") pod \"calico-typha-7df98b6bcd-6k2wr\" (UID: \"238a293b-27dd-4c71-8684-2780dbb57e8c\") " pod="calico-system/calico-typha-7df98b6bcd-6k2wr"
Oct 29 11:47:39.131612 kubelet[2763]: I1029 11:47:39.131437 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/238a293b-27dd-4c71-8684-2780dbb57e8c-tigera-ca-bundle\") pod \"calico-typha-7df98b6bcd-6k2wr\" (UID: \"238a293b-27dd-4c71-8684-2780dbb57e8c\") " pod="calico-system/calico-typha-7df98b6bcd-6k2wr"
Oct 29 11:47:39.245244 systemd[1]: Created slice kubepods-besteffort-pod095845b8_9fa0_4e64_9a19_39768b778b0c.slice - libcontainer container kubepods-besteffort-pod095845b8_9fa0_4e64_9a19_39768b778b0c.slice.
Oct 29 11:47:39.332971 kubelet[2763]: I1029 11:47:39.332912 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/095845b8-9fa0-4e64-9a19-39768b778b0c-flexvol-driver-host\") pod \"calico-node-dgwc8\" (UID: \"095845b8-9fa0-4e64-9a19-39768b778b0c\") " pod="calico-system/calico-node-dgwc8"
Oct 29 11:47:39.332971 kubelet[2763]: I1029 11:47:39.332972 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/095845b8-9fa0-4e64-9a19-39768b778b0c-node-certs\") pod \"calico-node-dgwc8\" (UID: \"095845b8-9fa0-4e64-9a19-39768b778b0c\") " pod="calico-system/calico-node-dgwc8"
Oct 29 11:47:39.333128 kubelet[2763]: I1029 11:47:39.332992 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k222s\" (UniqueName: \"kubernetes.io/projected/095845b8-9fa0-4e64-9a19-39768b778b0c-kube-api-access-k222s\") pod \"calico-node-dgwc8\" (UID: \"095845b8-9fa0-4e64-9a19-39768b778b0c\") " pod="calico-system/calico-node-dgwc8"
Oct 29 11:47:39.333128 kubelet[2763]: I1029 11:47:39.333006 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/095845b8-9fa0-4e64-9a19-39768b778b0c-lib-modules\") pod \"calico-node-dgwc8\" (UID: \"095845b8-9fa0-4e64-9a19-39768b778b0c\") " pod="calico-system/calico-node-dgwc8"
Oct 29 11:47:39.333128 kubelet[2763]: I1029 11:47:39.333026 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/095845b8-9fa0-4e64-9a19-39768b778b0c-tigera-ca-bundle\") pod \"calico-node-dgwc8\" (UID: \"095845b8-9fa0-4e64-9a19-39768b778b0c\") " pod="calico-system/calico-node-dgwc8"
Oct 29 11:47:39.333128 kubelet[2763]: I1029 11:47:39.333043 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/095845b8-9fa0-4e64-9a19-39768b778b0c-cni-log-dir\") pod \"calico-node-dgwc8\" (UID: \"095845b8-9fa0-4e64-9a19-39768b778b0c\") " pod="calico-system/calico-node-dgwc8"
Oct 29 11:47:39.333128 kubelet[2763]: I1029 11:47:39.333056 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/095845b8-9fa0-4e64-9a19-39768b778b0c-cni-net-dir\") pod \"calico-node-dgwc8\" (UID: \"095845b8-9fa0-4e64-9a19-39768b778b0c\") " pod="calico-system/calico-node-dgwc8"
Oct 29 11:47:39.333278 kubelet[2763]: I1029 11:47:39.333070 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/095845b8-9fa0-4e64-9a19-39768b778b0c-xtables-lock\") pod \"calico-node-dgwc8\" (UID: \"095845b8-9fa0-4e64-9a19-39768b778b0c\") " pod="calico-system/calico-node-dgwc8"
Oct 29 11:47:39.333278 kubelet[2763]: I1029 11:47:39.333086 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/095845b8-9fa0-4e64-9a19-39768b778b0c-cni-bin-dir\") pod \"calico-node-dgwc8\" (UID: \"095845b8-9fa0-4e64-9a19-39768b778b0c\") " pod="calico-system/calico-node-dgwc8"
Oct 29 11:47:39.333278 kubelet[2763]: I1029 11:47:39.333102 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/095845b8-9fa0-4e64-9a19-39768b778b0c-policysync\") pod \"calico-node-dgwc8\" (UID: \"095845b8-9fa0-4e64-9a19-39768b778b0c\") " pod="calico-system/calico-node-dgwc8"
Oct 29 11:47:39.333278 kubelet[2763]: I1029 11:47:39.333115 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/095845b8-9fa0-4e64-9a19-39768b778b0c-var-run-calico\") pod \"calico-node-dgwc8\" (UID: \"095845b8-9fa0-4e64-9a19-39768b778b0c\") " pod="calico-system/calico-node-dgwc8"
Oct 29 11:47:39.333278 kubelet[2763]: I1029 11:47:39.333128 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/095845b8-9fa0-4e64-9a19-39768b778b0c-var-lib-calico\") pod \"calico-node-dgwc8\" (UID: \"095845b8-9fa0-4e64-9a19-39768b778b0c\") " pod="calico-system/calico-node-dgwc8"
Oct 29 11:47:39.355349 kubelet[2763]: E1029 11:47:39.355091 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 11:47:39.355655 containerd[1601]: time="2025-10-29T11:47:39.355601902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7df98b6bcd-6k2wr,Uid:238a293b-27dd-4c71-8684-2780dbb57e8c,Namespace:calico-system,Attempt:0,}"
Oct 29 11:47:39.417203 containerd[1601]: time="2025-10-29T11:47:39.417044596Z" level=info msg="connecting to shim 569c97e4eef900b78bfe303b01eaf75cc0f26a5686e21bff72fe6e3b3f7c9765" address="unix:///run/containerd/s/6a8d73785363646287d0d8c57c1fd294fd1201df172a7f35bef8d68a752eae82" namespace=k8s.io protocol=ttrpc version=3
Oct 29 11:47:39.435654 kubelet[2763]: E1029 11:47:39.435617 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 11:47:39.435654 kubelet[2763]: W1029 11:47:39.435637 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 11:47:39.438176 kubelet[2763]: E1029 11:47:39.437597 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 11:47:39.441139 systemd[1]: Started cri-containerd-569c97e4eef900b78bfe303b01eaf75cc0f26a5686e21bff72fe6e3b3f7c9765.scope - libcontainer container 569c97e4eef900b78bfe303b01eaf75cc0f26a5686e21bff72fe6e3b3f7c9765.
Oct 29 11:47:39.478986 containerd[1601]: time="2025-10-29T11:47:39.478532859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7df98b6bcd-6k2wr,Uid:238a293b-27dd-4c71-8684-2780dbb57e8c,Namespace:calico-system,Attempt:0,} returns sandbox id \"569c97e4eef900b78bfe303b01eaf75cc0f26a5686e21bff72fe6e3b3f7c9765\""
Oct 29 11:47:39.488196 kubelet[2763]: E1029 11:47:39.488174 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 11:47:39.490390 containerd[1601]: time="2025-10-29T11:47:39.490102369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Oct 29 11:47:39.517596 kubelet[2763]: E1029 11:47:39.517534 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpbbb" podUID="19d4b9de-b24e-493e-a2fd-91157fcb3c0a"
Error: unexpected end of JSON input" Oct 29 11:47:39.533311 kubelet[2763]: E1029 11:47:39.533301 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.533311 kubelet[2763]: W1029 11:47:39.533309 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.533363 kubelet[2763]: E1029 11:47:39.533317 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.536329 kubelet[2763]: E1029 11:47:39.536310 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.536374 kubelet[2763]: W1029 11:47:39.536333 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.536374 kubelet[2763]: E1029 11:47:39.536346 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.536418 kubelet[2763]: I1029 11:47:39.536376 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/19d4b9de-b24e-493e-a2fd-91157fcb3c0a-socket-dir\") pod \"csi-node-driver-tpbbb\" (UID: \"19d4b9de-b24e-493e-a2fd-91157fcb3c0a\") " pod="calico-system/csi-node-driver-tpbbb" Oct 29 11:47:39.536560 kubelet[2763]: E1029 11:47:39.536546 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.536560 kubelet[2763]: W1029 11:47:39.536558 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.536637 kubelet[2763]: E1029 11:47:39.536569 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.536637 kubelet[2763]: I1029 11:47:39.536588 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/19d4b9de-b24e-493e-a2fd-91157fcb3c0a-varrun\") pod \"csi-node-driver-tpbbb\" (UID: \"19d4b9de-b24e-493e-a2fd-91157fcb3c0a\") " pod="calico-system/csi-node-driver-tpbbb" Oct 29 11:47:39.536781 kubelet[2763]: E1029 11:47:39.536766 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.536781 kubelet[2763]: W1029 11:47:39.536779 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.536849 kubelet[2763]: E1029 11:47:39.536789 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.536849 kubelet[2763]: I1029 11:47:39.536807 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19d4b9de-b24e-493e-a2fd-91157fcb3c0a-kubelet-dir\") pod \"csi-node-driver-tpbbb\" (UID: \"19d4b9de-b24e-493e-a2fd-91157fcb3c0a\") " pod="calico-system/csi-node-driver-tpbbb" Oct 29 11:47:39.537056 kubelet[2763]: E1029 11:47:39.537038 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.537082 kubelet[2763]: W1029 11:47:39.537055 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.537082 kubelet[2763]: E1029 11:47:39.537069 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.537237 kubelet[2763]: E1029 11:47:39.537224 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.537237 kubelet[2763]: W1029 11:47:39.537235 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.537301 kubelet[2763]: E1029 11:47:39.537245 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.537423 kubelet[2763]: E1029 11:47:39.537410 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.537423 kubelet[2763]: W1029 11:47:39.537422 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.537468 kubelet[2763]: E1029 11:47:39.537430 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.537632 kubelet[2763]: E1029 11:47:39.537582 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.537632 kubelet[2763]: W1029 11:47:39.537626 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.537683 kubelet[2763]: E1029 11:47:39.537637 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.537842 kubelet[2763]: E1029 11:47:39.537828 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.537842 kubelet[2763]: W1029 11:47:39.537841 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.537904 kubelet[2763]: E1029 11:47:39.537850 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.538023 kubelet[2763]: E1029 11:47:39.537998 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.538023 kubelet[2763]: W1029 11:47:39.538020 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.538067 kubelet[2763]: E1029 11:47:39.538029 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.538275 kubelet[2763]: E1029 11:47:39.538261 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.538275 kubelet[2763]: W1029 11:47:39.538273 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.538322 kubelet[2763]: E1029 11:47:39.538283 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.538322 kubelet[2763]: I1029 11:47:39.538304 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/19d4b9de-b24e-493e-a2fd-91157fcb3c0a-registration-dir\") pod \"csi-node-driver-tpbbb\" (UID: \"19d4b9de-b24e-493e-a2fd-91157fcb3c0a\") " pod="calico-system/csi-node-driver-tpbbb" Oct 29 11:47:39.538519 kubelet[2763]: E1029 11:47:39.538504 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.538519 kubelet[2763]: W1029 11:47:39.538517 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.538714 kubelet[2763]: E1029 11:47:39.538526 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.538830 kubelet[2763]: I1029 11:47:39.538783 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4st4\" (UniqueName: \"kubernetes.io/projected/19d4b9de-b24e-493e-a2fd-91157fcb3c0a-kube-api-access-b4st4\") pod \"csi-node-driver-tpbbb\" (UID: \"19d4b9de-b24e-493e-a2fd-91157fcb3c0a\") " pod="calico-system/csi-node-driver-tpbbb" Oct 29 11:47:39.539128 kubelet[2763]: E1029 11:47:39.539111 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.539155 kubelet[2763]: W1029 11:47:39.539131 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.539155 kubelet[2763]: E1029 11:47:39.539142 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.539323 kubelet[2763]: E1029 11:47:39.539310 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.539386 kubelet[2763]: W1029 11:47:39.539321 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.539386 kubelet[2763]: E1029 11:47:39.539345 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.539574 kubelet[2763]: E1029 11:47:39.539560 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.539605 kubelet[2763]: W1029 11:47:39.539574 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.539605 kubelet[2763]: E1029 11:47:39.539583 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.539737 kubelet[2763]: E1029 11:47:39.539719 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.539737 kubelet[2763]: W1029 11:47:39.539730 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.539737 kubelet[2763]: E1029 11:47:39.539737 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.548186 kubelet[2763]: E1029 11:47:39.548160 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:39.549363 containerd[1601]: time="2025-10-29T11:47:39.549308144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dgwc8,Uid:095845b8-9fa0-4e64-9a19-39768b778b0c,Namespace:calico-system,Attempt:0,}" Oct 29 11:47:39.575880 containerd[1601]: time="2025-10-29T11:47:39.565261634Z" level=info msg="connecting to shim eb133f1dd64fc5f9192e6395c989e0c080febf0e68760c50ae85090cf976e6f3" address="unix:///run/containerd/s/9415bd04822be97e749cf921464080be7dcf105d8b86c19cc5c2b47aa5b88d8c" namespace=k8s.io protocol=ttrpc version=3 Oct 29 11:47:39.601105 systemd[1]: Started cri-containerd-eb133f1dd64fc5f9192e6395c989e0c080febf0e68760c50ae85090cf976e6f3.scope - libcontainer container eb133f1dd64fc5f9192e6395c989e0c080febf0e68760c50ae85090cf976e6f3. 
Oct 29 11:47:39.623838 containerd[1601]: time="2025-10-29T11:47:39.623805039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dgwc8,Uid:095845b8-9fa0-4e64-9a19-39768b778b0c,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb133f1dd64fc5f9192e6395c989e0c080febf0e68760c50ae85090cf976e6f3\"" Oct 29 11:47:39.624486 kubelet[2763]: E1029 11:47:39.624464 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:39.640183 kubelet[2763]: E1029 11:47:39.640157 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.640498 kubelet[2763]: W1029 11:47:39.640478 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.640662 kubelet[2763]: E1029 11:47:39.640647 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.641542 kubelet[2763]: E1029 11:47:39.641332 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.641542 kubelet[2763]: W1029 11:47:39.641360 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.641542 kubelet[2763]: E1029 11:47:39.641374 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.642030 kubelet[2763]: E1029 11:47:39.641830 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.642030 kubelet[2763]: W1029 11:47:39.641844 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.642030 kubelet[2763]: E1029 11:47:39.641856 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.642309 kubelet[2763]: E1029 11:47:39.642288 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.642309 kubelet[2763]: W1029 11:47:39.642305 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.642717 kubelet[2763]: E1029 11:47:39.642317 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.642717 kubelet[2763]: E1029 11:47:39.642547 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.642717 kubelet[2763]: W1029 11:47:39.642567 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.642717 kubelet[2763]: E1029 11:47:39.642592 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.643111 kubelet[2763]: E1029 11:47:39.642837 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.643111 kubelet[2763]: W1029 11:47:39.642852 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.643111 kubelet[2763]: E1029 11:47:39.642862 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.643111 kubelet[2763]: E1029 11:47:39.643125 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.643111 kubelet[2763]: W1029 11:47:39.643134 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.643111 kubelet[2763]: E1029 11:47:39.643144 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.644136 kubelet[2763]: E1029 11:47:39.643428 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.644136 kubelet[2763]: W1029 11:47:39.643439 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.644136 kubelet[2763]: E1029 11:47:39.643448 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.644136 kubelet[2763]: E1029 11:47:39.643578 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.644136 kubelet[2763]: W1029 11:47:39.643585 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.644136 kubelet[2763]: E1029 11:47:39.643592 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.644136 kubelet[2763]: E1029 11:47:39.643764 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.644136 kubelet[2763]: W1029 11:47:39.643771 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.644136 kubelet[2763]: E1029 11:47:39.643779 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.644136 kubelet[2763]: E1029 11:47:39.643926 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.645008 kubelet[2763]: W1029 11:47:39.643933 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.645008 kubelet[2763]: E1029 11:47:39.643951 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.645008 kubelet[2763]: E1029 11:47:39.644084 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.645008 kubelet[2763]: W1029 11:47:39.644091 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.645008 kubelet[2763]: E1029 11:47:39.644099 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.645008 kubelet[2763]: E1029 11:47:39.644269 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.645008 kubelet[2763]: W1029 11:47:39.644277 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.645008 kubelet[2763]: E1029 11:47:39.644285 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.645008 kubelet[2763]: E1029 11:47:39.644463 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.645008 kubelet[2763]: W1029 11:47:39.644471 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.645291 kubelet[2763]: E1029 11:47:39.644480 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.645291 kubelet[2763]: E1029 11:47:39.644619 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.645291 kubelet[2763]: W1029 11:47:39.644626 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.645291 kubelet[2763]: E1029 11:47:39.644651 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.645291 kubelet[2763]: E1029 11:47:39.644798 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.645291 kubelet[2763]: W1029 11:47:39.644806 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.645291 kubelet[2763]: E1029 11:47:39.644814 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.645291 kubelet[2763]: E1029 11:47:39.645113 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.645291 kubelet[2763]: W1029 11:47:39.645122 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.645291 kubelet[2763]: E1029 11:47:39.645131 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.646110 kubelet[2763]: E1029 11:47:39.646092 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.646110 kubelet[2763]: W1029 11:47:39.646108 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.646224 kubelet[2763]: E1029 11:47:39.646121 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.646306 kubelet[2763]: E1029 11:47:39.646291 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.646306 kubelet[2763]: W1029 11:47:39.646305 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.646371 kubelet[2763]: E1029 11:47:39.646315 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.646646 kubelet[2763]: E1029 11:47:39.646628 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.646646 kubelet[2763]: W1029 11:47:39.646642 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.647011 kubelet[2763]: E1029 11:47:39.646653 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.647011 kubelet[2763]: E1029 11:47:39.647020 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.647165 kubelet[2763]: W1029 11:47:39.647032 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.647375 kubelet[2763]: E1029 11:47:39.647205 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.647576 kubelet[2763]: E1029 11:47:39.647560 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.647576 kubelet[2763]: W1029 11:47:39.647573 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.647632 kubelet[2763]: E1029 11:47:39.647586 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.648092 kubelet[2763]: E1029 11:47:39.648068 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.648092 kubelet[2763]: W1029 11:47:39.648085 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.648236 kubelet[2763]: E1029 11:47:39.648096 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.648341 kubelet[2763]: E1029 11:47:39.648323 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.648341 kubelet[2763]: W1029 11:47:39.648336 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.648399 kubelet[2763]: E1029 11:47:39.648346 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:39.648646 kubelet[2763]: E1029 11:47:39.648573 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.648646 kubelet[2763]: W1029 11:47:39.648587 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.648646 kubelet[2763]: E1029 11:47:39.648596 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:39.662867 kubelet[2763]: E1029 11:47:39.662805 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:39.662867 kubelet[2763]: W1029 11:47:39.662823 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:39.662867 kubelet[2763]: E1029 11:47:39.662837 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:40.631737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount254594582.mount: Deactivated successfully. 
Oct 29 11:47:40.670963 kubelet[2763]: E1029 11:47:40.670893 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpbbb" podUID="19d4b9de-b24e-493e-a2fd-91157fcb3c0a" Oct 29 11:47:41.762876 containerd[1601]: time="2025-10-29T11:47:41.762824603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:41.763954 containerd[1601]: time="2025-10-29T11:47:41.763895797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Oct 29 11:47:41.764749 containerd[1601]: time="2025-10-29T11:47:41.764700143Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:41.766683 containerd[1601]: time="2025-10-29T11:47:41.766650215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:41.767191 containerd[1601]: time="2025-10-29T11:47:41.767159227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.276493868s" Oct 29 11:47:41.767229 containerd[1601]: time="2025-10-29T11:47:41.767188833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Oct 29 11:47:41.768143 containerd[1601]: time="2025-10-29T11:47:41.768101518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 29 11:47:41.809546 containerd[1601]: time="2025-10-29T11:47:41.809498966Z" level=info msg="CreateContainer within sandbox \"569c97e4eef900b78bfe303b01eaf75cc0f26a5686e21bff72fe6e3b3f7c9765\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 29 11:47:41.821642 containerd[1601]: time="2025-10-29T11:47:41.821607716Z" level=info msg="Container 4c3ea8ee143c91d3bc4b13e303feaef13270e7f514ba04bef8ce90c8aad7cdc4: CDI devices from CRI Config.CDIDevices: []" Oct 29 11:47:41.828823 containerd[1601]: time="2025-10-29T11:47:41.828766531Z" level=info msg="CreateContainer within sandbox \"569c97e4eef900b78bfe303b01eaf75cc0f26a5686e21bff72fe6e3b3f7c9765\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4c3ea8ee143c91d3bc4b13e303feaef13270e7f514ba04bef8ce90c8aad7cdc4\"" Oct 29 11:47:41.829678 containerd[1601]: time="2025-10-29T11:47:41.829638008Z" level=info msg="StartContainer for \"4c3ea8ee143c91d3bc4b13e303feaef13270e7f514ba04bef8ce90c8aad7cdc4\"" Oct 29 11:47:41.831070 containerd[1601]: time="2025-10-29T11:47:41.831044703Z" level=info msg="connecting to shim 4c3ea8ee143c91d3bc4b13e303feaef13270e7f514ba04bef8ce90c8aad7cdc4" address="unix:///run/containerd/s/6a8d73785363646287d0d8c57c1fd294fd1201df172a7f35bef8d68a752eae82" protocol=ttrpc version=3 Oct 29 11:47:41.858115 systemd[1]: Started cri-containerd-4c3ea8ee143c91d3bc4b13e303feaef13270e7f514ba04bef8ce90c8aad7cdc4.scope - libcontainer container 4c3ea8ee143c91d3bc4b13e303feaef13270e7f514ba04bef8ce90c8aad7cdc4. 
Oct 29 11:47:41.906955 containerd[1601]: time="2025-10-29T11:47:41.906915506Z" level=info msg="StartContainer for \"4c3ea8ee143c91d3bc4b13e303feaef13270e7f514ba04bef8ce90c8aad7cdc4\" returns successfully" Oct 29 11:47:42.670919 kubelet[2763]: E1029 11:47:42.670779 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpbbb" podUID="19d4b9de-b24e-493e-a2fd-91157fcb3c0a" Oct 29 11:47:42.760617 kubelet[2763]: E1029 11:47:42.760524 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:42.779026 kubelet[2763]: I1029 11:47:42.778774 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7df98b6bcd-6k2wr" podStartSLOduration=1.499818262 podStartE2EDuration="3.778758399s" podCreationTimestamp="2025-10-29 11:47:39 +0000 UTC" firstStartedPulling="2025-10-29 11:47:39.488898173 +0000 UTC m=+24.905075828" lastFinishedPulling="2025-10-29 11:47:41.76783831 +0000 UTC m=+27.184015965" observedRunningTime="2025-10-29 11:47:42.778572406 +0000 UTC m=+28.194750061" watchObservedRunningTime="2025-10-29 11:47:42.778758399 +0000 UTC m=+28.194936054" Oct 29 11:47:42.853296 containerd[1601]: time="2025-10-29T11:47:42.853242634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:42.853774 containerd[1601]: time="2025-10-29T11:47:42.853744321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Oct 29 11:47:42.854624 containerd[1601]: time="2025-10-29T11:47:42.854579826Z" level=info msg="ImageCreate event 
name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:42.855262 kubelet[2763]: E1029 11:47:42.855242 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.855323 kubelet[2763]: W1029 11:47:42.855262 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.855323 kubelet[2763]: E1029 11:47:42.855280 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:42.855463 kubelet[2763]: E1029 11:47:42.855452 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.855503 kubelet[2763]: W1029 11:47:42.855463 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.855540 kubelet[2763]: E1029 11:47:42.855505 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:42.855684 kubelet[2763]: E1029 11:47:42.855673 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.855709 kubelet[2763]: W1029 11:47:42.855684 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.855732 kubelet[2763]: E1029 11:47:42.855704 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:42.856490 kubelet[2763]: E1029 11:47:42.856474 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.856544 kubelet[2763]: W1029 11:47:42.856491 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.856544 kubelet[2763]: E1029 11:47:42.856511 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:42.856700 containerd[1601]: time="2025-10-29T11:47:42.856672350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:42.856791 kubelet[2763]: E1029 11:47:42.856777 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.856821 kubelet[2763]: W1029 11:47:42.856791 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.856821 kubelet[2763]: E1029 11:47:42.856803 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:42.857497 kubelet[2763]: E1029 11:47:42.856997 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.857497 kubelet[2763]: W1029 11:47:42.857011 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.857497 kubelet[2763]: E1029 11:47:42.857021 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:42.857497 kubelet[2763]: E1029 11:47:42.857187 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.857497 kubelet[2763]: W1029 11:47:42.857195 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.857497 kubelet[2763]: E1029 11:47:42.857203 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:42.857497 kubelet[2763]: E1029 11:47:42.857406 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.857497 kubelet[2763]: W1029 11:47:42.857413 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.857497 kubelet[2763]: E1029 11:47:42.857422 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:42.857715 containerd[1601]: time="2025-10-29T11:47:42.857113667Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.088983424s" Oct 29 11:47:42.857715 containerd[1601]: time="2025-10-29T11:47:42.857141232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Oct 29 11:47:42.857760 kubelet[2763]: E1029 11:47:42.857556 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.857760 kubelet[2763]: W1029 11:47:42.857563 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.857760 kubelet[2763]: E1029 11:47:42.857571 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:42.857760 kubelet[2763]: E1029 11:47:42.857674 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.857760 kubelet[2763]: W1029 11:47:42.857680 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.857760 kubelet[2763]: E1029 11:47:42.857690 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:42.858230 kubelet[2763]: E1029 11:47:42.858214 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.858230 kubelet[2763]: W1029 11:47:42.858227 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.858310 kubelet[2763]: E1029 11:47:42.858237 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:42.858429 kubelet[2763]: E1029 11:47:42.858396 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.858460 kubelet[2763]: W1029 11:47:42.858430 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.858460 kubelet[2763]: E1029 11:47:42.858442 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:42.859102 kubelet[2763]: E1029 11:47:42.859086 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.859102 kubelet[2763]: W1029 11:47:42.859100 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.859201 kubelet[2763]: E1029 11:47:42.859111 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:42.859281 kubelet[2763]: E1029 11:47:42.859271 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.859316 kubelet[2763]: W1029 11:47:42.859281 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.859316 kubelet[2763]: E1029 11:47:42.859290 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:42.859808 kubelet[2763]: E1029 11:47:42.859795 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.859808 kubelet[2763]: W1029 11:47:42.859807 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.859886 kubelet[2763]: E1029 11:47:42.859816 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:42.869491 kubelet[2763]: E1029 11:47:42.869466 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.881140 kubelet[2763]: W1029 11:47:42.869604 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.881140 kubelet[2763]: E1029 11:47:42.869623 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:42.881140 kubelet[2763]: E1029 11:47:42.869855 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.881140 kubelet[2763]: W1029 11:47:42.869865 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.881140 kubelet[2763]: E1029 11:47:42.869875 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:42.881140 kubelet[2763]: E1029 11:47:42.870089 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.881140 kubelet[2763]: W1029 11:47:42.870103 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.881140 kubelet[2763]: E1029 11:47:42.870115 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:42.881140 kubelet[2763]: E1029 11:47:42.870297 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.881140 kubelet[2763]: W1029 11:47:42.870304 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.883302 kubelet[2763]: E1029 11:47:42.870312 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:42.883302 kubelet[2763]: E1029 11:47:42.870445 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.883302 kubelet[2763]: W1029 11:47:42.870452 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.883302 kubelet[2763]: E1029 11:47:42.870459 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:42.883302 kubelet[2763]: E1029 11:47:42.870617 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.883302 kubelet[2763]: W1029 11:47:42.870624 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.883302 kubelet[2763]: E1029 11:47:42.870632 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:42.883302 kubelet[2763]: E1029 11:47:42.870866 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.883302 kubelet[2763]: W1029 11:47:42.870878 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.883302 kubelet[2763]: E1029 11:47:42.870890 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:42.883495 kubelet[2763]: E1029 11:47:42.871101 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.883495 kubelet[2763]: W1029 11:47:42.871110 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.883495 kubelet[2763]: E1029 11:47:42.871121 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:42.883495 kubelet[2763]: E1029 11:47:42.871277 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.883495 kubelet[2763]: W1029 11:47:42.871285 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.883495 kubelet[2763]: E1029 11:47:42.871293 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:42.883495 kubelet[2763]: E1029 11:47:42.871473 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.883495 kubelet[2763]: W1029 11:47:42.871483 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.883495 kubelet[2763]: E1029 11:47:42.871493 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:42.883495 kubelet[2763]: E1029 11:47:42.871656 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.883677 kubelet[2763]: W1029 11:47:42.871663 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.883677 kubelet[2763]: E1029 11:47:42.871671 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:42.883677 kubelet[2763]: E1029 11:47:42.871826 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.883677 kubelet[2763]: W1029 11:47:42.871834 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.883677 kubelet[2763]: E1029 11:47:42.871842 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:42.883677 kubelet[2763]: E1029 11:47:42.872259 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.883677 kubelet[2763]: W1029 11:47:42.872277 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.883677 kubelet[2763]: E1029 11:47:42.872287 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:42.883677 kubelet[2763]: E1029 11:47:42.872451 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.883677 kubelet[2763]: W1029 11:47:42.872459 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.888884 kubelet[2763]: E1029 11:47:42.872468 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:42.888884 kubelet[2763]: E1029 11:47:42.872679 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.888884 kubelet[2763]: W1029 11:47:42.872688 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.888884 kubelet[2763]: E1029 11:47:42.872698 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:42.888884 kubelet[2763]: E1029 11:47:42.872969 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.888884 kubelet[2763]: W1029 11:47:42.872984 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.888884 kubelet[2763]: E1029 11:47:42.872996 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:42.888884 kubelet[2763]: E1029 11:47:42.873172 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.888884 kubelet[2763]: W1029 11:47:42.873180 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.888884 kubelet[2763]: E1029 11:47:42.873190 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 11:47:42.889230 kubelet[2763]: E1029 11:47:42.883344 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 11:47:42.889230 kubelet[2763]: W1029 11:47:42.883359 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 11:47:42.889230 kubelet[2763]: E1029 11:47:42.883373 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 11:47:42.895551 containerd[1601]: time="2025-10-29T11:47:42.895506185Z" level=info msg="CreateContainer within sandbox \"eb133f1dd64fc5f9192e6395c989e0c080febf0e68760c50ae85090cf976e6f3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 29 11:47:43.005534 containerd[1601]: time="2025-10-29T11:47:43.005062933Z" level=info msg="Container 6d7a81ca78a77cb52953b2b89e908f5f9e76e5b84cc898a668857936de36d2d5: CDI devices from CRI Config.CDIDevices: []" Oct 29 11:47:43.031247 containerd[1601]: time="2025-10-29T11:47:43.031193788Z" level=info msg="CreateContainer within sandbox \"eb133f1dd64fc5f9192e6395c989e0c080febf0e68760c50ae85090cf976e6f3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6d7a81ca78a77cb52953b2b89e908f5f9e76e5b84cc898a668857936de36d2d5\"" Oct 29 11:47:43.032064 containerd[1601]: time="2025-10-29T11:47:43.031940553Z" level=info msg="StartContainer for \"6d7a81ca78a77cb52953b2b89e908f5f9e76e5b84cc898a668857936de36d2d5\"" Oct 29 11:47:43.033653 containerd[1601]: time="2025-10-29T11:47:43.033617834Z" level=info msg="connecting to shim 6d7a81ca78a77cb52953b2b89e908f5f9e76e5b84cc898a668857936de36d2d5" address="unix:///run/containerd/s/9415bd04822be97e749cf921464080be7dcf105d8b86c19cc5c2b47aa5b88d8c" protocol=ttrpc version=3 Oct 29 11:47:43.061109 systemd[1]: Started cri-containerd-6d7a81ca78a77cb52953b2b89e908f5f9e76e5b84cc898a668857936de36d2d5.scope - libcontainer container 6d7a81ca78a77cb52953b2b89e908f5f9e76e5b84cc898a668857936de36d2d5. Oct 29 11:47:43.096440 containerd[1601]: time="2025-10-29T11:47:43.096406186Z" level=info msg="StartContainer for \"6d7a81ca78a77cb52953b2b89e908f5f9e76e5b84cc898a668857936de36d2d5\" returns successfully" Oct 29 11:47:43.105747 systemd[1]: cri-containerd-6d7a81ca78a77cb52953b2b89e908f5f9e76e5b84cc898a668857936de36d2d5.scope: Deactivated successfully. 
Oct 29 11:47:43.110902 containerd[1601]: time="2025-10-29T11:47:43.110852924Z" level=info msg="received exit event container_id:\"6d7a81ca78a77cb52953b2b89e908f5f9e76e5b84cc898a668857936de36d2d5\" id:\"6d7a81ca78a77cb52953b2b89e908f5f9e76e5b84cc898a668857936de36d2d5\" pid:3498 exited_at:{seconds:1761738463 nanos:107417069}" Oct 29 11:47:43.111121 containerd[1601]: time="2025-10-29T11:47:43.110937699Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d7a81ca78a77cb52953b2b89e908f5f9e76e5b84cc898a668857936de36d2d5\" id:\"6d7a81ca78a77cb52953b2b89e908f5f9e76e5b84cc898a668857936de36d2d5\" pid:3498 exited_at:{seconds:1761738463 nanos:107417069}" Oct 29 11:47:43.148883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d7a81ca78a77cb52953b2b89e908f5f9e76e5b84cc898a668857936de36d2d5-rootfs.mount: Deactivated successfully. Oct 29 11:47:43.774108 kubelet[2763]: E1029 11:47:43.774070 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:43.775297 containerd[1601]: time="2025-10-29T11:47:43.775266600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 29 11:47:43.778466 kubelet[2763]: I1029 11:47:43.778424 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 11:47:43.778955 kubelet[2763]: E1029 11:47:43.778902 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:44.670842 kubelet[2763]: E1029 11:47:44.670794 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpbbb" 
podUID="19d4b9de-b24e-493e-a2fd-91157fcb3c0a" Oct 29 11:47:46.529612 containerd[1601]: time="2025-10-29T11:47:46.529567750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:46.531346 containerd[1601]: time="2025-10-29T11:47:46.531225079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Oct 29 11:47:46.532353 containerd[1601]: time="2025-10-29T11:47:46.532104851Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:46.534524 containerd[1601]: time="2025-10-29T11:47:46.534486009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:46.535093 containerd[1601]: time="2025-10-29T11:47:46.535064216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.759606904s" Oct 29 11:47:46.535195 containerd[1601]: time="2025-10-29T11:47:46.535180233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Oct 29 11:47:46.540279 containerd[1601]: time="2025-10-29T11:47:46.540236873Z" level=info msg="CreateContainer within sandbox \"eb133f1dd64fc5f9192e6395c989e0c080febf0e68760c50ae85090cf976e6f3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 29 11:47:46.549332 containerd[1601]: 
time="2025-10-29T11:47:46.549283992Z" level=info msg="Container 15f1cfac8cab00b5cf610a104248c039ed746c3851fad694cbe886a5dc007449: CDI devices from CRI Config.CDIDevices: []" Oct 29 11:47:46.557448 containerd[1601]: time="2025-10-29T11:47:46.557399691Z" level=info msg="CreateContainer within sandbox \"eb133f1dd64fc5f9192e6395c989e0c080febf0e68760c50ae85090cf976e6f3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"15f1cfac8cab00b5cf610a104248c039ed746c3851fad694cbe886a5dc007449\"" Oct 29 11:47:46.558190 containerd[1601]: time="2025-10-29T11:47:46.558163446Z" level=info msg="StartContainer for \"15f1cfac8cab00b5cf610a104248c039ed746c3851fad694cbe886a5dc007449\"" Oct 29 11:47:46.559628 containerd[1601]: time="2025-10-29T11:47:46.559589981Z" level=info msg="connecting to shim 15f1cfac8cab00b5cf610a104248c039ed746c3851fad694cbe886a5dc007449" address="unix:///run/containerd/s/9415bd04822be97e749cf921464080be7dcf105d8b86c19cc5c2b47aa5b88d8c" protocol=ttrpc version=3 Oct 29 11:47:46.592172 systemd[1]: Started cri-containerd-15f1cfac8cab00b5cf610a104248c039ed746c3851fad694cbe886a5dc007449.scope - libcontainer container 15f1cfac8cab00b5cf610a104248c039ed746c3851fad694cbe886a5dc007449. 
Oct 29 11:47:46.626867 containerd[1601]: time="2025-10-29T11:47:46.626827561Z" level=info msg="StartContainer for \"15f1cfac8cab00b5cf610a104248c039ed746c3851fad694cbe886a5dc007449\" returns successfully" Oct 29 11:47:46.671666 kubelet[2763]: E1029 11:47:46.671622 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpbbb" podUID="19d4b9de-b24e-493e-a2fd-91157fcb3c0a" Oct 29 11:47:46.781581 kubelet[2763]: E1029 11:47:46.781476 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:47.464813 containerd[1601]: time="2025-10-29T11:47:47.464665581Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 11:47:47.466800 systemd[1]: cri-containerd-15f1cfac8cab00b5cf610a104248c039ed746c3851fad694cbe886a5dc007449.scope: Deactivated successfully. Oct 29 11:47:47.467124 systemd[1]: cri-containerd-15f1cfac8cab00b5cf610a104248c039ed746c3851fad694cbe886a5dc007449.scope: Consumed 446ms CPU time, 193.4M memory peak, 2.2M read from disk, 165.9M written to disk. 
Oct 29 11:47:47.468934 containerd[1601]: time="2025-10-29T11:47:47.468902796Z" level=info msg="received exit event container_id:\"15f1cfac8cab00b5cf610a104248c039ed746c3851fad694cbe886a5dc007449\" id:\"15f1cfac8cab00b5cf610a104248c039ed746c3851fad694cbe886a5dc007449\" pid:3559 exited_at:{seconds:1761738467 nanos:468725971}" Oct 29 11:47:47.469229 containerd[1601]: time="2025-10-29T11:47:47.469201160Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15f1cfac8cab00b5cf610a104248c039ed746c3851fad694cbe886a5dc007449\" id:\"15f1cfac8cab00b5cf610a104248c039ed746c3851fad694cbe886a5dc007449\" pid:3559 exited_at:{seconds:1761738467 nanos:468725971}" Oct 29 11:47:47.493459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15f1cfac8cab00b5cf610a104248c039ed746c3851fad694cbe886a5dc007449-rootfs.mount: Deactivated successfully. Oct 29 11:47:47.564640 kubelet[2763]: I1029 11:47:47.564411 2763 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 29 11:47:47.571967 kubelet[2763]: I1029 11:47:47.571716 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 11:47:47.572427 kubelet[2763]: E1029 11:47:47.572390 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:47.621607 systemd[1]: Created slice kubepods-burstable-pod8d12e45e_28f7_4f72_aa9c_a877f939496f.slice - libcontainer container kubepods-burstable-pod8d12e45e_28f7_4f72_aa9c_a877f939496f.slice. Oct 29 11:47:47.628028 systemd[1]: Created slice kubepods-besteffort-podd0c9cc6e_25b8_45f8_aafd_601ef8c53fc7.slice - libcontainer container kubepods-besteffort-podd0c9cc6e_25b8_45f8_aafd_601ef8c53fc7.slice. 
Oct 29 11:47:47.657408 systemd[1]: Created slice kubepods-besteffort-pod084f74b8_5e73_4bda_886a_d717a73225ab.slice - libcontainer container kubepods-besteffort-pod084f74b8_5e73_4bda_886a_d717a73225ab.slice. Oct 29 11:47:47.669262 systemd[1]: Created slice kubepods-besteffort-podb3b7028a_b276_42fe_9fe7_b12ae54b50d3.slice - libcontainer container kubepods-besteffort-podb3b7028a_b276_42fe_9fe7_b12ae54b50d3.slice. Oct 29 11:47:47.690754 systemd[1]: Created slice kubepods-burstable-pod0e165bbe_8839_4003_b237_6d7afba67d0d.slice - libcontainer container kubepods-burstable-pod0e165bbe_8839_4003_b237_6d7afba67d0d.slice. Oct 29 11:47:47.704584 kubelet[2763]: I1029 11:47:47.704522 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7-calico-apiserver-certs\") pod \"calico-apiserver-786f955c4-cnjbr\" (UID: \"d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7\") " pod="calico-apiserver/calico-apiserver-786f955c4-cnjbr" Oct 29 11:47:47.705832 kubelet[2763]: I1029 11:47:47.704873 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq29x\" (UniqueName: \"kubernetes.io/projected/8d12e45e-28f7-4f72-aa9c-a877f939496f-kube-api-access-zq29x\") pod \"coredns-674b8bbfcf-lznm8\" (UID: \"8d12e45e-28f7-4f72-aa9c-a877f939496f\") " pod="kube-system/coredns-674b8bbfcf-lznm8" Oct 29 11:47:47.705832 kubelet[2763]: I1029 11:47:47.704905 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9gpk\" (UniqueName: \"kubernetes.io/projected/d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7-kube-api-access-j9gpk\") pod \"calico-apiserver-786f955c4-cnjbr\" (UID: \"d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7\") " pod="calico-apiserver/calico-apiserver-786f955c4-cnjbr" Oct 29 11:47:47.705832 kubelet[2763]: I1029 11:47:47.704931 2763 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/084f74b8-5e73-4bda-886a-d717a73225ab-whisker-ca-bundle\") pod \"whisker-7d8648c79-k6kzv\" (UID: \"084f74b8-5e73-4bda-886a-d717a73225ab\") " pod="calico-system/whisker-7d8648c79-k6kzv" Oct 29 11:47:47.705832 kubelet[2763]: I1029 11:47:47.704959 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzhl6\" (UniqueName: \"kubernetes.io/projected/084f74b8-5e73-4bda-886a-d717a73225ab-kube-api-access-zzhl6\") pod \"whisker-7d8648c79-k6kzv\" (UID: \"084f74b8-5e73-4bda-886a-d717a73225ab\") " pod="calico-system/whisker-7d8648c79-k6kzv" Oct 29 11:47:47.705832 kubelet[2763]: I1029 11:47:47.704981 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b3b7028a-b276-42fe-9fe7-b12ae54b50d3-calico-apiserver-certs\") pod \"calico-apiserver-786f955c4-g4w2m\" (UID: \"b3b7028a-b276-42fe-9fe7-b12ae54b50d3\") " pod="calico-apiserver/calico-apiserver-786f955c4-g4w2m" Oct 29 11:47:47.706096 kubelet[2763]: I1029 11:47:47.704997 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54zkq\" (UniqueName: \"kubernetes.io/projected/b3b7028a-b276-42fe-9fe7-b12ae54b50d3-kube-api-access-54zkq\") pod \"calico-apiserver-786f955c4-g4w2m\" (UID: \"b3b7028a-b276-42fe-9fe7-b12ae54b50d3\") " pod="calico-apiserver/calico-apiserver-786f955c4-g4w2m" Oct 29 11:47:47.706096 kubelet[2763]: I1029 11:47:47.705045 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/084f74b8-5e73-4bda-886a-d717a73225ab-whisker-backend-key-pair\") pod \"whisker-7d8648c79-k6kzv\" (UID: \"084f74b8-5e73-4bda-886a-d717a73225ab\") " 
pod="calico-system/whisker-7d8648c79-k6kzv" Oct 29 11:47:47.706096 kubelet[2763]: I1029 11:47:47.705078 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e165bbe-8839-4003-b237-6d7afba67d0d-config-volume\") pod \"coredns-674b8bbfcf-ck2dh\" (UID: \"0e165bbe-8839-4003-b237-6d7afba67d0d\") " pod="kube-system/coredns-674b8bbfcf-ck2dh" Oct 29 11:47:47.706096 kubelet[2763]: I1029 11:47:47.705099 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d12e45e-28f7-4f72-aa9c-a877f939496f-config-volume\") pod \"coredns-674b8bbfcf-lznm8\" (UID: \"8d12e45e-28f7-4f72-aa9c-a877f939496f\") " pod="kube-system/coredns-674b8bbfcf-lznm8" Oct 29 11:47:47.706096 kubelet[2763]: I1029 11:47:47.705140 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmc7v\" (UniqueName: \"kubernetes.io/projected/0e165bbe-8839-4003-b237-6d7afba67d0d-kube-api-access-rmc7v\") pod \"coredns-674b8bbfcf-ck2dh\" (UID: \"0e165bbe-8839-4003-b237-6d7afba67d0d\") " pod="kube-system/coredns-674b8bbfcf-ck2dh" Oct 29 11:47:47.723571 systemd[1]: Created slice kubepods-besteffort-pod571233d6_8903_4d8f_8101_eb09343bdca4.slice - libcontainer container kubepods-besteffort-pod571233d6_8903_4d8f_8101_eb09343bdca4.slice. Oct 29 11:47:47.729864 systemd[1]: Created slice kubepods-besteffort-pod7ef53ca0_e6af_4f13_8298_54b41e79363b.slice - libcontainer container kubepods-besteffort-pod7ef53ca0_e6af_4f13_8298_54b41e79363b.slice. 
Oct 29 11:47:47.788127 kubelet[2763]: E1029 11:47:47.788077 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:47.789794 containerd[1601]: time="2025-10-29T11:47:47.789765025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 29 11:47:47.790096 kubelet[2763]: E1029 11:47:47.789816 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:47.806423 kubelet[2763]: I1029 11:47:47.806356 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcnw5\" (UniqueName: \"kubernetes.io/projected/571233d6-8903-4d8f-8101-eb09343bdca4-kube-api-access-jcnw5\") pod \"calico-kube-controllers-7f975cc8d8-hnv7g\" (UID: \"571233d6-8903-4d8f-8101-eb09343bdca4\") " pod="calico-system/calico-kube-controllers-7f975cc8d8-hnv7g" Oct 29 11:47:47.806423 kubelet[2763]: I1029 11:47:47.806392 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7ef53ca0-e6af-4f13-8298-54b41e79363b-goldmane-key-pair\") pod \"goldmane-666569f655-ml2tk\" (UID: \"7ef53ca0-e6af-4f13-8298-54b41e79363b\") " pod="calico-system/goldmane-666569f655-ml2tk" Oct 29 11:47:47.806637 kubelet[2763]: I1029 11:47:47.806579 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wjwp\" (UniqueName: \"kubernetes.io/projected/7ef53ca0-e6af-4f13-8298-54b41e79363b-kube-api-access-7wjwp\") pod \"goldmane-666569f655-ml2tk\" (UID: \"7ef53ca0-e6af-4f13-8298-54b41e79363b\") " pod="calico-system/goldmane-666569f655-ml2tk" Oct 29 11:47:47.806750 kubelet[2763]: I1029 11:47:47.806736 2763 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/571233d6-8903-4d8f-8101-eb09343bdca4-tigera-ca-bundle\") pod \"calico-kube-controllers-7f975cc8d8-hnv7g\" (UID: \"571233d6-8903-4d8f-8101-eb09343bdca4\") " pod="calico-system/calico-kube-controllers-7f975cc8d8-hnv7g" Oct 29 11:47:47.807197 kubelet[2763]: I1029 11:47:47.807121 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ef53ca0-e6af-4f13-8298-54b41e79363b-config\") pod \"goldmane-666569f655-ml2tk\" (UID: \"7ef53ca0-e6af-4f13-8298-54b41e79363b\") " pod="calico-system/goldmane-666569f655-ml2tk" Oct 29 11:47:47.807258 kubelet[2763]: I1029 11:47:47.807194 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ef53ca0-e6af-4f13-8298-54b41e79363b-goldmane-ca-bundle\") pod \"goldmane-666569f655-ml2tk\" (UID: \"7ef53ca0-e6af-4f13-8298-54b41e79363b\") " pod="calico-system/goldmane-666569f655-ml2tk" Oct 29 11:47:47.961840 containerd[1601]: time="2025-10-29T11:47:47.961784802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d8648c79-k6kzv,Uid:084f74b8-5e73-4bda-886a-d717a73225ab,Namespace:calico-system,Attempt:0,}" Oct 29 11:47:47.973410 containerd[1601]: time="2025-10-29T11:47:47.973377805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786f955c4-g4w2m,Uid:b3b7028a-b276-42fe-9fe7-b12ae54b50d3,Namespace:calico-apiserver,Attempt:0,}" Oct 29 11:47:47.994029 kubelet[2763]: E1029 11:47:47.993628 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:47.994824 containerd[1601]: time="2025-10-29T11:47:47.994352051Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-ck2dh,Uid:0e165bbe-8839-4003-b237-6d7afba67d0d,Namespace:kube-system,Attempt:0,}" Oct 29 11:47:48.027697 containerd[1601]: time="2025-10-29T11:47:48.027645835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f975cc8d8-hnv7g,Uid:571233d6-8903-4d8f-8101-eb09343bdca4,Namespace:calico-system,Attempt:0,}" Oct 29 11:47:48.035458 containerd[1601]: time="2025-10-29T11:47:48.035402365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ml2tk,Uid:7ef53ca0-e6af-4f13-8298-54b41e79363b,Namespace:calico-system,Attempt:0,}" Oct 29 11:47:48.221449 containerd[1601]: time="2025-10-29T11:47:48.221209868Z" level=error msg="Failed to destroy network for sandbox \"db939827ae54a274ffd7e9f2d63a581199e71750c4ae4248642f1d7cb9d13b15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.222812 containerd[1601]: time="2025-10-29T11:47:48.222761726Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ml2tk,Uid:7ef53ca0-e6af-4f13-8298-54b41e79363b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"db939827ae54a274ffd7e9f2d63a581199e71750c4ae4248642f1d7cb9d13b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.225896 kubelet[2763]: E1029 11:47:48.225586 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:48.226226 containerd[1601]: time="2025-10-29T11:47:48.226194008Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-lznm8,Uid:8d12e45e-28f7-4f72-aa9c-a877f939496f,Namespace:kube-system,Attempt:0,}" Oct 29 11:47:48.227477 kubelet[2763]: E1029 11:47:48.227339 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db939827ae54a274ffd7e9f2d63a581199e71750c4ae4248642f1d7cb9d13b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.227984 kubelet[2763]: E1029 11:47:48.227954 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db939827ae54a274ffd7e9f2d63a581199e71750c4ae4248642f1d7cb9d13b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ml2tk" Oct 29 11:47:48.228232 containerd[1601]: time="2025-10-29T11:47:48.228193009Z" level=error msg="Failed to destroy network for sandbox \"a8b49ae2f13a566a3f33146289d4f84cc2f0b53965e7defaa08e994c081fea12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.230299 containerd[1601]: time="2025-10-29T11:47:48.230250538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d8648c79-k6kzv,Uid:084f74b8-5e73-4bda-886a-d717a73225ab,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8b49ae2f13a566a3f33146289d4f84cc2f0b53965e7defaa08e994c081fea12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Oct 29 11:47:48.231676 kubelet[2763]: E1029 11:47:48.230446 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8b49ae2f13a566a3f33146289d4f84cc2f0b53965e7defaa08e994c081fea12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.231676 kubelet[2763]: E1029 11:47:48.230499 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8b49ae2f13a566a3f33146289d4f84cc2f0b53965e7defaa08e994c081fea12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d8648c79-k6kzv" Oct 29 11:47:48.231676 kubelet[2763]: E1029 11:47:48.230525 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8b49ae2f13a566a3f33146289d4f84cc2f0b53965e7defaa08e994c081fea12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d8648c79-k6kzv" Oct 29 11:47:48.231816 kubelet[2763]: E1029 11:47:48.230567 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d8648c79-k6kzv_calico-system(084f74b8-5e73-4bda-886a-d717a73225ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d8648c79-k6kzv_calico-system(084f74b8-5e73-4bda-886a-d717a73225ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8b49ae2f13a566a3f33146289d4f84cc2f0b53965e7defaa08e994c081fea12\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d8648c79-k6kzv" podUID="084f74b8-5e73-4bda-886a-d717a73225ab" Oct 29 11:47:48.232181 kubelet[2763]: E1029 11:47:48.232124 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db939827ae54a274ffd7e9f2d63a581199e71750c4ae4248642f1d7cb9d13b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ml2tk" Oct 29 11:47:48.232231 kubelet[2763]: E1029 11:47:48.232213 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-ml2tk_calico-system(7ef53ca0-e6af-4f13-8298-54b41e79363b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-ml2tk_calico-system(7ef53ca0-e6af-4f13-8298-54b41e79363b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db939827ae54a274ffd7e9f2d63a581199e71750c4ae4248642f1d7cb9d13b15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-ml2tk" podUID="7ef53ca0-e6af-4f13-8298-54b41e79363b" Oct 29 11:47:48.232864 containerd[1601]: time="2025-10-29T11:47:48.232831580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786f955c4-cnjbr,Uid:d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7,Namespace:calico-apiserver,Attempt:0,}" Oct 29 11:47:48.241617 containerd[1601]: time="2025-10-29T11:47:48.241294409Z" level=error msg="Failed to destroy network for sandbox 
\"326bcf195086f601586d4b172e48c7dad7c9a8d216a1af5ee921d0605f7dabcb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.243444 containerd[1601]: time="2025-10-29T11:47:48.242878752Z" level=error msg="Failed to destroy network for sandbox \"f68fd856c6d0606d4338943890a8fa720f400e1f2551923072c95a3b6c3401b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.245546 containerd[1601]: time="2025-10-29T11:47:48.244985648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ck2dh,Uid:0e165bbe-8839-4003-b237-6d7afba67d0d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"326bcf195086f601586d4b172e48c7dad7c9a8d216a1af5ee921d0605f7dabcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.245645 kubelet[2763]: E1029 11:47:48.245226 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"326bcf195086f601586d4b172e48c7dad7c9a8d216a1af5ee921d0605f7dabcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.245645 kubelet[2763]: E1029 11:47:48.245288 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"326bcf195086f601586d4b172e48c7dad7c9a8d216a1af5ee921d0605f7dabcb\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ck2dh" Oct 29 11:47:48.245645 kubelet[2763]: E1029 11:47:48.245308 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"326bcf195086f601586d4b172e48c7dad7c9a8d216a1af5ee921d0605f7dabcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ck2dh" Oct 29 11:47:48.246196 kubelet[2763]: E1029 11:47:48.246016 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ck2dh_kube-system(0e165bbe-8839-4003-b237-6d7afba67d0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ck2dh_kube-system(0e165bbe-8839-4003-b237-6d7afba67d0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"326bcf195086f601586d4b172e48c7dad7c9a8d216a1af5ee921d0605f7dabcb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ck2dh" podUID="0e165bbe-8839-4003-b237-6d7afba67d0d" Oct 29 11:47:48.248921 containerd[1601]: time="2025-10-29T11:47:48.248816866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f975cc8d8-hnv7g,Uid:571233d6-8903-4d8f-8101-eb09343bdca4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f68fd856c6d0606d4338943890a8fa720f400e1f2551923072c95a3b6c3401b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.251025 kubelet[2763]: E1029 11:47:48.249156 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f68fd856c6d0606d4338943890a8fa720f400e1f2551923072c95a3b6c3401b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.251025 kubelet[2763]: E1029 11:47:48.249202 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f68fd856c6d0606d4338943890a8fa720f400e1f2551923072c95a3b6c3401b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f975cc8d8-hnv7g" Oct 29 11:47:48.251025 kubelet[2763]: E1029 11:47:48.249222 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f68fd856c6d0606d4338943890a8fa720f400e1f2551923072c95a3b6c3401b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f975cc8d8-hnv7g" Oct 29 11:47:48.251149 kubelet[2763]: E1029 11:47:48.249272 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f975cc8d8-hnv7g_calico-system(571233d6-8903-4d8f-8101-eb09343bdca4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f975cc8d8-hnv7g_calico-system(571233d6-8903-4d8f-8101-eb09343bdca4)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"f68fd856c6d0606d4338943890a8fa720f400e1f2551923072c95a3b6c3401b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f975cc8d8-hnv7g" podUID="571233d6-8903-4d8f-8101-eb09343bdca4" Oct 29 11:47:48.255753 containerd[1601]: time="2025-10-29T11:47:48.255696633Z" level=error msg="Failed to destroy network for sandbox \"9e10c124476ce73527a10776f38672a389c5384a6f54be1552b9a694ffe1bef7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.257441 containerd[1601]: time="2025-10-29T11:47:48.257314100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786f955c4-g4w2m,Uid:b3b7028a-b276-42fe-9fe7-b12ae54b50d3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e10c124476ce73527a10776f38672a389c5384a6f54be1552b9a694ffe1bef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.258140 kubelet[2763]: E1029 11:47:48.258098 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e10c124476ce73527a10776f38672a389c5384a6f54be1552b9a694ffe1bef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.258205 kubelet[2763]: E1029 11:47:48.258164 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"9e10c124476ce73527a10776f38672a389c5384a6f54be1552b9a694ffe1bef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-786f955c4-g4w2m" Oct 29 11:47:48.258205 kubelet[2763]: E1029 11:47:48.258183 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e10c124476ce73527a10776f38672a389c5384a6f54be1552b9a694ffe1bef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-786f955c4-g4w2m" Oct 29 11:47:48.258270 kubelet[2763]: E1029 11:47:48.258228 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-786f955c4-g4w2m_calico-apiserver(b3b7028a-b276-42fe-9fe7-b12ae54b50d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-786f955c4-g4w2m_calico-apiserver(b3b7028a-b276-42fe-9fe7-b12ae54b50d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e10c124476ce73527a10776f38672a389c5384a6f54be1552b9a694ffe1bef7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-786f955c4-g4w2m" podUID="b3b7028a-b276-42fe-9fe7-b12ae54b50d3" Oct 29 11:47:48.292870 containerd[1601]: time="2025-10-29T11:47:48.292823528Z" level=error msg="Failed to destroy network for sandbox \"31a6abbc65b278c730f9fd0a81de63a12f390b2cc65b2c5c3687ab691ed887d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Oct 29 11:47:48.294464 containerd[1601]: time="2025-10-29T11:47:48.294422953Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786f955c4-cnjbr,Uid:d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a6abbc65b278c730f9fd0a81de63a12f390b2cc65b2c5c3687ab691ed887d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.295126 kubelet[2763]: E1029 11:47:48.295089 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a6abbc65b278c730f9fd0a81de63a12f390b2cc65b2c5c3687ab691ed887d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.295187 kubelet[2763]: E1029 11:47:48.295155 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a6abbc65b278c730f9fd0a81de63a12f390b2cc65b2c5c3687ab691ed887d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-786f955c4-cnjbr" Oct 29 11:47:48.295187 kubelet[2763]: E1029 11:47:48.295177 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a6abbc65b278c730f9fd0a81de63a12f390b2cc65b2c5c3687ab691ed887d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-786f955c4-cnjbr" Oct 29 11:47:48.295366 kubelet[2763]: E1029 11:47:48.295228 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-786f955c4-cnjbr_calico-apiserver(d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-786f955c4-cnjbr_calico-apiserver(d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31a6abbc65b278c730f9fd0a81de63a12f390b2cc65b2c5c3687ab691ed887d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-786f955c4-cnjbr" podUID="d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7" Oct 29 11:47:48.298495 containerd[1601]: time="2025-10-29T11:47:48.298110391Z" level=error msg="Failed to destroy network for sandbox \"c20d6d40a96b8275da4b646719c087ea4dd869c4464d400f89e3790a3d44af2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.299828 containerd[1601]: time="2025-10-29T11:47:48.299768544Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lznm8,Uid:8d12e45e-28f7-4f72-aa9c-a877f939496f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c20d6d40a96b8275da4b646719c087ea4dd869c4464d400f89e3790a3d44af2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.300143 kubelet[2763]: E1029 11:47:48.300106 2763 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c20d6d40a96b8275da4b646719c087ea4dd869c4464d400f89e3790a3d44af2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.300214 kubelet[2763]: E1029 11:47:48.300165 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c20d6d40a96b8275da4b646719c087ea4dd869c4464d400f89e3790a3d44af2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lznm8" Oct 29 11:47:48.300214 kubelet[2763]: E1029 11:47:48.300184 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c20d6d40a96b8275da4b646719c087ea4dd869c4464d400f89e3790a3d44af2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lznm8" Oct 29 11:47:48.300260 kubelet[2763]: E1029 11:47:48.300240 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lznm8_kube-system(8d12e45e-28f7-4f72-aa9c-a877f939496f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lznm8_kube-system(8d12e45e-28f7-4f72-aa9c-a877f939496f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c20d6d40a96b8275da4b646719c087ea4dd869c4464d400f89e3790a3d44af2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lznm8" podUID="8d12e45e-28f7-4f72-aa9c-a877f939496f" Oct 29 11:47:48.678153 systemd[1]: Created slice kubepods-besteffort-pod19d4b9de_b24e_493e_a2fd_91157fcb3c0a.slice - libcontainer container kubepods-besteffort-pod19d4b9de_b24e_493e_a2fd_91157fcb3c0a.slice. Oct 29 11:47:48.680142 containerd[1601]: time="2025-10-29T11:47:48.680107735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tpbbb,Uid:19d4b9de-b24e-493e-a2fd-91157fcb3c0a,Namespace:calico-system,Attempt:0,}" Oct 29 11:47:48.792981 containerd[1601]: time="2025-10-29T11:47:48.792921824Z" level=error msg="Failed to destroy network for sandbox \"816a702c345ba1cf47d43ff3f3bcd74bf87a6cdcf7527b4cbed32c9b588b15f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.837078 containerd[1601]: time="2025-10-29T11:47:48.837029340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tpbbb,Uid:19d4b9de-b24e-493e-a2fd-91157fcb3c0a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"816a702c345ba1cf47d43ff3f3bcd74bf87a6cdcf7527b4cbed32c9b588b15f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.837286 kubelet[2763]: E1029 11:47:48.837239 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"816a702c345ba1cf47d43ff3f3bcd74bf87a6cdcf7527b4cbed32c9b588b15f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 11:47:48.837506 
kubelet[2763]: E1029 11:47:48.837303 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"816a702c345ba1cf47d43ff3f3bcd74bf87a6cdcf7527b4cbed32c9b588b15f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tpbbb" Oct 29 11:47:48.837506 kubelet[2763]: E1029 11:47:48.837325 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"816a702c345ba1cf47d43ff3f3bcd74bf87a6cdcf7527b4cbed32c9b588b15f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tpbbb" Oct 29 11:47:48.837506 kubelet[2763]: E1029 11:47:48.837383 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tpbbb_calico-system(19d4b9de-b24e-493e-a2fd-91157fcb3c0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tpbbb_calico-system(19d4b9de-b24e-493e-a2fd-91157fcb3c0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"816a702c345ba1cf47d43ff3f3bcd74bf87a6cdcf7527b4cbed32c9b588b15f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tpbbb" podUID="19d4b9de-b24e-493e-a2fd-91157fcb3c0a" Oct 29 11:47:52.296006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount363108544.mount: Deactivated successfully. 
Oct 29 11:47:52.633359 containerd[1601]: time="2025-10-29T11:47:52.633297761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:52.634076 containerd[1601]: time="2025-10-29T11:47:52.634042374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Oct 29 11:47:52.635162 containerd[1601]: time="2025-10-29T11:47:52.635112027Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:52.637400 containerd[1601]: time="2025-10-29T11:47:52.637202247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 11:47:52.637751 containerd[1601]: time="2025-10-29T11:47:52.637719311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.847917481s" Oct 29 11:47:52.637751 containerd[1601]: time="2025-10-29T11:47:52.637748635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Oct 29 11:47:52.649049 containerd[1601]: time="2025-10-29T11:47:52.649007875Z" level=info msg="CreateContainer within sandbox \"eb133f1dd64fc5f9192e6395c989e0c080febf0e68760c50ae85090cf976e6f3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 29 11:47:52.658598 containerd[1601]: time="2025-10-29T11:47:52.656264538Z" level=info msg="Container 
b7891fac55aba6b890cf8a81e9a5a724ecf8309b8790fc6abe816e18762b8b70: CDI devices from CRI Config.CDIDevices: []" Oct 29 11:47:52.666967 containerd[1601]: time="2025-10-29T11:47:52.665916139Z" level=info msg="CreateContainer within sandbox \"eb133f1dd64fc5f9192e6395c989e0c080febf0e68760c50ae85090cf976e6f3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b7891fac55aba6b890cf8a81e9a5a724ecf8309b8790fc6abe816e18762b8b70\"" Oct 29 11:47:52.667652 containerd[1601]: time="2025-10-29T11:47:52.667607589Z" level=info msg="StartContainer for \"b7891fac55aba6b890cf8a81e9a5a724ecf8309b8790fc6abe816e18762b8b70\"" Oct 29 11:47:52.669443 containerd[1601]: time="2025-10-29T11:47:52.669412654Z" level=info msg="connecting to shim b7891fac55aba6b890cf8a81e9a5a724ecf8309b8790fc6abe816e18762b8b70" address="unix:///run/containerd/s/9415bd04822be97e749cf921464080be7dcf105d8b86c19cc5c2b47aa5b88d8c" protocol=ttrpc version=3 Oct 29 11:47:52.690149 systemd[1]: Started cri-containerd-b7891fac55aba6b890cf8a81e9a5a724ecf8309b8790fc6abe816e18762b8b70.scope - libcontainer container b7891fac55aba6b890cf8a81e9a5a724ecf8309b8790fc6abe816e18762b8b70. Oct 29 11:47:52.724619 containerd[1601]: time="2025-10-29T11:47:52.724582796Z" level=info msg="StartContainer for \"b7891fac55aba6b890cf8a81e9a5a724ecf8309b8790fc6abe816e18762b8b70\" returns successfully" Oct 29 11:47:52.817220 kubelet[2763]: E1029 11:47:52.817171 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:52.847375 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 29 11:47:52.847628 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 29 11:47:52.848339 kubelet[2763]: I1029 11:47:52.848282 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dgwc8" podStartSLOduration=0.83454657 podStartE2EDuration="13.848266661s" podCreationTimestamp="2025-10-29 11:47:39 +0000 UTC" firstStartedPulling="2025-10-29 11:47:39.624876169 +0000 UTC m=+25.041053784" lastFinishedPulling="2025-10-29 11:47:52.63859622 +0000 UTC m=+38.054773875" observedRunningTime="2025-10-29 11:47:52.847608219 +0000 UTC m=+38.263785914" watchObservedRunningTime="2025-10-29 11:47:52.848266661 +0000 UTC m=+38.264444316" Oct 29 11:47:53.037394 kubelet[2763]: I1029 11:47:53.037263 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzhl6\" (UniqueName: \"kubernetes.io/projected/084f74b8-5e73-4bda-886a-d717a73225ab-kube-api-access-zzhl6\") pod \"084f74b8-5e73-4bda-886a-d717a73225ab\" (UID: \"084f74b8-5e73-4bda-886a-d717a73225ab\") " Oct 29 11:47:53.037394 kubelet[2763]: I1029 11:47:53.037328 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/084f74b8-5e73-4bda-886a-d717a73225ab-whisker-backend-key-pair\") pod \"084f74b8-5e73-4bda-886a-d717a73225ab\" (UID: \"084f74b8-5e73-4bda-886a-d717a73225ab\") " Oct 29 11:47:53.037394 kubelet[2763]: I1029 11:47:53.037368 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/084f74b8-5e73-4bda-886a-d717a73225ab-whisker-ca-bundle\") pod \"084f74b8-5e73-4bda-886a-d717a73225ab\" (UID: \"084f74b8-5e73-4bda-886a-d717a73225ab\") " Oct 29 11:47:53.046998 kubelet[2763]: I1029 11:47:53.046565 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/084f74b8-5e73-4bda-886a-d717a73225ab-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"084f74b8-5e73-4bda-886a-d717a73225ab" (UID: "084f74b8-5e73-4bda-886a-d717a73225ab"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 11:47:53.051176 kubelet[2763]: I1029 11:47:53.051137 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/084f74b8-5e73-4bda-886a-d717a73225ab-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "084f74b8-5e73-4bda-886a-d717a73225ab" (UID: "084f74b8-5e73-4bda-886a-d717a73225ab"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 11:47:53.051260 kubelet[2763]: I1029 11:47:53.051177 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/084f74b8-5e73-4bda-886a-d717a73225ab-kube-api-access-zzhl6" (OuterVolumeSpecName: "kube-api-access-zzhl6") pod "084f74b8-5e73-4bda-886a-d717a73225ab" (UID: "084f74b8-5e73-4bda-886a-d717a73225ab"). InnerVolumeSpecName "kube-api-access-zzhl6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 11:47:53.138099 kubelet[2763]: I1029 11:47:53.138035 2763 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/084f74b8-5e73-4bda-886a-d717a73225ab-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 29 11:47:53.138099 kubelet[2763]: I1029 11:47:53.138066 2763 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zzhl6\" (UniqueName: \"kubernetes.io/projected/084f74b8-5e73-4bda-886a-d717a73225ab-kube-api-access-zzhl6\") on node \"localhost\" DevicePath \"\"" Oct 29 11:47:53.138099 kubelet[2763]: I1029 11:47:53.138077 2763 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/084f74b8-5e73-4bda-886a-d717a73225ab-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 29 11:47:53.296812 systemd[1]: var-lib-kubelet-pods-084f74b8\x2d5e73\x2d4bda\x2d886a\x2dd717a73225ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzzhl6.mount: Deactivated successfully. Oct 29 11:47:53.296914 systemd[1]: var-lib-kubelet-pods-084f74b8\x2d5e73\x2d4bda\x2d886a\x2dd717a73225ab-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 29 11:47:53.818882 kubelet[2763]: I1029 11:47:53.818823 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 11:47:53.819403 kubelet[2763]: E1029 11:47:53.819202 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:47:53.823091 systemd[1]: Removed slice kubepods-besteffort-pod084f74b8_5e73_4bda_886a_d717a73225ab.slice - libcontainer container kubepods-besteffort-pod084f74b8_5e73_4bda_886a_d717a73225ab.slice. 
Oct 29 11:47:53.878130 systemd[1]: Created slice kubepods-besteffort-podfb600d11_592b_4097_9fd7_ec12a58553d8.slice - libcontainer container kubepods-besteffort-podfb600d11_592b_4097_9fd7_ec12a58553d8.slice. Oct 29 11:47:53.943156 kubelet[2763]: I1029 11:47:53.943059 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb600d11-592b-4097-9fd7-ec12a58553d8-whisker-ca-bundle\") pod \"whisker-98cbc6c6c-m8wcp\" (UID: \"fb600d11-592b-4097-9fd7-ec12a58553d8\") " pod="calico-system/whisker-98cbc6c6c-m8wcp" Oct 29 11:47:53.943156 kubelet[2763]: I1029 11:47:53.943107 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fb600d11-592b-4097-9fd7-ec12a58553d8-whisker-backend-key-pair\") pod \"whisker-98cbc6c6c-m8wcp\" (UID: \"fb600d11-592b-4097-9fd7-ec12a58553d8\") " pod="calico-system/whisker-98cbc6c6c-m8wcp" Oct 29 11:47:53.943156 kubelet[2763]: I1029 11:47:53.943130 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtv2r\" (UniqueName: \"kubernetes.io/projected/fb600d11-592b-4097-9fd7-ec12a58553d8-kube-api-access-mtv2r\") pod \"whisker-98cbc6c6c-m8wcp\" (UID: \"fb600d11-592b-4097-9fd7-ec12a58553d8\") " pod="calico-system/whisker-98cbc6c6c-m8wcp" Oct 29 11:47:53.991316 systemd[1]: Started sshd@7-10.0.0.75:22-10.0.0.1:35002.service - OpenSSH per-connection server daemon (10.0.0.1:35002). Oct 29 11:47:54.058074 sshd[3938]: Accepted publickey for core from 10.0.0.1 port 35002 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:47:54.060072 sshd-session[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:47:54.064271 systemd-logind[1586]: New session 8 of user core. 
Oct 29 11:47:54.074099 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 29 11:47:54.167320 sshd[3943]: Connection closed by 10.0.0.1 port 35002 Oct 29 11:47:54.168156 sshd-session[3938]: pam_unix(sshd:session): session closed for user core Oct 29 11:47:54.172093 systemd[1]: sshd@7-10.0.0.75:22-10.0.0.1:35002.service: Deactivated successfully. Oct 29 11:47:54.174641 systemd[1]: session-8.scope: Deactivated successfully. Oct 29 11:47:54.175387 systemd-logind[1586]: Session 8 logged out. Waiting for processes to exit. Oct 29 11:47:54.176231 systemd-logind[1586]: Removed session 8. Oct 29 11:47:54.182874 containerd[1601]: time="2025-10-29T11:47:54.182600342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-98cbc6c6c-m8wcp,Uid:fb600d11-592b-4097-9fd7-ec12a58553d8,Namespace:calico-system,Attempt:0,}" Oct 29 11:47:54.425309 systemd-networkd[1504]: cali1cf1a1a051f: Link UP Oct 29 11:47:54.425900 systemd-networkd[1504]: cali1cf1a1a051f: Gained carrier Oct 29 11:47:54.444176 containerd[1601]: 2025-10-29 11:47:54.224 [INFO][3976] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 11:47:54.444176 containerd[1601]: 2025-10-29 11:47:54.270 [INFO][3976] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--98cbc6c6c--m8wcp-eth0 whisker-98cbc6c6c- calico-system fb600d11-592b-4097-9fd7-ec12a58553d8 959 0 2025-10-29 11:47:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:98cbc6c6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-98cbc6c6c-m8wcp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1cf1a1a051f [] [] }} ContainerID="1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" Namespace="calico-system" Pod="whisker-98cbc6c6c-m8wcp" 
WorkloadEndpoint="localhost-k8s-whisker--98cbc6c6c--m8wcp-" Oct 29 11:47:54.444176 containerd[1601]: 2025-10-29 11:47:54.270 [INFO][3976] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" Namespace="calico-system" Pod="whisker-98cbc6c6c-m8wcp" WorkloadEndpoint="localhost-k8s-whisker--98cbc6c6c--m8wcp-eth0" Oct 29 11:47:54.444176 containerd[1601]: 2025-10-29 11:47:54.363 [INFO][4067] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" HandleID="k8s-pod-network.1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" Workload="localhost-k8s-whisker--98cbc6c6c--m8wcp-eth0" Oct 29 11:47:54.444416 containerd[1601]: 2025-10-29 11:47:54.363 [INFO][4067] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" HandleID="k8s-pod-network.1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" Workload="localhost-k8s-whisker--98cbc6c6c--m8wcp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000126240), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-98cbc6c6c-m8wcp", "timestamp":"2025-10-29 11:47:54.363582497 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 11:47:54.444416 containerd[1601]: 2025-10-29 11:47:54.363 [INFO][4067] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 11:47:54.444416 containerd[1601]: 2025-10-29 11:47:54.364 [INFO][4067] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 11:47:54.444416 containerd[1601]: 2025-10-29 11:47:54.364 [INFO][4067] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 11:47:54.444416 containerd[1601]: 2025-10-29 11:47:54.381 [INFO][4067] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" host="localhost" Oct 29 11:47:54.444416 containerd[1601]: 2025-10-29 11:47:54.388 [INFO][4067] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 11:47:54.444416 containerd[1601]: 2025-10-29 11:47:54.393 [INFO][4067] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 11:47:54.444416 containerd[1601]: 2025-10-29 11:47:54.398 [INFO][4067] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 11:47:54.444416 containerd[1601]: 2025-10-29 11:47:54.401 [INFO][4067] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 11:47:54.444416 containerd[1601]: 2025-10-29 11:47:54.401 [INFO][4067] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" host="localhost" Oct 29 11:47:54.444621 containerd[1601]: 2025-10-29 11:47:54.403 [INFO][4067] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0 Oct 29 11:47:54.444621 containerd[1601]: 2025-10-29 11:47:54.407 [INFO][4067] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" host="localhost" Oct 29 11:47:54.444621 containerd[1601]: 2025-10-29 11:47:54.413 [INFO][4067] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" host="localhost" Oct 29 11:47:54.444621 containerd[1601]: 2025-10-29 11:47:54.413 [INFO][4067] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" host="localhost" Oct 29 11:47:54.444621 containerd[1601]: 2025-10-29 11:47:54.413 [INFO][4067] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 11:47:54.444621 containerd[1601]: 2025-10-29 11:47:54.413 [INFO][4067] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" HandleID="k8s-pod-network.1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" Workload="localhost-k8s-whisker--98cbc6c6c--m8wcp-eth0" Oct 29 11:47:54.444733 containerd[1601]: 2025-10-29 11:47:54.416 [INFO][3976] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" Namespace="calico-system" Pod="whisker-98cbc6c6c-m8wcp" WorkloadEndpoint="localhost-k8s-whisker--98cbc6c6c--m8wcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--98cbc6c6c--m8wcp-eth0", GenerateName:"whisker-98cbc6c6c-", Namespace:"calico-system", SelfLink:"", UID:"fb600d11-592b-4097-9fd7-ec12a58553d8", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 11, 47, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"98cbc6c6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-98cbc6c6c-m8wcp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1cf1a1a051f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 11:47:54.444733 containerd[1601]: 2025-10-29 11:47:54.416 [INFO][3976] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" Namespace="calico-system" Pod="whisker-98cbc6c6c-m8wcp" WorkloadEndpoint="localhost-k8s-whisker--98cbc6c6c--m8wcp-eth0" Oct 29 11:47:54.444807 containerd[1601]: 2025-10-29 11:47:54.416 [INFO][3976] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1cf1a1a051f ContainerID="1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" Namespace="calico-system" Pod="whisker-98cbc6c6c-m8wcp" WorkloadEndpoint="localhost-k8s-whisker--98cbc6c6c--m8wcp-eth0" Oct 29 11:47:54.444807 containerd[1601]: 2025-10-29 11:47:54.426 [INFO][3976] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" Namespace="calico-system" Pod="whisker-98cbc6c6c-m8wcp" WorkloadEndpoint="localhost-k8s-whisker--98cbc6c6c--m8wcp-eth0" Oct 29 11:47:54.444845 containerd[1601]: 2025-10-29 11:47:54.426 [INFO][3976] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" Namespace="calico-system" Pod="whisker-98cbc6c6c-m8wcp" 
WorkloadEndpoint="localhost-k8s-whisker--98cbc6c6c--m8wcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--98cbc6c6c--m8wcp-eth0", GenerateName:"whisker-98cbc6c6c-", Namespace:"calico-system", SelfLink:"", UID:"fb600d11-592b-4097-9fd7-ec12a58553d8", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 11, 47, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"98cbc6c6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0", Pod:"whisker-98cbc6c6c-m8wcp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1cf1a1a051f", MAC:"fa:42:da:7b:ac:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 11:47:54.444893 containerd[1601]: 2025-10-29 11:47:54.441 [INFO][3976] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" Namespace="calico-system" Pod="whisker-98cbc6c6c-m8wcp" WorkloadEndpoint="localhost-k8s-whisker--98cbc6c6c--m8wcp-eth0" Oct 29 11:47:54.519640 containerd[1601]: time="2025-10-29T11:47:54.519587550Z" level=info msg="connecting to shim 
1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0" address="unix:///run/containerd/s/cba5f96784658be50f0688b16d017397dcc65db0d0148855429522ef9fea4737" namespace=k8s.io protocol=ttrpc version=3 Oct 29 11:47:54.552154 systemd[1]: Started cri-containerd-1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0.scope - libcontainer container 1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0. Oct 29 11:47:54.563866 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 11:47:54.591722 containerd[1601]: time="2025-10-29T11:47:54.591681721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-98cbc6c6c-m8wcp,Uid:fb600d11-592b-4097-9fd7-ec12a58553d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c9e801ac4778bb8f308da3b0b53ed6187c30d8994ccc7bc2687a27d6f1843b0\"" Oct 29 11:47:54.600385 containerd[1601]: time="2025-10-29T11:47:54.600349462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 11:47:54.666341 systemd-networkd[1504]: vxlan.calico: Link UP Oct 29 11:47:54.666349 systemd-networkd[1504]: vxlan.calico: Gained carrier Oct 29 11:47:54.682144 kubelet[2763]: I1029 11:47:54.681983 2763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="084f74b8-5e73-4bda-886a-d717a73225ab" path="/var/lib/kubelet/pods/084f74b8-5e73-4bda-886a-d717a73225ab/volumes" Oct 29 11:47:54.814886 containerd[1601]: time="2025-10-29T11:47:54.814839603Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 11:47:54.815744 containerd[1601]: time="2025-10-29T11:47:54.815695544Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 
11:47:54.815779 containerd[1601]: time="2025-10-29T11:47:54.815717546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 11:47:54.820027 kubelet[2763]: E1029 11:47:54.819921 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 11:47:54.822695 kubelet[2763]: E1029 11:47:54.822511 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 11:47:54.825715 kubelet[2763]: E1029 11:47:54.825653 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:63d9a4c1b08c40218229a9d7e57cb0f1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mtv2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-98cbc6c6c-m8wcp_calico-system(fb600d11-592b-4097-9fd7-ec12a58553d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 11:47:54.828683 containerd[1601]: time="2025-10-29T11:47:54.828649069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 
11:47:55.045841 containerd[1601]: time="2025-10-29T11:47:55.045691258Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 11:47:55.046731 containerd[1601]: time="2025-10-29T11:47:55.046655409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 11:47:55.046731 containerd[1601]: time="2025-10-29T11:47:55.046696213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 11:47:55.047000 kubelet[2763]: E1029 11:47:55.046901 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 11:47:55.047068 kubelet[2763]: E1029 11:47:55.047013 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 11:47:55.047254 kubelet[2763]: E1029 11:47:55.047160 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mtv2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-98cbc6c6c-m8wcp_calico-system(fb600d11-592b-4097-9fd7-ec12a58553d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 11:47:55.048415 kubelet[2763]: E1029 11:47:55.048349 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-98cbc6c6c-m8wcp" podUID="fb600d11-592b-4097-9fd7-ec12a58553d8" Oct 29 11:47:55.595075 systemd-networkd[1504]: cali1cf1a1a051f: Gained IPv6LL Oct 29 11:47:55.828028 kubelet[2763]: E1029 11:47:55.827542 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-98cbc6c6c-m8wcp" podUID="fb600d11-592b-4097-9fd7-ec12a58553d8" Oct 29 11:47:55.852039 systemd-networkd[1504]: vxlan.calico: Gained IPv6LL Oct 29 11:47:58.680304 containerd[1601]: time="2025-10-29T11:47:58.680234358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786f955c4-cnjbr,Uid:d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7,Namespace:calico-apiserver,Attempt:0,}" Oct 29 11:47:58.810027 systemd-networkd[1504]: calid52e414438e: Link UP Oct 29 11:47:58.810365 systemd-networkd[1504]: calid52e414438e: Gained carrier Oct 29 11:47:58.824588 containerd[1601]: 2025-10-29 11:47:58.739 [INFO][4241] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--786f955c4--cnjbr-eth0 calico-apiserver-786f955c4- calico-apiserver d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7 847 0 2025-10-29 11:47:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:786f955c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-786f955c4-cnjbr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid52e414438e [] [] }} ContainerID="dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" Namespace="calico-apiserver" Pod="calico-apiserver-786f955c4-cnjbr" WorkloadEndpoint="localhost-k8s-calico--apiserver--786f955c4--cnjbr-" Oct 29 11:47:58.824588 containerd[1601]: 2025-10-29 11:47:58.739 [INFO][4241] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" Namespace="calico-apiserver" Pod="calico-apiserver-786f955c4-cnjbr" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--786f955c4--cnjbr-eth0" Oct 29 11:47:58.824588 containerd[1601]: 2025-10-29 11:47:58.774 [INFO][4256] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" HandleID="k8s-pod-network.dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" Workload="localhost-k8s-calico--apiserver--786f955c4--cnjbr-eth0" Oct 29 11:47:58.824859 containerd[1601]: 2025-10-29 11:47:58.774 [INFO][4256] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" HandleID="k8s-pod-network.dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" Workload="localhost-k8s-calico--apiserver--786f955c4--cnjbr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001364a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-786f955c4-cnjbr", "timestamp":"2025-10-29 11:47:58.774645526 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 11:47:58.824859 containerd[1601]: 2025-10-29 11:47:58.774 [INFO][4256] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 11:47:58.824859 containerd[1601]: 2025-10-29 11:47:58.774 [INFO][4256] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 11:47:58.824859 containerd[1601]: 2025-10-29 11:47:58.774 [INFO][4256] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 11:47:58.824859 containerd[1601]: 2025-10-29 11:47:58.784 [INFO][4256] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" host="localhost" Oct 29 11:47:58.824859 containerd[1601]: 2025-10-29 11:47:58.788 [INFO][4256] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 11:47:58.824859 containerd[1601]: 2025-10-29 11:47:58.792 [INFO][4256] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 11:47:58.824859 containerd[1601]: 2025-10-29 11:47:58.793 [INFO][4256] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 11:47:58.824859 containerd[1601]: 2025-10-29 11:47:58.795 [INFO][4256] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 11:47:58.824859 containerd[1601]: 2025-10-29 11:47:58.796 [INFO][4256] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" host="localhost" Oct 29 11:47:58.825145 containerd[1601]: 2025-10-29 11:47:58.797 [INFO][4256] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083 Oct 29 11:47:58.825145 containerd[1601]: 2025-10-29 11:47:58.800 [INFO][4256] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" host="localhost" Oct 29 11:47:58.825145 containerd[1601]: 2025-10-29 11:47:58.806 [INFO][4256] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" host="localhost" Oct 29 11:47:58.825145 containerd[1601]: 2025-10-29 11:47:58.806 [INFO][4256] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" host="localhost" Oct 29 11:47:58.825145 containerd[1601]: 2025-10-29 11:47:58.806 [INFO][4256] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 11:47:58.825145 containerd[1601]: 2025-10-29 11:47:58.806 [INFO][4256] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" HandleID="k8s-pod-network.dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" Workload="localhost-k8s-calico--apiserver--786f955c4--cnjbr-eth0" Oct 29 11:47:58.825259 containerd[1601]: 2025-10-29 11:47:58.808 [INFO][4241] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" Namespace="calico-apiserver" Pod="calico-apiserver-786f955c4-cnjbr" WorkloadEndpoint="localhost-k8s-calico--apiserver--786f955c4--cnjbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--786f955c4--cnjbr-eth0", GenerateName:"calico-apiserver-786f955c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 11, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"786f955c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-786f955c4-cnjbr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid52e414438e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 11:47:58.825313 containerd[1601]: 2025-10-29 11:47:58.808 [INFO][4241] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" Namespace="calico-apiserver" Pod="calico-apiserver-786f955c4-cnjbr" WorkloadEndpoint="localhost-k8s-calico--apiserver--786f955c4--cnjbr-eth0" Oct 29 11:47:58.825313 containerd[1601]: 2025-10-29 11:47:58.808 [INFO][4241] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid52e414438e ContainerID="dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" Namespace="calico-apiserver" Pod="calico-apiserver-786f955c4-cnjbr" WorkloadEndpoint="localhost-k8s-calico--apiserver--786f955c4--cnjbr-eth0" Oct 29 11:47:58.825313 containerd[1601]: 2025-10-29 11:47:58.810 [INFO][4241] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" Namespace="calico-apiserver" Pod="calico-apiserver-786f955c4-cnjbr" WorkloadEndpoint="localhost-k8s-calico--apiserver--786f955c4--cnjbr-eth0" Oct 29 11:47:58.825376 containerd[1601]: 2025-10-29 11:47:58.811 [INFO][4241] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" Namespace="calico-apiserver" Pod="calico-apiserver-786f955c4-cnjbr" WorkloadEndpoint="localhost-k8s-calico--apiserver--786f955c4--cnjbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--786f955c4--cnjbr-eth0", GenerateName:"calico-apiserver-786f955c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 11, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"786f955c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083", Pod:"calico-apiserver-786f955c4-cnjbr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid52e414438e", MAC:"46:33:7b:3d:be:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 11:47:58.825423 containerd[1601]: 2025-10-29 11:47:58.819 [INFO][4241] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" Namespace="calico-apiserver" Pod="calico-apiserver-786f955c4-cnjbr" WorkloadEndpoint="localhost-k8s-calico--apiserver--786f955c4--cnjbr-eth0" Oct 29 11:47:58.878969 containerd[1601]: time="2025-10-29T11:47:58.878909826Z" level=info msg="connecting to shim dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083" address="unix:///run/containerd/s/a0ab7cc2c35e6fc6c135452abd61a2ec1cac14c8c7a6e278c9dc63fb167fb8bf" namespace=k8s.io protocol=ttrpc version=3 Oct 29 11:47:58.910153 systemd[1]: Started cri-containerd-dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083.scope - libcontainer container dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083. Oct 29 11:47:58.921573 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 11:47:58.946150 containerd[1601]: time="2025-10-29T11:47:58.946031718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786f955c4-cnjbr,Uid:d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dbc5d7bad0260615b850ccfe3bbb84eacfee8626f54f374047112cd33b592083\"" Oct 29 11:47:58.951108 containerd[1601]: time="2025-10-29T11:47:58.951075577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 11:47:59.183057 systemd[1]: Started sshd@8-10.0.0.75:22-10.0.0.1:35012.service - OpenSSH per-connection server daemon (10.0.0.1:35012). Oct 29 11:47:59.271633 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 35012 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:47:59.273799 sshd-session[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:47:59.279284 systemd-logind[1586]: New session 9 of user core. Oct 29 11:47:59.285271 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 29 11:47:59.369432 containerd[1601]: time="2025-10-29T11:47:59.369366616Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 11:47:59.370265 containerd[1601]: time="2025-10-29T11:47:59.370212105Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 11:47:59.370420 containerd[1601]: time="2025-10-29T11:47:59.370245948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 11:47:59.370523 kubelet[2763]: E1029 11:47:59.370491 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 11:47:59.370878 kubelet[2763]: E1029 11:47:59.370537 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 11:47:59.370878 kubelet[2763]: E1029 11:47:59.370670 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9gpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-786f955c4-cnjbr_calico-apiserver(d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 11:47:59.371998 kubelet[2763]: E1029 11:47:59.371919 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786f955c4-cnjbr" podUID="d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7" Oct 29 11:47:59.419033 sshd[4325]: Connection closed by 10.0.0.1 port 35012 Oct 29 11:47:59.419507 sshd-session[4322]: pam_unix(sshd:session): session closed for user core Oct 29 11:47:59.423535 systemd-logind[1586]: Session 9 logged out. Waiting for processes to exit. Oct 29 11:47:59.423704 systemd[1]: sshd@8-10.0.0.75:22-10.0.0.1:35012.service: Deactivated successfully. Oct 29 11:47:59.426201 systemd[1]: session-9.scope: Deactivated successfully. Oct 29 11:47:59.427695 systemd-logind[1586]: Removed session 9. 
Oct 29 11:47:59.671786 containerd[1601]: time="2025-10-29T11:47:59.671652494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tpbbb,Uid:19d4b9de-b24e-493e-a2fd-91157fcb3c0a,Namespace:calico-system,Attempt:0,}" Oct 29 11:47:59.766836 systemd-networkd[1504]: calib13e52e0f52: Link UP Oct 29 11:47:59.767252 systemd-networkd[1504]: calib13e52e0f52: Gained carrier Oct 29 11:47:59.778691 containerd[1601]: 2025-10-29 11:47:59.707 [INFO][4345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--tpbbb-eth0 csi-node-driver- calico-system 19d4b9de-b24e-493e-a2fd-91157fcb3c0a 745 0 2025-10-29 11:47:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-tpbbb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib13e52e0f52 [] [] }} ContainerID="1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" Namespace="calico-system" Pod="csi-node-driver-tpbbb" WorkloadEndpoint="localhost-k8s-csi--node--driver--tpbbb-" Oct 29 11:47:59.778691 containerd[1601]: 2025-10-29 11:47:59.707 [INFO][4345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" Namespace="calico-system" Pod="csi-node-driver-tpbbb" WorkloadEndpoint="localhost-k8s-csi--node--driver--tpbbb-eth0" Oct 29 11:47:59.778691 containerd[1601]: 2025-10-29 11:47:59.731 [INFO][4354] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" HandleID="k8s-pod-network.1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" 
Workload="localhost-k8s-csi--node--driver--tpbbb-eth0" Oct 29 11:47:59.779284 containerd[1601]: 2025-10-29 11:47:59.731 [INFO][4354] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" HandleID="k8s-pod-network.1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" Workload="localhost-k8s-csi--node--driver--tpbbb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ca080), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-tpbbb", "timestamp":"2025-10-29 11:47:59.731022779 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 11:47:59.779284 containerd[1601]: 2025-10-29 11:47:59.731 [INFO][4354] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 11:47:59.779284 containerd[1601]: 2025-10-29 11:47:59.731 [INFO][4354] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 11:47:59.779284 containerd[1601]: 2025-10-29 11:47:59.731 [INFO][4354] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 11:47:59.779284 containerd[1601]: 2025-10-29 11:47:59.740 [INFO][4354] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" host="localhost" Oct 29 11:47:59.779284 containerd[1601]: 2025-10-29 11:47:59.745 [INFO][4354] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 11:47:59.779284 containerd[1601]: 2025-10-29 11:47:59.749 [INFO][4354] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 11:47:59.779284 containerd[1601]: 2025-10-29 11:47:59.750 [INFO][4354] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 11:47:59.779284 containerd[1601]: 2025-10-29 11:47:59.752 [INFO][4354] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 11:47:59.779284 containerd[1601]: 2025-10-29 11:47:59.752 [INFO][4354] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" host="localhost" Oct 29 11:47:59.779670 containerd[1601]: 2025-10-29 11:47:59.754 [INFO][4354] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e Oct 29 11:47:59.779670 containerd[1601]: 2025-10-29 11:47:59.757 [INFO][4354] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" host="localhost" Oct 29 11:47:59.779670 containerd[1601]: 2025-10-29 11:47:59.763 [INFO][4354] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" host="localhost" Oct 29 11:47:59.779670 containerd[1601]: 2025-10-29 11:47:59.763 [INFO][4354] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" host="localhost" Oct 29 11:47:59.779670 containerd[1601]: 2025-10-29 11:47:59.763 [INFO][4354] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 11:47:59.779670 containerd[1601]: 2025-10-29 11:47:59.763 [INFO][4354] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" HandleID="k8s-pod-network.1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" Workload="localhost-k8s-csi--node--driver--tpbbb-eth0" Oct 29 11:47:59.779810 containerd[1601]: 2025-10-29 11:47:59.765 [INFO][4345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" Namespace="calico-system" Pod="csi-node-driver-tpbbb" WorkloadEndpoint="localhost-k8s-csi--node--driver--tpbbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tpbbb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19d4b9de-b24e-493e-a2fd-91157fcb3c0a", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 11, 47, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-tpbbb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib13e52e0f52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 11:47:59.779865 containerd[1601]: 2025-10-29 11:47:59.765 [INFO][4345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" Namespace="calico-system" Pod="csi-node-driver-tpbbb" WorkloadEndpoint="localhost-k8s-csi--node--driver--tpbbb-eth0" Oct 29 11:47:59.779865 containerd[1601]: 2025-10-29 11:47:59.765 [INFO][4345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib13e52e0f52 ContainerID="1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" Namespace="calico-system" Pod="csi-node-driver-tpbbb" WorkloadEndpoint="localhost-k8s-csi--node--driver--tpbbb-eth0" Oct 29 11:47:59.779865 containerd[1601]: 2025-10-29 11:47:59.767 [INFO][4345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" Namespace="calico-system" Pod="csi-node-driver-tpbbb" WorkloadEndpoint="localhost-k8s-csi--node--driver--tpbbb-eth0" Oct 29 11:47:59.779994 containerd[1601]: 2025-10-29 11:47:59.767 [INFO][4345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" 
Namespace="calico-system" Pod="csi-node-driver-tpbbb" WorkloadEndpoint="localhost-k8s-csi--node--driver--tpbbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tpbbb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19d4b9de-b24e-493e-a2fd-91157fcb3c0a", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 11, 47, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e", Pod:"csi-node-driver-tpbbb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib13e52e0f52", MAC:"46:c7:e0:12:f4:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 11:47:59.780133 containerd[1601]: 2025-10-29 11:47:59.775 [INFO][4345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" Namespace="calico-system" Pod="csi-node-driver-tpbbb" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--tpbbb-eth0" Oct 29 11:47:59.797182 containerd[1601]: time="2025-10-29T11:47:59.797136610Z" level=info msg="connecting to shim 1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e" address="unix:///run/containerd/s/c6f2f40564013b213a61312a6739d52fe150ce4d2cb97cec6f2bf3a69badb101" namespace=k8s.io protocol=ttrpc version=3 Oct 29 11:47:59.837151 systemd[1]: Started cri-containerd-1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e.scope - libcontainer container 1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e. Oct 29 11:47:59.848245 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 11:47:59.869418 kubelet[2763]: E1029 11:47:59.869290 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786f955c4-cnjbr" podUID="d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7" Oct 29 11:47:59.871359 containerd[1601]: time="2025-10-29T11:47:59.871281480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tpbbb,Uid:19d4b9de-b24e-493e-a2fd-91157fcb3c0a,Namespace:calico-system,Attempt:0,} returns sandbox id \"1800444089935ab9015251cfaa550d916eb4ec145897ba1c863aed6cb5db1b0e\"" Oct 29 11:47:59.874683 containerd[1601]: time="2025-10-29T11:47:59.874645552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 11:48:00.249636 containerd[1601]: time="2025-10-29T11:48:00.249584445Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Oct 29 11:48:00.250473 containerd[1601]: time="2025-10-29T11:48:00.250438532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 11:48:00.250541 containerd[1601]: time="2025-10-29T11:48:00.250518060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 11:48:00.250707 kubelet[2763]: E1029 11:48:00.250656 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 11:48:00.250707 kubelet[2763]: E1029 11:48:00.250705 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 11:48:00.250899 kubelet[2763]: E1029 11:48:00.250863 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4st4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tpbbb_calico-system(19d4b9de-b24e-493e-a2fd-91157fcb3c0a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 11:48:00.253005 containerd[1601]: time="2025-10-29T11:48:00.252976232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 11:48:00.267093 systemd-networkd[1504]: calid52e414438e: Gained IPv6LL Oct 29 11:48:00.330959 kubelet[2763]: I1029 11:48:00.330061 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 11:48:00.330959 kubelet[2763]: E1029 11:48:00.330483 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:48:00.446808 containerd[1601]: time="2025-10-29T11:48:00.446766107Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7891fac55aba6b890cf8a81e9a5a724ecf8309b8790fc6abe816e18762b8b70\" id:\"f6591f267645d949f81bb0f88b98e72c7309d39230b5eeaa7a9f9ba94ae324db\" pid:4434 exited_at:{seconds:1761738480 nanos:446426832}" Oct 29 11:48:00.476421 containerd[1601]: time="2025-10-29T11:48:00.476369377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 11:48:00.477306 containerd[1601]: time="2025-10-29T11:48:00.477271269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 11:48:00.477464 containerd[1601]: time="2025-10-29T11:48:00.477296552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 11:48:00.478337 kubelet[2763]: E1029 11:48:00.477721 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 11:48:00.478337 kubelet[2763]: E1029 11:48:00.477835 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 11:48:00.479081 kubelet[2763]: E1029 11:48:00.478988 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4st4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tpbbb_calico-system(19d4b9de-b24e-493e-a2fd-91157fcb3c0a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 11:48:00.480226 kubelet[2763]: E1029 11:48:00.480188 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tpbbb" podUID="19d4b9de-b24e-493e-a2fd-91157fcb3c0a" Oct 29 11:48:00.531251 containerd[1601]: time="2025-10-29T11:48:00.531053614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7891fac55aba6b890cf8a81e9a5a724ecf8309b8790fc6abe816e18762b8b70\" id:\"2a1e2198902beb33cae3e8ade255b05b22e15020f2c075a190618d72d1101949\" pid:4457 exited_at:{seconds:1761738480 nanos:530653893}" Oct 29 11:48:00.671035 containerd[1601]: time="2025-10-29T11:48:00.670984457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ml2tk,Uid:7ef53ca0-e6af-4f13-8298-54b41e79363b,Namespace:calico-system,Attempt:0,}" Oct 29 11:48:00.767868 systemd-networkd[1504]: cali3c00acf7f88: Link UP Oct 29 11:48:00.768132 systemd-networkd[1504]: cali3c00acf7f88: Gained carrier Oct 29 11:48:00.780041 systemd-networkd[1504]: calib13e52e0f52: Gained IPv6LL Oct 29 11:48:00.783677 containerd[1601]: 2025-10-29 11:48:00.707 [INFO][4471] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--ml2tk-eth0 goldmane-666569f655- calico-system 7ef53ca0-e6af-4f13-8298-54b41e79363b 855 0 2025-10-29 11:47:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-ml2tk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3c00acf7f88 [] [] }} ContainerID="8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" Namespace="calico-system" Pod="goldmane-666569f655-ml2tk" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ml2tk-" Oct 29 11:48:00.783677 containerd[1601]: 2025-10-29 11:48:00.707 [INFO][4471] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" Namespace="calico-system" Pod="goldmane-666569f655-ml2tk" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ml2tk-eth0" Oct 29 11:48:00.783677 containerd[1601]: 2025-10-29 11:48:00.730 [INFO][4486] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" HandleID="k8s-pod-network.8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" Workload="localhost-k8s-goldmane--666569f655--ml2tk-eth0" Oct 29 11:48:00.784043 containerd[1601]: 2025-10-29 11:48:00.730 [INFO][4486] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" HandleID="k8s-pod-network.8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" Workload="localhost-k8s-goldmane--666569f655--ml2tk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"goldmane-666569f655-ml2tk", "timestamp":"2025-10-29 11:48:00.730572836 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 11:48:00.784043 containerd[1601]: 2025-10-29 11:48:00.730 [INFO][4486] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 11:48:00.784043 containerd[1601]: 2025-10-29 11:48:00.730 [INFO][4486] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 11:48:00.784043 containerd[1601]: 2025-10-29 11:48:00.730 [INFO][4486] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 11:48:00.784043 containerd[1601]: 2025-10-29 11:48:00.740 [INFO][4486] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" host="localhost" Oct 29 11:48:00.784043 containerd[1601]: 2025-10-29 11:48:00.744 [INFO][4486] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 11:48:00.784043 containerd[1601]: 2025-10-29 11:48:00.748 [INFO][4486] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 11:48:00.784043 containerd[1601]: 2025-10-29 11:48:00.750 [INFO][4486] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 11:48:00.784043 containerd[1601]: 2025-10-29 11:48:00.752 [INFO][4486] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 11:48:00.784043 containerd[1601]: 2025-10-29 11:48:00.753 [INFO][4486] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" host="localhost" Oct 29 11:48:00.784260 containerd[1601]: 2025-10-29 11:48:00.754 [INFO][4486] ipam/ipam.go 1780: 
Creating new handle: k8s-pod-network.8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63 Oct 29 11:48:00.784260 containerd[1601]: 2025-10-29 11:48:00.758 [INFO][4486] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" host="localhost" Oct 29 11:48:00.784260 containerd[1601]: 2025-10-29 11:48:00.763 [INFO][4486] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" host="localhost" Oct 29 11:48:00.784260 containerd[1601]: 2025-10-29 11:48:00.763 [INFO][4486] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" host="localhost" Oct 29 11:48:00.784260 containerd[1601]: 2025-10-29 11:48:00.763 [INFO][4486] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 11:48:00.784260 containerd[1601]: 2025-10-29 11:48:00.763 [INFO][4486] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" HandleID="k8s-pod-network.8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" Workload="localhost-k8s-goldmane--666569f655--ml2tk-eth0" Oct 29 11:48:00.784367 containerd[1601]: 2025-10-29 11:48:00.765 [INFO][4471] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" Namespace="calico-system" Pod="goldmane-666569f655-ml2tk" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ml2tk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ml2tk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7ef53ca0-e6af-4f13-8298-54b41e79363b", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 11, 47, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-ml2tk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3c00acf7f88", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 11:48:00.784367 containerd[1601]: 2025-10-29 11:48:00.766 [INFO][4471] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" Namespace="calico-system" Pod="goldmane-666569f655-ml2tk" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ml2tk-eth0" Oct 29 11:48:00.784433 containerd[1601]: 2025-10-29 11:48:00.766 [INFO][4471] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c00acf7f88 ContainerID="8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" Namespace="calico-system" Pod="goldmane-666569f655-ml2tk" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ml2tk-eth0" Oct 29 11:48:00.784433 containerd[1601]: 2025-10-29 11:48:00.768 [INFO][4471] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" Namespace="calico-system" Pod="goldmane-666569f655-ml2tk" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ml2tk-eth0" Oct 29 11:48:00.784476 containerd[1601]: 2025-10-29 11:48:00.768 [INFO][4471] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" Namespace="calico-system" Pod="goldmane-666569f655-ml2tk" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ml2tk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ml2tk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7ef53ca0-e6af-4f13-8298-54b41e79363b", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 11, 47, 37, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63", Pod:"goldmane-666569f655-ml2tk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3c00acf7f88", MAC:"62:d4:ed:9d:ae:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 11:48:00.784524 containerd[1601]: 2025-10-29 11:48:00.778 [INFO][4471] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" Namespace="calico-system" Pod="goldmane-666569f655-ml2tk" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ml2tk-eth0" Oct 29 11:48:00.803784 containerd[1601]: time="2025-10-29T11:48:00.803725323Z" level=info msg="connecting to shim 8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63" address="unix:///run/containerd/s/235fcf7277ffbcb355cb8d74634e19a1d486950bcaa15787abf75d19c097b721" namespace=k8s.io protocol=ttrpc version=3 Oct 29 11:48:00.826133 systemd[1]: Started cri-containerd-8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63.scope - libcontainer container 8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63. 
Oct 29 11:48:00.837416 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 11:48:00.857670 containerd[1601]: time="2025-10-29T11:48:00.857623080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ml2tk,Uid:7ef53ca0-e6af-4f13-8298-54b41e79363b,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a5a58a1baaa2b15d375b9cc90deef5e7ecaa121c236cc74440187e880fb6a63\"" Oct 29 11:48:00.859629 containerd[1601]: time="2025-10-29T11:48:00.859599122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 11:48:00.872732 kubelet[2763]: E1029 11:48:00.872690 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tpbbb" podUID="19d4b9de-b24e-493e-a2fd-91157fcb3c0a" Oct 29 11:48:00.873935 kubelet[2763]: E1029 11:48:00.873906 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:48:00.875597 kubelet[2763]: E1029 11:48:00.875570 2763 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786f955c4-cnjbr" podUID="d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7" Oct 29 11:48:01.079614 containerd[1601]: time="2025-10-29T11:48:01.079523639Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 11:48:01.080504 containerd[1601]: time="2025-10-29T11:48:01.080466014Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 11:48:01.080678 containerd[1601]: time="2025-10-29T11:48:01.080527620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 11:48:01.081052 kubelet[2763]: E1029 11:48:01.080762 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 11:48:01.081052 kubelet[2763]: E1029 11:48:01.080809 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 11:48:01.081052 kubelet[2763]: E1029 11:48:01.080988 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wjwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ml2tk_calico-system(7ef53ca0-e6af-4f13-8298-54b41e79363b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 11:48:01.082335 kubelet[2763]: E1029 11:48:01.082273 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ml2tk" podUID="7ef53ca0-e6af-4f13-8298-54b41e79363b" Oct 29 
11:48:01.671617 kubelet[2763]: E1029 11:48:01.671560 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:48:01.672727 containerd[1601]: time="2025-10-29T11:48:01.672009035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lznm8,Uid:8d12e45e-28f7-4f72-aa9c-a877f939496f,Namespace:kube-system,Attempt:0,}" Oct 29 11:48:01.673063 kubelet[2763]: E1029 11:48:01.671466 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:48:01.673597 containerd[1601]: time="2025-10-29T11:48:01.673562071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ck2dh,Uid:0e165bbe-8839-4003-b237-6d7afba67d0d,Namespace:kube-system,Attempt:0,}" Oct 29 11:48:01.792122 systemd-networkd[1504]: calid95f1013396: Link UP Oct 29 11:48:01.793047 systemd-networkd[1504]: calid95f1013396: Gained carrier Oct 29 11:48:01.814608 containerd[1601]: 2025-10-29 11:48:01.724 [INFO][4551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--lznm8-eth0 coredns-674b8bbfcf- kube-system 8d12e45e-28f7-4f72-aa9c-a877f939496f 843 0 2025-10-29 11:47:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-lznm8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid95f1013396 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" Namespace="kube-system" Pod="coredns-674b8bbfcf-lznm8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lznm8-" Oct 29 
11:48:01.814608 containerd[1601]: 2025-10-29 11:48:01.725 [INFO][4551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" Namespace="kube-system" Pod="coredns-674b8bbfcf-lznm8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lznm8-eth0" Oct 29 11:48:01.814608 containerd[1601]: 2025-10-29 11:48:01.749 [INFO][4586] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" HandleID="k8s-pod-network.9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" Workload="localhost-k8s-coredns--674b8bbfcf--lznm8-eth0" Oct 29 11:48:01.816245 containerd[1601]: 2025-10-29 11:48:01.749 [INFO][4586] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" HandleID="k8s-pod-network.9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" Workload="localhost-k8s-coredns--674b8bbfcf--lznm8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400013c4b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-lznm8", "timestamp":"2025-10-29 11:48:01.749351354 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 11:48:01.816245 containerd[1601]: 2025-10-29 11:48:01.749 [INFO][4586] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 11:48:01.816245 containerd[1601]: 2025-10-29 11:48:01.749 [INFO][4586] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 11:48:01.816245 containerd[1601]: 2025-10-29 11:48:01.749 [INFO][4586] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 11:48:01.816245 containerd[1601]: 2025-10-29 11:48:01.759 [INFO][4586] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" host="localhost" Oct 29 11:48:01.816245 containerd[1601]: 2025-10-29 11:48:01.764 [INFO][4586] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 11:48:01.816245 containerd[1601]: 2025-10-29 11:48:01.769 [INFO][4586] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 11:48:01.816245 containerd[1601]: 2025-10-29 11:48:01.771 [INFO][4586] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 11:48:01.816245 containerd[1601]: 2025-10-29 11:48:01.773 [INFO][4586] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 11:48:01.816245 containerd[1601]: 2025-10-29 11:48:01.773 [INFO][4586] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" host="localhost" Oct 29 11:48:01.816527 containerd[1601]: 2025-10-29 11:48:01.775 [INFO][4586] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889 Oct 29 11:48:01.816527 containerd[1601]: 2025-10-29 11:48:01.779 [INFO][4586] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" host="localhost" Oct 29 11:48:01.816527 containerd[1601]: 2025-10-29 11:48:01.786 [INFO][4586] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" host="localhost" Oct 29 11:48:01.816527 containerd[1601]: 2025-10-29 11:48:01.786 [INFO][4586] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" host="localhost" Oct 29 11:48:01.816527 containerd[1601]: 2025-10-29 11:48:01.786 [INFO][4586] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 11:48:01.816527 containerd[1601]: 2025-10-29 11:48:01.786 [INFO][4586] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" HandleID="k8s-pod-network.9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" Workload="localhost-k8s-coredns--674b8bbfcf--lznm8-eth0" Oct 29 11:48:01.816701 containerd[1601]: 2025-10-29 11:48:01.789 [INFO][4551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" Namespace="kube-system" Pod="coredns-674b8bbfcf-lznm8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lznm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lznm8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8d12e45e-28f7-4f72-aa9c-a877f939496f", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 11, 47, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-lznm8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid95f1013396", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 11:48:01.816778 containerd[1601]: 2025-10-29 11:48:01.789 [INFO][4551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" Namespace="kube-system" Pod="coredns-674b8bbfcf-lznm8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lznm8-eth0" Oct 29 11:48:01.816778 containerd[1601]: 2025-10-29 11:48:01.789 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid95f1013396 ContainerID="9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" Namespace="kube-system" Pod="coredns-674b8bbfcf-lznm8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lznm8-eth0" Oct 29 11:48:01.816778 containerd[1601]: 2025-10-29 11:48:01.794 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" Namespace="kube-system" Pod="coredns-674b8bbfcf-lznm8" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lznm8-eth0" Oct 29 11:48:01.816863 containerd[1601]: 2025-10-29 11:48:01.794 [INFO][4551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" Namespace="kube-system" Pod="coredns-674b8bbfcf-lznm8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lznm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lznm8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8d12e45e-28f7-4f72-aa9c-a877f939496f", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 11, 47, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889", Pod:"coredns-674b8bbfcf-lznm8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid95f1013396", MAC:"66:05:3a:e5:62:47", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 11:48:01.816863 containerd[1601]: 2025-10-29 11:48:01.813 [INFO][4551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" Namespace="kube-system" Pod="coredns-674b8bbfcf-lznm8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lznm8-eth0" Oct 29 11:48:01.844653 containerd[1601]: time="2025-10-29T11:48:01.844567586Z" level=info msg="connecting to shim 9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889" address="unix:///run/containerd/s/796ccfcdc4142a3a47203277a0a309e30b5544befc298f0759dd65119bbb1648" namespace=k8s.io protocol=ttrpc version=3 Oct 29 11:48:01.866172 systemd[1]: Started cri-containerd-9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889.scope - libcontainer container 9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889. 
Oct 29 11:48:01.877812 kubelet[2763]: E1029 11:48:01.877522 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ml2tk" podUID="7ef53ca0-e6af-4f13-8298-54b41e79363b" Oct 29 11:48:01.879431 kubelet[2763]: E1029 11:48:01.879381 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tpbbb" podUID="19d4b9de-b24e-493e-a2fd-91157fcb3c0a" Oct 29 11:48:01.883325 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 11:48:01.914794 containerd[1601]: time="2025-10-29T11:48:01.914702221Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-lznm8,Uid:8d12e45e-28f7-4f72-aa9c-a877f939496f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889\"" Oct 29 11:48:01.917042 kubelet[2763]: E1029 11:48:01.917006 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:48:01.923253 containerd[1601]: time="2025-10-29T11:48:01.923136547Z" level=info msg="CreateContainer within sandbox \"9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 11:48:01.936968 containerd[1601]: time="2025-10-29T11:48:01.936863684Z" level=info msg="Container 645cba4305b78643a0484d40e850840ea2eb1205b731db963e64e7484a6ced1b: CDI devices from CRI Config.CDIDevices: []" Oct 29 11:48:01.939140 systemd-networkd[1504]: cali27485a5dde7: Link UP Oct 29 11:48:01.939484 systemd-networkd[1504]: cali27485a5dde7: Gained carrier Oct 29 11:48:01.952119 containerd[1601]: time="2025-10-29T11:48:01.951384901Z" level=info msg="CreateContainer within sandbox \"9aade93800c951f761dca64d658d8e9329fb7f173df6dc6d12859f91341c9889\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"645cba4305b78643a0484d40e850840ea2eb1205b731db963e64e7484a6ced1b\"" Oct 29 11:48:01.953074 containerd[1601]: time="2025-10-29T11:48:01.953048388Z" level=info msg="StartContainer for \"645cba4305b78643a0484d40e850840ea2eb1205b731db963e64e7484a6ced1b\"" Oct 29 11:48:01.957727 containerd[1601]: time="2025-10-29T11:48:01.957519837Z" level=info msg="connecting to shim 645cba4305b78643a0484d40e850840ea2eb1205b731db963e64e7484a6ced1b" address="unix:///run/containerd/s/796ccfcdc4142a3a47203277a0a309e30b5544befc298f0759dd65119bbb1648" protocol=ttrpc version=3 Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.720 [INFO][4566] cni-plugin/plugin.go 340: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--ck2dh-eth0 coredns-674b8bbfcf- kube-system 0e165bbe-8839-4003-b237-6d7afba67d0d 852 0 2025-10-29 11:47:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-ck2dh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali27485a5dde7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" Namespace="kube-system" Pod="coredns-674b8bbfcf-ck2dh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ck2dh-" Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.720 [INFO][4566] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" Namespace="kube-system" Pod="coredns-674b8bbfcf-ck2dh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ck2dh-eth0" Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.749 [INFO][4580] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" HandleID="k8s-pod-network.cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" Workload="localhost-k8s-coredns--674b8bbfcf--ck2dh-eth0" Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.749 [INFO][4580] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" HandleID="k8s-pod-network.cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" Workload="localhost-k8s-coredns--674b8bbfcf--ck2dh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001373f0), Attrs:map[string]string{"namespace":"kube-system", 
"node":"localhost", "pod":"coredns-674b8bbfcf-ck2dh", "timestamp":"2025-10-29 11:48:01.749351314 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.749 [INFO][4580] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.786 [INFO][4580] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.786 [INFO][4580] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.860 [INFO][4580] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" host="localhost" Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.868 [INFO][4580] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.874 [INFO][4580] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.877 [INFO][4580] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.880 [INFO][4580] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.882 [INFO][4580] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" host="localhost" Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.888 [INFO][4580] 
ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214 Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.913 [INFO][4580] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" host="localhost" Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.925 [INFO][4580] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" host="localhost" Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.925 [INFO][4580] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" host="localhost" Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.925 [INFO][4580] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 11:48:01.965509 containerd[1601]: 2025-10-29 11:48:01.925 [INFO][4580] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" HandleID="k8s-pod-network.cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" Workload="localhost-k8s-coredns--674b8bbfcf--ck2dh-eth0" Oct 29 11:48:01.966054 containerd[1601]: 2025-10-29 11:48:01.930 [INFO][4566] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" Namespace="kube-system" Pod="coredns-674b8bbfcf-ck2dh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ck2dh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ck2dh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0e165bbe-8839-4003-b237-6d7afba67d0d", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 11, 47, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-ck2dh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27485a5dde7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 11:48:01.966054 containerd[1601]: 2025-10-29 11:48:01.931 [INFO][4566] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" Namespace="kube-system" Pod="coredns-674b8bbfcf-ck2dh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ck2dh-eth0" Oct 29 11:48:01.966054 containerd[1601]: 2025-10-29 11:48:01.931 [INFO][4566] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27485a5dde7 ContainerID="cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" Namespace="kube-system" Pod="coredns-674b8bbfcf-ck2dh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ck2dh-eth0" Oct 29 11:48:01.966054 containerd[1601]: 2025-10-29 11:48:01.940 [INFO][4566] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" Namespace="kube-system" Pod="coredns-674b8bbfcf-ck2dh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ck2dh-eth0" Oct 29 11:48:01.966054 containerd[1601]: 2025-10-29 11:48:01.943 [INFO][4566] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" Namespace="kube-system" Pod="coredns-674b8bbfcf-ck2dh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ck2dh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ck2dh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0e165bbe-8839-4003-b237-6d7afba67d0d", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 11, 47, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214", Pod:"coredns-674b8bbfcf-ck2dh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27485a5dde7", MAC:"3a:87:2a:13:7c:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 11:48:01.966054 containerd[1601]: 2025-10-29 11:48:01.956 [INFO][4566] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" Namespace="kube-system" Pod="coredns-674b8bbfcf-ck2dh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ck2dh-eth0" Oct 29 11:48:01.981681 systemd[1]: Started cri-containerd-645cba4305b78643a0484d40e850840ea2eb1205b731db963e64e7484a6ced1b.scope - libcontainer container 645cba4305b78643a0484d40e850840ea2eb1205b731db963e64e7484a6ced1b. Oct 29 11:48:01.993812 containerd[1601]: time="2025-10-29T11:48:01.993747231Z" level=info msg="connecting to shim cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214" address="unix:///run/containerd/s/5ae009d7860fe1c2ed1fb850f2f4f815a17734375e6856c40f3f7e89614e7e88" namespace=k8s.io protocol=ttrpc version=3 Oct 29 11:48:02.013609 containerd[1601]: time="2025-10-29T11:48:02.013563195Z" level=info msg="StartContainer for \"645cba4305b78643a0484d40e850840ea2eb1205b731db963e64e7484a6ced1b\" returns successfully" Oct 29 11:48:02.024164 systemd[1]: Started cri-containerd-cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214.scope - libcontainer container cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214. 
Oct 29 11:48:02.037657 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 11:48:02.084417 containerd[1601]: time="2025-10-29T11:48:02.084359561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ck2dh,Uid:0e165bbe-8839-4003-b237-6d7afba67d0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214\"" Oct 29 11:48:02.085274 kubelet[2763]: E1029 11:48:02.085201 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:48:02.103410 containerd[1601]: time="2025-10-29T11:48:02.103370072Z" level=info msg="CreateContainer within sandbox \"cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 11:48:02.110597 containerd[1601]: time="2025-10-29T11:48:02.110501534Z" level=info msg="Container b20b6791d45139ae6aff9075c4227929e26a769d37ba1979beb28c6d81c8bc9a: CDI devices from CRI Config.CDIDevices: []" Oct 29 11:48:02.115904 containerd[1601]: time="2025-10-29T11:48:02.115772573Z" level=info msg="CreateContainer within sandbox \"cee73f8b450522634ab8e46befb03a08332d3e0c5c51ba78b5b9c70f3d02f214\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b20b6791d45139ae6aff9075c4227929e26a769d37ba1979beb28c6d81c8bc9a\"" Oct 29 11:48:02.117017 containerd[1601]: time="2025-10-29T11:48:02.116590893Z" level=info msg="StartContainer for \"b20b6791d45139ae6aff9075c4227929e26a769d37ba1979beb28c6d81c8bc9a\"" Oct 29 11:48:02.117869 containerd[1601]: time="2025-10-29T11:48:02.117831655Z" level=info msg="connecting to shim b20b6791d45139ae6aff9075c4227929e26a769d37ba1979beb28c6d81c8bc9a" address="unix:///run/containerd/s/5ae009d7860fe1c2ed1fb850f2f4f815a17734375e6856c40f3f7e89614e7e88" protocol=ttrpc version=3 
Oct 29 11:48:02.149224 systemd[1]: Started cri-containerd-b20b6791d45139ae6aff9075c4227929e26a769d37ba1979beb28c6d81c8bc9a.scope - libcontainer container b20b6791d45139ae6aff9075c4227929e26a769d37ba1979beb28c6d81c8bc9a. Oct 29 11:48:02.184121 containerd[1601]: time="2025-10-29T11:48:02.184017408Z" level=info msg="StartContainer for \"b20b6791d45139ae6aff9075c4227929e26a769d37ba1979beb28c6d81c8bc9a\" returns successfully" Oct 29 11:48:02.635161 systemd-networkd[1504]: cali3c00acf7f88: Gained IPv6LL Oct 29 11:48:02.671895 containerd[1601]: time="2025-10-29T11:48:02.671852294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786f955c4-g4w2m,Uid:b3b7028a-b276-42fe-9fe7-b12ae54b50d3,Namespace:calico-apiserver,Attempt:0,}" Oct 29 11:48:02.766817 systemd-networkd[1504]: cali8235b6d9983: Link UP Oct 29 11:48:02.767288 systemd-networkd[1504]: cali8235b6d9983: Gained carrier Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.704 [INFO][4783] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--786f955c4--g4w2m-eth0 calico-apiserver-786f955c4- calico-apiserver b3b7028a-b276-42fe-9fe7-b12ae54b50d3 854 0 2025-10-29 11:47:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:786f955c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-786f955c4-g4w2m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8235b6d9983 [] [] }} ContainerID="8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" Namespace="calico-apiserver" Pod="calico-apiserver-786f955c4-g4w2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--786f955c4--g4w2m-" Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.705 [INFO][4783] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" Namespace="calico-apiserver" Pod="calico-apiserver-786f955c4-g4w2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--786f955c4--g4w2m-eth0" Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.726 [INFO][4797] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" HandleID="k8s-pod-network.8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" Workload="localhost-k8s-calico--apiserver--786f955c4--g4w2m-eth0" Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.726 [INFO][4797] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" HandleID="k8s-pod-network.8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" Workload="localhost-k8s-calico--apiserver--786f955c4--g4w2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dcb30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-786f955c4-g4w2m", "timestamp":"2025-10-29 11:48:02.726619164 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.726 [INFO][4797] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.726 [INFO][4797] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.726 [INFO][4797] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.736 [INFO][4797] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" host="localhost" Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.741 [INFO][4797] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.745 [INFO][4797] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.747 [INFO][4797] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.750 [INFO][4797] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.751 [INFO][4797] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" host="localhost" Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.753 [INFO][4797] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57 Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.756 [INFO][4797] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" host="localhost" Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.762 [INFO][4797] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" host="localhost" Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.762 [INFO][4797] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" host="localhost" Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.762 [INFO][4797] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 11:48:02.781442 containerd[1601]: 2025-10-29 11:48:02.762 [INFO][4797] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" HandleID="k8s-pod-network.8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" Workload="localhost-k8s-calico--apiserver--786f955c4--g4w2m-eth0" Oct 29 11:48:02.782033 containerd[1601]: 2025-10-29 11:48:02.764 [INFO][4783] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" Namespace="calico-apiserver" Pod="calico-apiserver-786f955c4-g4w2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--786f955c4--g4w2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--786f955c4--g4w2m-eth0", GenerateName:"calico-apiserver-786f955c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"b3b7028a-b276-42fe-9fe7-b12ae54b50d3", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 11, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"786f955c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-786f955c4-g4w2m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8235b6d9983", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 11:48:02.782033 containerd[1601]: 2025-10-29 11:48:02.765 [INFO][4783] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" Namespace="calico-apiserver" Pod="calico-apiserver-786f955c4-g4w2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--786f955c4--g4w2m-eth0" Oct 29 11:48:02.782033 containerd[1601]: 2025-10-29 11:48:02.765 [INFO][4783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8235b6d9983 ContainerID="8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" Namespace="calico-apiserver" Pod="calico-apiserver-786f955c4-g4w2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--786f955c4--g4w2m-eth0" Oct 29 11:48:02.782033 containerd[1601]: 2025-10-29 11:48:02.767 [INFO][4783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" Namespace="calico-apiserver" Pod="calico-apiserver-786f955c4-g4w2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--786f955c4--g4w2m-eth0" Oct 29 11:48:02.782033 containerd[1601]: 2025-10-29 11:48:02.767 [INFO][4783] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" Namespace="calico-apiserver" Pod="calico-apiserver-786f955c4-g4w2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--786f955c4--g4w2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--786f955c4--g4w2m-eth0", GenerateName:"calico-apiserver-786f955c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"b3b7028a-b276-42fe-9fe7-b12ae54b50d3", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 11, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"786f955c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57", Pod:"calico-apiserver-786f955c4-g4w2m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8235b6d9983", MAC:"16:f5:c2:a0:5b:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 11:48:02.782033 containerd[1601]: 2025-10-29 11:48:02.778 [INFO][4783] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" Namespace="calico-apiserver" Pod="calico-apiserver-786f955c4-g4w2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--786f955c4--g4w2m-eth0" Oct 29 11:48:02.808769 containerd[1601]: time="2025-10-29T11:48:02.808304162Z" level=info msg="connecting to shim 8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57" address="unix:///run/containerd/s/58e03e08b3e0e7af2c68db5413a97542038388d8ae140a46b75ca827a5f7ca8e" namespace=k8s.io protocol=ttrpc version=3 Oct 29 11:48:02.833100 systemd[1]: Started cri-containerd-8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57.scope - libcontainer container 8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57. Oct 29 11:48:02.846428 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 11:48:02.871366 containerd[1601]: time="2025-10-29T11:48:02.871256997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786f955c4-g4w2m,Uid:b3b7028a-b276-42fe-9fe7-b12ae54b50d3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8ff0c1642c65f52ebdb366396a064d5410212cbb68656a674c221c1a28da1e57\"" Oct 29 11:48:02.873276 containerd[1601]: time="2025-10-29T11:48:02.873240552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 11:48:02.882594 kubelet[2763]: E1029 11:48:02.882214 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:48:02.888622 kubelet[2763]: E1029 11:48:02.888212 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:48:02.907178 kubelet[2763]: I1029 11:48:02.906447 2763 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/coredns-674b8bbfcf-lznm8" podStartSLOduration=41.906433898 podStartE2EDuration="41.906433898s" podCreationTimestamp="2025-10-29 11:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 11:48:02.905768673 +0000 UTC m=+48.321946408" watchObservedRunningTime="2025-10-29 11:48:02.906433898 +0000 UTC m=+48.322611553" Oct 29 11:48:02.907495 kubelet[2763]: I1029 11:48:02.907457 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ck2dh" podStartSLOduration=41.907444798 podStartE2EDuration="41.907444798s" podCreationTimestamp="2025-10-29 11:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 11:48:02.894087883 +0000 UTC m=+48.310265578" watchObservedRunningTime="2025-10-29 11:48:02.907444798 +0000 UTC m=+48.323622453" Oct 29 11:48:03.150021 containerd[1601]: time="2025-10-29T11:48:03.149883669Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 11:48:03.150912 containerd[1601]: time="2025-10-29T11:48:03.150865883Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 11:48:03.150985 containerd[1601]: time="2025-10-29T11:48:03.150902287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 11:48:03.151138 kubelet[2763]: E1029 11:48:03.151088 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 11:48:03.151185 kubelet[2763]: E1029 11:48:03.151138 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 11:48:03.151315 kubelet[2763]: E1029 11:48:03.151274 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54zkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-786f955c4-g4w2m_calico-apiserver(b3b7028a-b276-42fe-9fe7-b12ae54b50d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 11:48:03.152565 kubelet[2763]: E1029 11:48:03.152516 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786f955c4-g4w2m" podUID="b3b7028a-b276-42fe-9fe7-b12ae54b50d3" Oct 29 11:48:03.275065 systemd-networkd[1504]: cali27485a5dde7: Gained IPv6LL Oct 29 11:48:03.468193 systemd-networkd[1504]: calid95f1013396: Gained IPv6LL Oct 29 11:48:03.671730 containerd[1601]: 
time="2025-10-29T11:48:03.671443659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f975cc8d8-hnv7g,Uid:571233d6-8903-4d8f-8101-eb09343bdca4,Namespace:calico-system,Attempt:0,}" Oct 29 11:48:03.815256 systemd-networkd[1504]: cali2990e40718a: Link UP Oct 29 11:48:03.815840 systemd-networkd[1504]: cali2990e40718a: Gained carrier Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.746 [INFO][4866] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7f975cc8d8--hnv7g-eth0 calico-kube-controllers-7f975cc8d8- calico-system 571233d6-8903-4d8f-8101-eb09343bdca4 853 0 2025-10-29 11:47:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f975cc8d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7f975cc8d8-hnv7g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2990e40718a [] [] }} ContainerID="f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" Namespace="calico-system" Pod="calico-kube-controllers-7f975cc8d8-hnv7g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f975cc8d8--hnv7g-" Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.747 [INFO][4866] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" Namespace="calico-system" Pod="calico-kube-controllers-7f975cc8d8-hnv7g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f975cc8d8--hnv7g-eth0" Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.771 [INFO][4881] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" HandleID="k8s-pod-network.f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" Workload="localhost-k8s-calico--kube--controllers--7f975cc8d8--hnv7g-eth0" Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.771 [INFO][4881] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" HandleID="k8s-pod-network.f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" Workload="localhost-k8s-calico--kube--controllers--7f975cc8d8--hnv7g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7f975cc8d8-hnv7g", "timestamp":"2025-10-29 11:48:03.770995237 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.771 [INFO][4881] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.771 [INFO][4881] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.771 [INFO][4881] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.784 [INFO][4881] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" host="localhost" Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.789 [INFO][4881] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.794 [INFO][4881] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.796 [INFO][4881] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.798 [INFO][4881] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.798 [INFO][4881] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" host="localhost" Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.800 [INFO][4881] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44 Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.803 [INFO][4881] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" host="localhost" Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.810 [INFO][4881] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" host="localhost" Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.810 [INFO][4881] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" host="localhost" Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.810 [INFO][4881] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 11:48:03.832746 containerd[1601]: 2025-10-29 11:48:03.810 [INFO][4881] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" HandleID="k8s-pod-network.f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" Workload="localhost-k8s-calico--kube--controllers--7f975cc8d8--hnv7g-eth0" Oct 29 11:48:03.833433 containerd[1601]: 2025-10-29 11:48:03.812 [INFO][4866] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" Namespace="calico-system" Pod="calico-kube-controllers-7f975cc8d8-hnv7g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f975cc8d8--hnv7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f975cc8d8--hnv7g-eth0", GenerateName:"calico-kube-controllers-7f975cc8d8-", Namespace:"calico-system", SelfLink:"", UID:"571233d6-8903-4d8f-8101-eb09343bdca4", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 11, 47, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f975cc8d8", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7f975cc8d8-hnv7g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2990e40718a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 11:48:03.833433 containerd[1601]: 2025-10-29 11:48:03.812 [INFO][4866] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" Namespace="calico-system" Pod="calico-kube-controllers-7f975cc8d8-hnv7g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f975cc8d8--hnv7g-eth0" Oct 29 11:48:03.833433 containerd[1601]: 2025-10-29 11:48:03.812 [INFO][4866] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2990e40718a ContainerID="f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" Namespace="calico-system" Pod="calico-kube-controllers-7f975cc8d8-hnv7g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f975cc8d8--hnv7g-eth0" Oct 29 11:48:03.833433 containerd[1601]: 2025-10-29 11:48:03.816 [INFO][4866] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" Namespace="calico-system" Pod="calico-kube-controllers-7f975cc8d8-hnv7g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f975cc8d8--hnv7g-eth0" Oct 29 11:48:03.833433 containerd[1601]: 
2025-10-29 11:48:03.818 [INFO][4866] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" Namespace="calico-system" Pod="calico-kube-controllers-7f975cc8d8-hnv7g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f975cc8d8--hnv7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f975cc8d8--hnv7g-eth0", GenerateName:"calico-kube-controllers-7f975cc8d8-", Namespace:"calico-system", SelfLink:"", UID:"571233d6-8903-4d8f-8101-eb09343bdca4", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 11, 47, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f975cc8d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44", Pod:"calico-kube-controllers-7f975cc8d8-hnv7g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2990e40718a", MAC:"32:59:05:d0:68:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 11:48:03.833433 containerd[1601]: 
2025-10-29 11:48:03.830 [INFO][4866] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" Namespace="calico-system" Pod="calico-kube-controllers-7f975cc8d8-hnv7g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f975cc8d8--hnv7g-eth0" Oct 29 11:48:03.851268 systemd-networkd[1504]: cali8235b6d9983: Gained IPv6LL Oct 29 11:48:03.866201 containerd[1601]: time="2025-10-29T11:48:03.866162672Z" level=info msg="connecting to shim f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44" address="unix:///run/containerd/s/0813ca8033e47569e3482bcebdf4dfcddef47ceb8e1c07cc8940b05c3c377434" namespace=k8s.io protocol=ttrpc version=3 Oct 29 11:48:03.890036 kubelet[2763]: E1029 11:48:03.889923 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:48:03.890576 kubelet[2763]: E1029 11:48:03.890372 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:48:03.890991 kubelet[2763]: E1029 11:48:03.890962 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786f955c4-g4w2m" podUID="b3b7028a-b276-42fe-9fe7-b12ae54b50d3" Oct 29 11:48:03.898175 systemd[1]: Started cri-containerd-f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44.scope - libcontainer container 
f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44. Oct 29 11:48:03.914083 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 11:48:03.935208 containerd[1601]: time="2025-10-29T11:48:03.935172739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f975cc8d8-hnv7g,Uid:571233d6-8903-4d8f-8101-eb09343bdca4,Namespace:calico-system,Attempt:0,} returns sandbox id \"f13f63229bd219c36b907c2bf8c02762faec065ba803f3fbad5c75d82b00ce44\"" Oct 29 11:48:03.936528 containerd[1601]: time="2025-10-29T11:48:03.936455863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 11:48:04.180926 containerd[1601]: time="2025-10-29T11:48:04.180802889Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 11:48:04.189200 containerd[1601]: time="2025-10-29T11:48:04.188532423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 11:48:04.189388 containerd[1601]: time="2025-10-29T11:48:04.189346540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 11:48:04.189610 kubelet[2763]: E1029 11:48:04.189558 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 11:48:04.189680 kubelet[2763]: E1029 11:48:04.189618 2763 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 11:48:04.190291 kubelet[2763]: E1029 11:48:04.189788 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcnw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f975cc8d8-hnv7g_calico-system(571233d6-8903-4d8f-8101-eb09343bdca4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 11:48:04.191550 kubelet[2763]: E1029 11:48:04.191484 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7f975cc8d8-hnv7g" podUID="571233d6-8903-4d8f-8101-eb09343bdca4" Oct 29 11:48:04.440103 systemd[1]: Started sshd@9-10.0.0.75:22-10.0.0.1:49324.service - OpenSSH per-connection server daemon (10.0.0.1:49324). Oct 29 11:48:04.500306 sshd[4949]: Accepted publickey for core from 10.0.0.1 port 49324 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:48:04.502009 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:48:04.505957 systemd-logind[1586]: New session 10 of user core. Oct 29 11:48:04.512113 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 29 11:48:04.625126 sshd[4952]: Connection closed by 10.0.0.1 port 49324 Oct 29 11:48:04.625667 sshd-session[4949]: pam_unix(sshd:session): session closed for user core Oct 29 11:48:04.635200 systemd[1]: sshd@9-10.0.0.75:22-10.0.0.1:49324.service: Deactivated successfully. Oct 29 11:48:04.638356 systemd[1]: session-10.scope: Deactivated successfully. Oct 29 11:48:04.639027 systemd-logind[1586]: Session 10 logged out. Waiting for processes to exit. Oct 29 11:48:04.641300 systemd[1]: Started sshd@10-10.0.0.75:22-10.0.0.1:49328.service - OpenSSH per-connection server daemon (10.0.0.1:49328). Oct 29 11:48:04.642258 systemd-logind[1586]: Removed session 10. Oct 29 11:48:04.697183 sshd[4966]: Accepted publickey for core from 10.0.0.1 port 49328 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:48:04.699261 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:48:04.703493 systemd-logind[1586]: New session 11 of user core. Oct 29 11:48:04.712116 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 29 11:48:04.859924 sshd[4969]: Connection closed by 10.0.0.1 port 49328 Oct 29 11:48:04.860935 sshd-session[4966]: pam_unix(sshd:session): session closed for user core Oct 29 11:48:04.869291 systemd[1]: sshd@10-10.0.0.75:22-10.0.0.1:49328.service: Deactivated successfully. Oct 29 11:48:04.874475 systemd[1]: session-11.scope: Deactivated successfully. Oct 29 11:48:04.878384 systemd-logind[1586]: Session 11 logged out. Waiting for processes to exit. Oct 29 11:48:04.885206 systemd[1]: Started sshd@11-10.0.0.75:22-10.0.0.1:49344.service - OpenSSH per-connection server daemon (10.0.0.1:49344). Oct 29 11:48:04.887187 systemd-logind[1586]: Removed session 11. Oct 29 11:48:04.900671 kubelet[2763]: E1029 11:48:04.899172 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:48:04.901626 kubelet[2763]: E1029 11:48:04.900616 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f975cc8d8-hnv7g" podUID="571233d6-8903-4d8f-8101-eb09343bdca4" Oct 29 11:48:04.901843 kubelet[2763]: E1029 11:48:04.901824 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 11:48:04.957668 sshd[4983]: Accepted publickey for core from 10.0.0.1 port 49344 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:48:04.958632 
sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:48:04.962387 systemd-logind[1586]: New session 12 of user core. Oct 29 11:48:04.971131 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 29 11:48:05.003118 systemd-networkd[1504]: cali2990e40718a: Gained IPv6LL Oct 29 11:48:05.073554 sshd[4990]: Connection closed by 10.0.0.1 port 49344 Oct 29 11:48:05.073888 sshd-session[4983]: pam_unix(sshd:session): session closed for user core Oct 29 11:48:05.078548 systemd[1]: sshd@11-10.0.0.75:22-10.0.0.1:49344.service: Deactivated successfully. Oct 29 11:48:05.080771 systemd[1]: session-12.scope: Deactivated successfully. Oct 29 11:48:05.081494 systemd-logind[1586]: Session 12 logged out. Waiting for processes to exit. Oct 29 11:48:05.082382 systemd-logind[1586]: Removed session 12. Oct 29 11:48:05.902750 kubelet[2763]: E1029 11:48:05.902030 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f975cc8d8-hnv7g" podUID="571233d6-8903-4d8f-8101-eb09343bdca4" Oct 29 11:48:09.673151 containerd[1601]: time="2025-10-29T11:48:09.673010640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 11:48:09.883282 containerd[1601]: time="2025-10-29T11:48:09.883243658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 11:48:09.884132 containerd[1601]: time="2025-10-29T11:48:09.884097533Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 11:48:09.884177 containerd[1601]: time="2025-10-29T11:48:09.884166219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 11:48:09.884343 kubelet[2763]: E1029 11:48:09.884306 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 11:48:09.884606 kubelet[2763]: E1029 11:48:09.884355 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 11:48:09.884802 kubelet[2763]: E1029 11:48:09.884480 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:63d9a4c1b08c40218229a9d7e57cb0f1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mtv2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-98cbc6c6c-m8wcp_calico-system(fb600d11-592b-4097-9fd7-ec12a58553d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 11:48:09.887548 containerd[1601]: time="2025-10-29T11:48:09.887429907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 
11:48:10.084971 systemd[1]: Started sshd@12-10.0.0.75:22-10.0.0.1:50966.service - OpenSSH per-connection server daemon (10.0.0.1:50966). Oct 29 11:48:10.139057 containerd[1601]: time="2025-10-29T11:48:10.139005487Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 11:48:10.141215 containerd[1601]: time="2025-10-29T11:48:10.141145993Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 11:48:10.141370 containerd[1601]: time="2025-10-29T11:48:10.141208278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 11:48:10.141446 kubelet[2763]: E1029 11:48:10.141364 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 11:48:10.141446 kubelet[2763]: E1029 11:48:10.141409 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 11:48:10.141565 kubelet[2763]: E1029 11:48:10.141522 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mtv2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-98cbc6c6c-m8wcp_calico-system(fb600d11-592b-4097-9fd7-ec12a58553d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 11:48:10.141819 sshd[5010]: Accepted publickey for core from 10.0.0.1 port 50966 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:48:10.143370 kubelet[2763]: E1029 11:48:10.143332 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-98cbc6c6c-m8wcp" podUID="fb600d11-592b-4097-9fd7-ec12a58553d8" Oct 29 11:48:10.144564 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:48:10.150366 systemd-logind[1586]: New session 13 of user core. Oct 29 11:48:10.159098 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 29 11:48:10.232722 sshd[5013]: Connection closed by 10.0.0.1 port 50966 Oct 29 11:48:10.233167 sshd-session[5010]: pam_unix(sshd:session): session closed for user core Oct 29 11:48:10.236826 systemd[1]: sshd@12-10.0.0.75:22-10.0.0.1:50966.service: Deactivated successfully. Oct 29 11:48:10.239823 systemd[1]: session-13.scope: Deactivated successfully. Oct 29 11:48:10.240931 systemd-logind[1586]: Session 13 logged out. Waiting for processes to exit. 
Oct 29 11:48:10.242429 systemd-logind[1586]: Removed session 13. Oct 29 11:48:12.674151 containerd[1601]: time="2025-10-29T11:48:12.673916500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 11:48:12.904523 containerd[1601]: time="2025-10-29T11:48:12.904479487Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 11:48:12.905420 containerd[1601]: time="2025-10-29T11:48:12.905381683Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 11:48:12.905495 containerd[1601]: time="2025-10-29T11:48:12.905456050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 11:48:12.905639 kubelet[2763]: E1029 11:48:12.905604 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 11:48:12.905921 kubelet[2763]: E1029 11:48:12.905652 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 11:48:12.905921 kubelet[2763]: E1029 11:48:12.905790 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9gpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-786f955c4-cnjbr_calico-apiserver(d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 11:48:12.907228 kubelet[2763]: E1029 11:48:12.907181 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786f955c4-cnjbr" podUID="d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7" Oct 29 11:48:15.253612 systemd[1]: Started sshd@13-10.0.0.75:22-10.0.0.1:50972.service - OpenSSH per-connection server daemon (10.0.0.1:50972). 
Oct 29 11:48:15.318292 sshd[5036]: Accepted publickey for core from 10.0.0.1 port 50972 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:48:15.320036 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:48:15.324358 systemd-logind[1586]: New session 14 of user core. Oct 29 11:48:15.338128 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 29 11:48:15.499257 sshd[5039]: Connection closed by 10.0.0.1 port 50972 Oct 29 11:48:15.499755 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Oct 29 11:48:15.505715 systemd-logind[1586]: Session 14 logged out. Waiting for processes to exit. Oct 29 11:48:15.506409 systemd[1]: sshd@13-10.0.0.75:22-10.0.0.1:50972.service: Deactivated successfully. Oct 29 11:48:15.510254 systemd[1]: session-14.scope: Deactivated successfully. Oct 29 11:48:15.512836 systemd-logind[1586]: Removed session 14. Oct 29 11:48:15.672746 containerd[1601]: time="2025-10-29T11:48:15.672694076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 11:48:15.916834 containerd[1601]: time="2025-10-29T11:48:15.916781884Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 11:48:15.919822 containerd[1601]: time="2025-10-29T11:48:15.919755008Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 11:48:15.919961 containerd[1601]: time="2025-10-29T11:48:15.919841735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 11:48:15.920213 kubelet[2763]: E1029 11:48:15.920106 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 11:48:15.920526 kubelet[2763]: E1029 11:48:15.920220 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 11:48:15.921020 kubelet[2763]: E1029 11:48:15.920525 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4st4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{C
apabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tpbbb_calico-system(19d4b9de-b24e-493e-a2fd-91157fcb3c0a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 11:48:15.930105 containerd[1601]: time="2025-10-29T11:48:15.930027412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 11:48:16.174688 containerd[1601]: time="2025-10-29T11:48:16.174561522Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 11:48:16.175666 containerd[1601]: time="2025-10-29T11:48:16.175621968Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 11:48:16.175745 containerd[1601]: time="2025-10-29T11:48:16.175710416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 11:48:16.175905 kubelet[2763]: E1029 11:48:16.175865 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 11:48:16.175987 kubelet[2763]: E1029 11:48:16.175915 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 11:48:16.176105 kubelet[2763]: E1029 11:48:16.176065 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4st4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tpbbb_calico-system(19d4b9de-b24e-493e-a2fd-91157fcb3c0a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 11:48:16.177928 kubelet[2763]: E1029 11:48:16.177886 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tpbbb" podUID="19d4b9de-b24e-493e-a2fd-91157fcb3c0a" Oct 29 11:48:16.672331 containerd[1601]: time="2025-10-29T11:48:16.672231772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 11:48:16.908981 containerd[1601]: time="2025-10-29T11:48:16.908883225Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 11:48:16.910122 containerd[1601]: time="2025-10-29T11:48:16.909820102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 11:48:16.910122 containerd[1601]: time="2025-10-29T11:48:16.909890507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 11:48:16.910185 kubelet[2763]: E1029 11:48:16.910056 2763 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 11:48:16.910185 kubelet[2763]: E1029 11:48:16.910107 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 11:48:16.910421 containerd[1601]: time="2025-10-29T11:48:16.910393708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 11:48:16.910913 kubelet[2763]: E1029 11:48:16.910848 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wjwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ml2tk_calico-system(7ef53ca0-e6af-4f13-8298-54b41e79363b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 11:48:16.912087 kubelet[2763]: E1029 11:48:16.912024 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ml2tk" podUID="7ef53ca0-e6af-4f13-8298-54b41e79363b" Oct 29 11:48:17.153713 containerd[1601]: time="2025-10-29T11:48:17.153639707Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 11:48:17.154853 containerd[1601]: time="2025-10-29T11:48:17.154813322Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 11:48:17.154934 containerd[1601]: time="2025-10-29T11:48:17.154901289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 11:48:17.155111 kubelet[2763]: E1029 11:48:17.155061 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 11:48:17.155358 kubelet[2763]: E1029 11:48:17.155112 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 11:48:17.155358 kubelet[2763]: E1029 11:48:17.155251 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcnw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f975cc8d8-hnv7g_calico-system(571233d6-8903-4d8f-8101-eb09343bdca4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 11:48:17.156739 kubelet[2763]: E1029 11:48:17.156705 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f975cc8d8-hnv7g" podUID="571233d6-8903-4d8f-8101-eb09343bdca4" Oct 29 11:48:17.671746 containerd[1601]: time="2025-10-29T11:48:17.671706681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 11:48:18.031399 containerd[1601]: 
time="2025-10-29T11:48:18.031274053Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 11:48:18.032381 containerd[1601]: time="2025-10-29T11:48:18.032328297Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 11:48:18.032467 containerd[1601]: time="2025-10-29T11:48:18.032407703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 11:48:18.032648 kubelet[2763]: E1029 11:48:18.032581 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 11:48:18.032648 kubelet[2763]: E1029 11:48:18.032633 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 11:48:18.033298 kubelet[2763]: E1029 11:48:18.033238 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54zkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-786f955c4-g4w2m_calico-apiserver(b3b7028a-b276-42fe-9fe7-b12ae54b50d3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 11:48:18.035158 kubelet[2763]: E1029 11:48:18.035112 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786f955c4-g4w2m" podUID="b3b7028a-b276-42fe-9fe7-b12ae54b50d3" Oct 29 11:48:20.512225 systemd[1]: Started sshd@14-10.0.0.75:22-10.0.0.1:37252.service - OpenSSH per-connection server daemon (10.0.0.1:37252). Oct 29 11:48:20.579044 sshd[5052]: Accepted publickey for core from 10.0.0.1 port 37252 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:48:20.580656 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:48:20.584998 systemd-logind[1586]: New session 15 of user core. Oct 29 11:48:20.594148 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 29 11:48:20.691670 sshd[5055]: Connection closed by 10.0.0.1 port 37252 Oct 29 11:48:20.692151 sshd-session[5052]: pam_unix(sshd:session): session closed for user core Oct 29 11:48:20.696684 systemd[1]: sshd@14-10.0.0.75:22-10.0.0.1:37252.service: Deactivated successfully. Oct 29 11:48:20.698783 systemd[1]: session-15.scope: Deactivated successfully. Oct 29 11:48:20.700156 systemd-logind[1586]: Session 15 logged out. Waiting for processes to exit. Oct 29 11:48:20.701398 systemd-logind[1586]: Removed session 15. 
Oct 29 11:48:22.673456 kubelet[2763]: E1029 11:48:22.673408 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-98cbc6c6c-m8wcp" podUID="fb600d11-592b-4097-9fd7-ec12a58553d8" Oct 29 11:48:24.672427 kubelet[2763]: E1029 11:48:24.672142 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786f955c4-cnjbr" podUID="d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7" Oct 29 11:48:25.708714 systemd[1]: Started sshd@15-10.0.0.75:22-10.0.0.1:37262.service - OpenSSH per-connection server daemon (10.0.0.1:37262). 
Oct 29 11:48:25.764585 sshd[5075]: Accepted publickey for core from 10.0.0.1 port 37262 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:48:25.766058 sshd-session[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:48:25.770006 systemd-logind[1586]: New session 16 of user core. Oct 29 11:48:25.779585 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 29 11:48:25.883470 sshd[5078]: Connection closed by 10.0.0.1 port 37262 Oct 29 11:48:25.883907 sshd-session[5075]: pam_unix(sshd:session): session closed for user core Oct 29 11:48:25.895567 systemd[1]: sshd@15-10.0.0.75:22-10.0.0.1:37262.service: Deactivated successfully. Oct 29 11:48:25.898717 systemd[1]: session-16.scope: Deactivated successfully. Oct 29 11:48:25.899819 systemd-logind[1586]: Session 16 logged out. Waiting for processes to exit. Oct 29 11:48:25.902622 systemd[1]: Started sshd@16-10.0.0.75:22-10.0.0.1:37272.service - OpenSSH per-connection server daemon (10.0.0.1:37272). Oct 29 11:48:25.904530 systemd-logind[1586]: Removed session 16. Oct 29 11:48:25.966552 sshd[5091]: Accepted publickey for core from 10.0.0.1 port 37272 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:48:25.968879 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:48:25.975044 systemd-logind[1586]: New session 17 of user core. Oct 29 11:48:25.979305 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 29 11:48:26.139297 sshd[5094]: Connection closed by 10.0.0.1 port 37272 Oct 29 11:48:26.138811 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Oct 29 11:48:26.146310 systemd[1]: sshd@16-10.0.0.75:22-10.0.0.1:37272.service: Deactivated successfully. Oct 29 11:48:26.148259 systemd[1]: session-17.scope: Deactivated successfully. Oct 29 11:48:26.149117 systemd-logind[1586]: Session 17 logged out. Waiting for processes to exit. 
Oct 29 11:48:26.152420 systemd[1]: Started sshd@17-10.0.0.75:22-10.0.0.1:37280.service - OpenSSH per-connection server daemon (10.0.0.1:37280). Oct 29 11:48:26.153046 systemd-logind[1586]: Removed session 17. Oct 29 11:48:26.209652 sshd[5106]: Accepted publickey for core from 10.0.0.1 port 37280 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:48:26.210772 sshd-session[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:48:26.215875 systemd-logind[1586]: New session 18 of user core. Oct 29 11:48:26.223110 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 29 11:48:26.674150 kubelet[2763]: E1029 11:48:26.674108 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tpbbb" podUID="19d4b9de-b24e-493e-a2fd-91157fcb3c0a" Oct 29 11:48:26.785084 sshd[5109]: Connection closed by 10.0.0.1 port 37280 Oct 29 11:48:26.785229 sshd-session[5106]: pam_unix(sshd:session): session closed for user core Oct 29 11:48:26.794500 systemd[1]: sshd@17-10.0.0.75:22-10.0.0.1:37280.service: Deactivated successfully. 
Oct 29 11:48:26.798783 systemd[1]: session-18.scope: Deactivated successfully. Oct 29 11:48:26.805326 systemd-logind[1586]: Session 18 logged out. Waiting for processes to exit. Oct 29 11:48:26.807314 systemd[1]: Started sshd@18-10.0.0.75:22-10.0.0.1:37282.service - OpenSSH per-connection server daemon (10.0.0.1:37282). Oct 29 11:48:26.809301 systemd-logind[1586]: Removed session 18. Oct 29 11:48:26.858783 sshd[5127]: Accepted publickey for core from 10.0.0.1 port 37282 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:48:26.859936 sshd-session[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:48:26.863703 systemd-logind[1586]: New session 19 of user core. Oct 29 11:48:26.870103 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 29 11:48:27.085610 sshd[5130]: Connection closed by 10.0.0.1 port 37282 Oct 29 11:48:27.082802 sshd-session[5127]: pam_unix(sshd:session): session closed for user core Oct 29 11:48:27.096196 systemd[1]: sshd@18-10.0.0.75:22-10.0.0.1:37282.service: Deactivated successfully. Oct 29 11:48:27.100252 systemd[1]: session-19.scope: Deactivated successfully. Oct 29 11:48:27.101322 systemd-logind[1586]: Session 19 logged out. Waiting for processes to exit. Oct 29 11:48:27.106314 systemd[1]: Started sshd@19-10.0.0.75:22-10.0.0.1:37290.service - OpenSSH per-connection server daemon (10.0.0.1:37290). Oct 29 11:48:27.107563 systemd-logind[1586]: Removed session 19. Oct 29 11:48:27.163978 sshd[5142]: Accepted publickey for core from 10.0.0.1 port 37290 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8 Oct 29 11:48:27.165500 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 11:48:27.170848 systemd-logind[1586]: New session 20 of user core. Oct 29 11:48:27.180127 systemd[1]: Started session-20.scope - Session 20 of User core. 
Oct 29 11:48:27.276670 sshd[5145]: Connection closed by 10.0.0.1 port 37290
Oct 29 11:48:27.277086 sshd-session[5142]: pam_unix(sshd:session): session closed for user core
Oct 29 11:48:27.282832 systemd[1]: sshd@19-10.0.0.75:22-10.0.0.1:37290.service: Deactivated successfully.
Oct 29 11:48:27.285482 systemd[1]: session-20.scope: Deactivated successfully.
Oct 29 11:48:27.286327 systemd-logind[1586]: Session 20 logged out. Waiting for processes to exit.
Oct 29 11:48:27.288436 systemd-logind[1586]: Removed session 20.
Oct 29 11:48:27.671414 kubelet[2763]: E1029 11:48:27.671059 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 11:48:27.674973 kubelet[2763]: E1029 11:48:27.674203 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ml2tk" podUID="7ef53ca0-e6af-4f13-8298-54b41e79363b"
Oct 29 11:48:29.671297 kubelet[2763]: E1029 11:48:29.671251 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 11:48:29.672261 kubelet[2763]: E1029 11:48:29.672213 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f975cc8d8-hnv7g" podUID="571233d6-8903-4d8f-8101-eb09343bdca4"
Oct 29 11:48:30.526527 containerd[1601]: time="2025-10-29T11:48:30.526482000Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7891fac55aba6b890cf8a81e9a5a724ecf8309b8790fc6abe816e18762b8b70\" id:\"306e0cac5144286819986c8df2a02d7af6e380ff559980dcff91c03ba9376e5c\" pid:5171 exited_at:{seconds:1761738510 nanos:526183893}"
Oct 29 11:48:30.671070 kubelet[2763]: E1029 11:48:30.671028 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 11:48:30.672136 kubelet[2763]: E1029 11:48:30.672064 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786f955c4-g4w2m" podUID="b3b7028a-b276-42fe-9fe7-b12ae54b50d3"
Oct 29 11:48:32.291687 systemd[1]: Started sshd@20-10.0.0.75:22-10.0.0.1:54308.service - OpenSSH per-connection server daemon (10.0.0.1:54308).
Oct 29 11:48:32.381372 sshd[5190]: Accepted publickey for core from 10.0.0.1 port 54308 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8
Oct 29 11:48:32.383166 sshd-session[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 11:48:32.387176 systemd-logind[1586]: New session 21 of user core.
Oct 29 11:48:32.398162 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 29 11:48:32.514532 sshd[5193]: Connection closed by 10.0.0.1 port 54308
Oct 29 11:48:32.515070 sshd-session[5190]: pam_unix(sshd:session): session closed for user core
Oct 29 11:48:32.518795 systemd[1]: sshd@20-10.0.0.75:22-10.0.0.1:54308.service: Deactivated successfully.
Oct 29 11:48:32.522653 systemd[1]: session-21.scope: Deactivated successfully.
Oct 29 11:48:32.523768 systemd-logind[1586]: Session 21 logged out. Waiting for processes to exit.
Oct 29 11:48:32.525074 systemd-logind[1586]: Removed session 21.
Oct 29 11:48:36.674089 containerd[1601]: time="2025-10-29T11:48:36.674024848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Oct 29 11:48:36.862049 containerd[1601]: time="2025-10-29T11:48:36.861929602Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 11:48:36.863430 containerd[1601]: time="2025-10-29T11:48:36.863390641Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Oct 29 11:48:36.863524 containerd[1601]: time="2025-10-29T11:48:36.863442799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Oct 29 11:48:36.863606 kubelet[2763]: E1029 11:48:36.863549 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 29 11:48:36.863606 kubelet[2763]: E1029 11:48:36.863592 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 29 11:48:36.864729 kubelet[2763]: E1029 11:48:36.863777 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:63d9a4c1b08c40218229a9d7e57cb0f1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mtv2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-98cbc6c6c-m8wcp_calico-system(fb600d11-592b-4097-9fd7-ec12a58553d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Oct 29 11:48:36.864801 containerd[1601]: time="2025-10-29T11:48:36.864112460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 29 11:48:37.067332 containerd[1601]: time="2025-10-29T11:48:37.067288150Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 11:48:37.069287 containerd[1601]: time="2025-10-29T11:48:37.069236900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 29 11:48:37.069381 containerd[1601]: time="2025-10-29T11:48:37.069314938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 29 11:48:37.069626 kubelet[2763]: E1029 11:48:37.069465 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 29 11:48:37.069626 kubelet[2763]: E1029 11:48:37.069516 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 29 11:48:37.070144 kubelet[2763]: E1029 11:48:37.069734 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9gpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-786f955c4-cnjbr_calico-apiserver(d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 29 11:48:37.070306 containerd[1601]: time="2025-10-29T11:48:37.069907723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Oct 29 11:48:37.070858 kubelet[2763]: E1029 11:48:37.070819 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786f955c4-cnjbr" podUID="d0c9cc6e-25b8-45f8-aafd-601ef8c53fc7"
Oct 29 11:48:37.323959 containerd[1601]: time="2025-10-29T11:48:37.323820411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 11:48:37.326430 containerd[1601]: time="2025-10-29T11:48:37.326373785Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Oct 29 11:48:37.326505 containerd[1601]: time="2025-10-29T11:48:37.326459743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Oct 29 11:48:37.327085 kubelet[2763]: E1029 11:48:37.326754 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 29 11:48:37.327085 kubelet[2763]: E1029 11:48:37.326800 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 29 11:48:37.327258 kubelet[2763]: E1029 11:48:37.326919 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mtv2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-98cbc6c6c-m8wcp_calico-system(fb600d11-592b-4097-9fd7-ec12a58553d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 29 11:48:37.328427 kubelet[2763]: E1029 11:48:37.328342 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-98cbc6c6c-m8wcp" podUID="fb600d11-592b-4097-9fd7-ec12a58553d8"
Oct 29 11:48:37.527771 systemd[1]: Started sshd@21-10.0.0.75:22-10.0.0.1:54312.service - OpenSSH per-connection server daemon (10.0.0.1:54312).
Oct 29 11:48:37.585931 sshd[5214]: Accepted publickey for core from 10.0.0.1 port 54312 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8
Oct 29 11:48:37.587558 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 11:48:37.592480 systemd-logind[1586]: New session 22 of user core.
Oct 29 11:48:37.600114 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 29 11:48:37.743884 sshd[5217]: Connection closed by 10.0.0.1 port 54312
Oct 29 11:48:37.744536 sshd-session[5214]: pam_unix(sshd:session): session closed for user core
Oct 29 11:48:37.750008 systemd[1]: sshd@21-10.0.0.75:22-10.0.0.1:54312.service: Deactivated successfully.
Oct 29 11:48:37.752551 systemd[1]: session-22.scope: Deactivated successfully.
Oct 29 11:48:37.753606 systemd-logind[1586]: Session 22 logged out. Waiting for processes to exit.
Oct 29 11:48:37.754621 systemd-logind[1586]: Removed session 22.
Oct 29 11:48:39.672370 containerd[1601]: time="2025-10-29T11:48:39.672153805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 29 11:48:39.895095 containerd[1601]: time="2025-10-29T11:48:39.895042013Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 11:48:39.896058 containerd[1601]: time="2025-10-29T11:48:39.896021192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Oct 29 11:48:39.896129 containerd[1601]: time="2025-10-29T11:48:39.896082311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Oct 29 11:48:39.896284 kubelet[2763]: E1029 11:48:39.896222 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 29 11:48:39.896579 kubelet[2763]: E1029 11:48:39.896296 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 29 11:48:39.896579 kubelet[2763]: E1029 11:48:39.896433 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wjwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ml2tk_calico-system(7ef53ca0-e6af-4f13-8298-54b41e79363b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 29 11:48:39.897856 kubelet[2763]: E1029 11:48:39.897793 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ml2tk" podUID="7ef53ca0-e6af-4f13-8298-54b41e79363b"
Oct 29 11:48:41.671780 containerd[1601]: time="2025-10-29T11:48:41.671619247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 29 11:48:41.891367 containerd[1601]: time="2025-10-29T11:48:41.891280400Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 11:48:41.892898 containerd[1601]: time="2025-10-29T11:48:41.892859734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 29 11:48:41.893011 containerd[1601]: time="2025-10-29T11:48:41.892986532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Oct 29 11:48:41.893172 kubelet[2763]: E1029 11:48:41.893138 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 29 11:48:41.893420 kubelet[2763]: E1029 11:48:41.893185 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 29 11:48:41.893420 kubelet[2763]: E1029 11:48:41.893296 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4st4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tpbbb_calico-system(19d4b9de-b24e-493e-a2fd-91157fcb3c0a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 29 11:48:41.895746 containerd[1601]: time="2025-10-29T11:48:41.895705567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 29 11:48:42.101611 containerd[1601]: time="2025-10-29T11:48:42.101447447Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 11:48:42.102389 containerd[1601]: time="2025-10-29T11:48:42.102288195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 29 11:48:42.102389 containerd[1601]: time="2025-10-29T11:48:42.102349514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 29 11:48:42.102554 kubelet[2763]: E1029 11:48:42.102512 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 29 11:48:42.102595 kubelet[2763]: E1029 11:48:42.102562 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 29 11:48:42.102747 kubelet[2763]: E1029 11:48:42.102700 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4st4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tpbbb_calico-system(19d4b9de-b24e-493e-a2fd-91157fcb3c0a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 29 11:48:42.104186 kubelet[2763]: E1029 11:48:42.104143 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tpbbb" podUID="19d4b9de-b24e-493e-a2fd-91157fcb3c0a"
Oct 29 11:48:42.674574 containerd[1601]: time="2025-10-29T11:48:42.673892328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Oct 29 11:48:42.755176 systemd[1]: Started sshd@22-10.0.0.75:22-10.0.0.1:50142.service - OpenSSH per-connection server daemon (10.0.0.1:50142).
Oct 29 11:48:42.816525 sshd[5231]: Accepted publickey for core from 10.0.0.1 port 50142 ssh2: RSA SHA256:+91isbynmBIjf6V6jkIkZf2tk+egrDOc6wdtdos75g8
Oct 29 11:48:42.817906 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 11:48:42.821654 systemd-logind[1586]: New session 23 of user core.
Oct 29 11:48:42.829105 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 29 11:48:42.910297 containerd[1601]: time="2025-10-29T11:48:42.910250679Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 11:48:42.911276 containerd[1601]: time="2025-10-29T11:48:42.911238305Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 11:48:42.911350 containerd[1601]: time="2025-10-29T11:48:42.911299904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 11:48:42.911582 kubelet[2763]: E1029 11:48:42.911436 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 11:48:42.911582 kubelet[2763]: E1029 11:48:42.911484 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 11:48:42.911884 kubelet[2763]: E1029 11:48:42.911630 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcnw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f975cc8d8-hnv7g_calico-system(571233d6-8903-4d8f-8101-eb09343bdca4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 11:48:42.913035 kubelet[2763]: E1029 11:48:42.912875 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f975cc8d8-hnv7g" podUID="571233d6-8903-4d8f-8101-eb09343bdca4" Oct 29 11:48:42.978118 sshd[5234]: Connection closed by 10.0.0.1 port 50142 Oct 29 11:48:42.979167 sshd-session[5231]: pam_unix(sshd:session): session closed for user core Oct 29 
11:48:42.983349 systemd[1]: sshd@22-10.0.0.75:22-10.0.0.1:50142.service: Deactivated successfully. Oct 29 11:48:42.986504 systemd[1]: session-23.scope: Deactivated successfully. Oct 29 11:48:42.987372 systemd-logind[1586]: Session 23 logged out. Waiting for processes to exit. Oct 29 11:48:42.989455 systemd-logind[1586]: Removed session 23. Oct 29 11:48:43.671976 containerd[1601]: time="2025-10-29T11:48:43.671929070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 11:48:43.878969 containerd[1601]: time="2025-10-29T11:48:43.878901234Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 11:48:43.879986 containerd[1601]: time="2025-10-29T11:48:43.879886822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 11:48:43.880217 containerd[1601]: time="2025-10-29T11:48:43.879929582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 11:48:43.880986 kubelet[2763]: E1029 11:48:43.880413 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 11:48:43.880986 kubelet[2763]: E1029 11:48:43.880457 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 11:48:43.880986 kubelet[2763]: E1029 11:48:43.880735 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54zkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-786f955c4-g4w2m_calico-apiserver(b3b7028a-b276-42fe-9fe7-b12ae54b50d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 11:48:43.882353 kubelet[2763]: E1029 11:48:43.882301 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786f955c4-g4w2m" podUID="b3b7028a-b276-42fe-9fe7-b12ae54b50d3"