Nov 5 14:58:37.345541 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 5 14:58:37.345566 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Wed Nov 5 13:42:06 -00 2025 Nov 5 14:58:37.345575 kernel: KASLR enabled Nov 5 14:58:37.345581 kernel: efi: EFI v2.7 by EDK II Nov 5 14:58:37.345587 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Nov 5 14:58:37.345593 kernel: random: crng init done Nov 5 14:58:37.345600 kernel: secureboot: Secure boot disabled Nov 5 14:58:37.345606 kernel: ACPI: Early table checksum verification disabled Nov 5 14:58:37.345615 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Nov 5 14:58:37.345621 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Nov 5 14:58:37.345627 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:37.345633 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:37.345639 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:37.345645 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:37.345654 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:37.345660 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:37.345667 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:37.345673 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:37.345680 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:37.345686 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Nov 5 
14:58:37.345693 kernel: ACPI: Use ACPI SPCR as default console: No Nov 5 14:58:37.345699 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Nov 5 14:58:37.345707 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Nov 5 14:58:37.345714 kernel: Zone ranges: Nov 5 14:58:37.345720 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Nov 5 14:58:37.345726 kernel: DMA32 empty Nov 5 14:58:37.345733 kernel: Normal empty Nov 5 14:58:37.345739 kernel: Device empty Nov 5 14:58:37.345745 kernel: Movable zone start for each node Nov 5 14:58:37.345752 kernel: Early memory node ranges Nov 5 14:58:37.345758 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Nov 5 14:58:37.345765 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Nov 5 14:58:37.345771 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Nov 5 14:58:37.345777 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Nov 5 14:58:37.345785 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Nov 5 14:58:37.345791 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Nov 5 14:58:37.345798 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Nov 5 14:58:37.345804 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Nov 5 14:58:37.345811 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Nov 5 14:58:37.345817 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Nov 5 14:58:37.345827 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Nov 5 14:58:37.345834 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Nov 5 14:58:37.345841 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Nov 5 14:58:37.345848 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Nov 5 14:58:37.345854 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Nov 5 14:58:37.345861 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Nov 5 14:58:37.345868 kernel: psci: probing for conduit 
method from ACPI. Nov 5 14:58:37.345875 kernel: psci: PSCIv1.1 detected in firmware. Nov 5 14:58:37.345883 kernel: psci: Using standard PSCI v0.2 function IDs Nov 5 14:58:37.345890 kernel: psci: Trusted OS migration not required Nov 5 14:58:37.345896 kernel: psci: SMC Calling Convention v1.1 Nov 5 14:58:37.345903 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 5 14:58:37.345910 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Nov 5 14:58:37.345917 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Nov 5 14:58:37.345924 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Nov 5 14:58:37.345931 kernel: Detected PIPT I-cache on CPU0 Nov 5 14:58:37.345938 kernel: CPU features: detected: GIC system register CPU interface Nov 5 14:58:37.345945 kernel: CPU features: detected: Spectre-v4 Nov 5 14:58:37.345952 kernel: CPU features: detected: Spectre-BHB Nov 5 14:58:37.345960 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 5 14:58:37.345966 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 5 14:58:37.345973 kernel: CPU features: detected: ARM erratum 1418040 Nov 5 14:58:37.345980 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 5 14:58:37.345987 kernel: alternatives: applying boot alternatives Nov 5 14:58:37.345994 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=15758474ef4cace68fb389c1b75e821ab8f30d9b752a28429e0459793723ea7b Nov 5 14:58:37.346002 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 5 14:58:37.346009 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 5 14:58:37.346015 kernel: Fallback order for Node 0: 0 Nov 5 14:58:37.346022 
kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Nov 5 14:58:37.346031 kernel: Policy zone: DMA Nov 5 14:58:37.346038 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 5 14:58:37.346044 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Nov 5 14:58:37.346051 kernel: software IO TLB: area num 4. Nov 5 14:58:37.346074 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Nov 5 14:58:37.346081 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Nov 5 14:58:37.346088 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 5 14:58:37.346094 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 5 14:58:37.346102 kernel: rcu: RCU event tracing is enabled. Nov 5 14:58:37.346109 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 5 14:58:37.346116 kernel: Trampoline variant of Tasks RCU enabled. Nov 5 14:58:37.346124 kernel: Tracing variant of Tasks RCU enabled. Nov 5 14:58:37.346131 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 5 14:58:37.346138 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 5 14:58:37.346145 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 5 14:58:37.346152 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Nov 5 14:58:37.346159 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 5 14:58:37.346166 kernel: GICv3: 256 SPIs implemented Nov 5 14:58:37.346173 kernel: GICv3: 0 Extended SPIs implemented Nov 5 14:58:37.346180 kernel: Root IRQ handler: gic_handle_irq Nov 5 14:58:37.346187 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Nov 5 14:58:37.346194 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Nov 5 14:58:37.346229 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 5 14:58:37.346237 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 5 14:58:37.346244 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Nov 5 14:58:37.346251 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Nov 5 14:58:37.346258 kernel: GICv3: using LPI property table @0x0000000040130000 Nov 5 14:58:37.346265 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Nov 5 14:58:37.346272 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 5 14:58:37.346279 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 5 14:58:37.346286 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 5 14:58:37.346293 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 5 14:58:37.346300 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 5 14:58:37.346309 kernel: arm-pv: using stolen time PV Nov 5 14:58:37.346317 kernel: Console: colour dummy device 80x25 Nov 5 14:58:37.346324 kernel: ACPI: Core revision 20240827 Nov 5 14:58:37.346332 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Nov 5 14:58:37.346339 kernel: pid_max: default: 32768 minimum: 301 Nov 5 14:58:37.346346 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 5 14:58:37.346353 kernel: landlock: Up and running. Nov 5 14:58:37.346360 kernel: SELinux: Initializing. Nov 5 14:58:37.346369 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 5 14:58:37.346376 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 5 14:58:37.346384 kernel: rcu: Hierarchical SRCU implementation. Nov 5 14:58:37.346391 kernel: rcu: Max phase no-delay instances is 400. Nov 5 14:58:37.346398 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 5 14:58:37.346406 kernel: Remapping and enabling EFI services. Nov 5 14:58:37.346413 kernel: smp: Bringing up secondary CPUs ... Nov 5 14:58:37.346421 kernel: Detected PIPT I-cache on CPU1 Nov 5 14:58:37.346433 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 5 14:58:37.346442 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Nov 5 14:58:37.346449 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 5 14:58:37.346457 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 5 14:58:37.346464 kernel: Detected PIPT I-cache on CPU2 Nov 5 14:58:37.346472 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Nov 5 14:58:37.346481 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Nov 5 14:58:37.346488 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 5 14:58:37.346496 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Nov 5 14:58:37.346504 kernel: Detected PIPT I-cache on CPU3 Nov 5 14:58:37.346512 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Nov 5 14:58:37.346519 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Nov 5 
14:58:37.346527 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 5 14:58:37.346535 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Nov 5 14:58:37.346543 kernel: smp: Brought up 1 node, 4 CPUs Nov 5 14:58:37.346550 kernel: SMP: Total of 4 processors activated. Nov 5 14:58:37.346558 kernel: CPU: All CPU(s) started at EL1 Nov 5 14:58:37.346565 kernel: CPU features: detected: 32-bit EL0 Support Nov 5 14:58:37.346573 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 5 14:58:37.346581 kernel: CPU features: detected: Common not Private translations Nov 5 14:58:37.346589 kernel: CPU features: detected: CRC32 instructions Nov 5 14:58:37.346597 kernel: CPU features: detected: Enhanced Virtualization Traps Nov 5 14:58:37.346604 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 5 14:58:37.346612 kernel: CPU features: detected: LSE atomic instructions Nov 5 14:58:37.346620 kernel: CPU features: detected: Privileged Access Never Nov 5 14:58:37.346627 kernel: CPU features: detected: RAS Extension Support Nov 5 14:58:37.346634 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 5 14:58:37.346642 kernel: alternatives: applying system-wide alternatives Nov 5 14:58:37.346651 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Nov 5 14:58:37.346659 kernel: Memory: 2450400K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12992K init, 1038K bss, 99552K reserved, 16384K cma-reserved) Nov 5 14:58:37.346666 kernel: devtmpfs: initialized Nov 5 14:58:37.346674 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 5 14:58:37.346682 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 5 14:58:37.346689 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 5 14:58:37.346697 kernel: 0 pages in range for non-PLT usage Nov 5 14:58:37.346705 
kernel: 515056 pages in range for PLT usage Nov 5 14:58:37.346713 kernel: pinctrl core: initialized pinctrl subsystem Nov 5 14:58:37.346720 kernel: SMBIOS 3.0.0 present. Nov 5 14:58:37.346728 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Nov 5 14:58:37.346736 kernel: DMI: Memory slots populated: 1/1 Nov 5 14:58:37.346743 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 5 14:58:37.346751 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 5 14:58:37.346760 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 5 14:58:37.346768 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 5 14:58:37.346775 kernel: audit: initializing netlink subsys (disabled) Nov 5 14:58:37.346783 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1 Nov 5 14:58:37.346791 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 5 14:58:37.346798 kernel: cpuidle: using governor menu Nov 5 14:58:37.346806 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Nov 5 14:58:37.346815 kernel: ASID allocator initialised with 32768 entries Nov 5 14:58:37.346822 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 5 14:58:37.346830 kernel: Serial: AMBA PL011 UART driver Nov 5 14:58:37.346837 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 5 14:58:37.346845 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 5 14:58:37.346852 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 5 14:58:37.346860 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 5 14:58:37.346867 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 5 14:58:37.346876 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 5 14:58:37.346884 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 5 14:58:37.346891 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 5 14:58:37.346898 kernel: ACPI: Added _OSI(Module Device) Nov 5 14:58:37.346906 kernel: ACPI: Added _OSI(Processor Device) Nov 5 14:58:37.346913 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 5 14:58:37.346921 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 5 14:58:37.346930 kernel: ACPI: Interpreter enabled Nov 5 14:58:37.346937 kernel: ACPI: Using GIC for interrupt routing Nov 5 14:58:37.346945 kernel: ACPI: MCFG table detected, 1 entries Nov 5 14:58:37.346952 kernel: ACPI: CPU0 has been hot-added Nov 5 14:58:37.346960 kernel: ACPI: CPU1 has been hot-added Nov 5 14:58:37.346967 kernel: ACPI: CPU2 has been hot-added Nov 5 14:58:37.346975 kernel: ACPI: CPU3 has been hot-added Nov 5 14:58:37.346983 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 5 14:58:37.346991 kernel: printk: legacy console [ttyAMA0] enabled Nov 5 14:58:37.346999 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 5 14:58:37.347161 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig 
ASPM ClockPM Segments MSI HPX-Type3] Nov 5 14:58:37.347281 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 5 14:58:37.347367 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 5 14:58:37.347453 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 5 14:58:37.347533 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 5 14:58:37.347543 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Nov 5 14:58:37.347551 kernel: PCI host bridge to bus 0000:00 Nov 5 14:58:37.347640 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 5 14:58:37.347717 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 5 14:58:37.347792 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 5 14:58:37.347867 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 5 14:58:37.347964 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Nov 5 14:58:37.348055 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 5 14:58:37.348141 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Nov 5 14:58:37.348244 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Nov 5 14:58:37.348344 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Nov 5 14:58:37.348442 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Nov 5 14:58:37.348533 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Nov 5 14:58:37.348618 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Nov 5 14:58:37.348697 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 5 14:58:37.348792 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 5 14:58:37.348872 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] 
Nov 5 14:58:37.348881 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 5 14:58:37.348889 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 5 14:58:37.348897 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 5 14:58:37.348905 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 5 14:58:37.348912 kernel: iommu: Default domain type: Translated Nov 5 14:58:37.348921 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 5 14:58:37.348929 kernel: efivars: Registered efivars operations Nov 5 14:58:37.348937 kernel: vgaarb: loaded Nov 5 14:58:37.348944 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 5 14:58:37.348952 kernel: VFS: Disk quotas dquot_6.6.0 Nov 5 14:58:37.348959 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 5 14:58:37.348967 kernel: pnp: PnP ACPI init Nov 5 14:58:37.349061 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 5 14:58:37.349072 kernel: pnp: PnP ACPI: found 1 devices Nov 5 14:58:37.349080 kernel: NET: Registered PF_INET protocol family Nov 5 14:58:37.349087 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 5 14:58:37.349095 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 5 14:58:37.349103 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 5 14:58:37.349111 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 5 14:58:37.349120 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 5 14:58:37.349128 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 5 14:58:37.349135 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 5 14:58:37.349143 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 5 14:58:37.349151 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 5 
14:58:37.349158 kernel: PCI: CLS 0 bytes, default 64 Nov 5 14:58:37.349166 kernel: kvm [1]: HYP mode not available Nov 5 14:58:37.349174 kernel: Initialise system trusted keyrings Nov 5 14:58:37.349182 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 5 14:58:37.349190 kernel: Key type asymmetric registered Nov 5 14:58:37.349197 kernel: Asymmetric key parser 'x509' registered Nov 5 14:58:37.349224 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 5 14:58:37.349232 kernel: io scheduler mq-deadline registered Nov 5 14:58:37.349240 kernel: io scheduler kyber registered Nov 5 14:58:37.349250 kernel: io scheduler bfq registered Nov 5 14:58:37.349258 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 5 14:58:37.349265 kernel: ACPI: button: Power Button [PWRB] Nov 5 14:58:37.349273 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 5 14:58:37.349361 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Nov 5 14:58:37.349371 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 5 14:58:37.349379 kernel: thunder_xcv, ver 1.0 Nov 5 14:58:37.349389 kernel: thunder_bgx, ver 1.0 Nov 5 14:58:37.349397 kernel: nicpf, ver 1.0 Nov 5 14:58:37.349404 kernel: nicvf, ver 1.0 Nov 5 14:58:37.349495 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 5 14:58:37.349571 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-05T14:58:36 UTC (1762354716) Nov 5 14:58:37.349582 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 5 14:58:37.349590 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Nov 5 14:58:37.349600 kernel: watchdog: NMI not fully supported Nov 5 14:58:37.349607 kernel: watchdog: Hard watchdog permanently disabled Nov 5 14:58:37.349615 kernel: NET: Registered PF_INET6 protocol family Nov 5 14:58:37.349622 kernel: Segment Routing with IPv6 Nov 5 14:58:37.349630 kernel: In-situ OAM (IOAM) with IPv6 Nov 5 
14:58:37.349637 kernel: NET: Registered PF_PACKET protocol family Nov 5 14:58:37.349645 kernel: Key type dns_resolver registered Nov 5 14:58:37.349654 kernel: registered taskstats version 1 Nov 5 14:58:37.349661 kernel: Loading compiled-in X.509 certificates Nov 5 14:58:37.349669 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 4b3babb46eb583bd8b0310732885d24e60ea58c5' Nov 5 14:58:37.349677 kernel: Demotion targets for Node 0: null Nov 5 14:58:37.349684 kernel: Key type .fscrypt registered Nov 5 14:58:37.349692 kernel: Key type fscrypt-provisioning registered Nov 5 14:58:37.349700 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 5 14:58:37.349709 kernel: ima: Allocated hash algorithm: sha1 Nov 5 14:58:37.349716 kernel: ima: No architecture policies found Nov 5 14:58:37.349724 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 5 14:58:37.349731 kernel: clk: Disabling unused clocks Nov 5 14:58:37.349739 kernel: PM: genpd: Disabling unused power domains Nov 5 14:58:37.349746 kernel: Freeing unused kernel memory: 12992K Nov 5 14:58:37.349754 kernel: Run /init as init process Nov 5 14:58:37.349763 kernel: with arguments: Nov 5 14:58:37.349771 kernel: /init Nov 5 14:58:37.349778 kernel: with environment: Nov 5 14:58:37.349785 kernel: HOME=/ Nov 5 14:58:37.349793 kernel: TERM=linux Nov 5 14:58:37.349886 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Nov 5 14:58:37.349964 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 5 14:58:37.349977 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 5 14:58:37.349985 kernel: GPT:16515071 != 27000831 Nov 5 14:58:37.350000 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 5 14:58:37.350008 kernel: GPT:16515071 != 27000831 Nov 5 14:58:37.350015 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 5 14:58:37.350023 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 5 14:58:37.350031 kernel: SCSI subsystem initialized Nov 5 14:58:37.350039 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 5 14:58:37.350047 kernel: device-mapper: uevent: version 1.0.3 Nov 5 14:58:37.350055 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 5 14:58:37.350063 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 5 14:58:37.350070 kernel: raid6: neonx8 gen() 15785 MB/s Nov 5 14:58:37.350078 kernel: raid6: neonx4 gen() 15827 MB/s Nov 5 14:58:37.350087 kernel: raid6: neonx2 gen() 13240 MB/s Nov 5 14:58:37.350095 kernel: raid6: neonx1 gen() 10491 MB/s Nov 5 14:58:37.350102 kernel: raid6: int64x8 gen() 6900 MB/s Nov 5 14:58:37.350110 kernel: raid6: int64x4 gen() 7353 MB/s Nov 5 14:58:37.350118 kernel: raid6: int64x2 gen() 6109 MB/s Nov 5 14:58:37.350126 kernel: raid6: int64x1 gen() 5044 MB/s Nov 5 14:58:37.350133 kernel: raid6: using algorithm neonx4 gen() 15827 MB/s Nov 5 14:58:37.350141 kernel: raid6: .... 
xor() 12352 MB/s, rmw enabled Nov 5 14:58:37.350150 kernel: raid6: using neon recovery algorithm Nov 5 14:58:37.350158 kernel: xor: measuring software checksum speed Nov 5 14:58:37.350166 kernel: 8regs : 21579 MB/sec Nov 5 14:58:37.350174 kernel: 32regs : 21681 MB/sec Nov 5 14:58:37.350182 kernel: arm64_neon : 28128 MB/sec Nov 5 14:58:37.350189 kernel: xor: using function: arm64_neon (28128 MB/sec) Nov 5 14:58:37.350197 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 5 14:58:37.350229 kernel: BTRFS: device fsid d8f84a83-fd8b-4c0e-831a-0d7c5ff234be devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (206) Nov 5 14:58:37.350237 kernel: BTRFS info (device dm-0): first mount of filesystem d8f84a83-fd8b-4c0e-831a-0d7c5ff234be Nov 5 14:58:37.350245 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 5 14:58:37.350253 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 5 14:58:37.350261 kernel: BTRFS info (device dm-0): enabling free space tree Nov 5 14:58:37.350268 kernel: loop: module loaded Nov 5 14:58:37.350276 kernel: loop0: detected capacity change from 0 to 91464 Nov 5 14:58:37.350285 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 14:58:37.350294 systemd[1]: Successfully made /usr/ read-only. Nov 5 14:58:37.350305 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 14:58:37.350313 systemd[1]: Detected virtualization kvm. Nov 5 14:58:37.350321 systemd[1]: Detected architecture arm64. Nov 5 14:58:37.350329 systemd[1]: Running in initrd. Nov 5 14:58:37.350339 systemd[1]: No hostname configured, using default hostname. Nov 5 14:58:37.350348 systemd[1]: Hostname set to . 
Nov 5 14:58:37.350356 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 14:58:37.350364 systemd[1]: Queued start job for default target initrd.target. Nov 5 14:58:37.350372 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 14:58:37.350381 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 14:58:37.350391 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 14:58:37.350400 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 5 14:58:37.350408 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 14:58:37.350417 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 14:58:37.350426 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 14:58:37.350435 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 14:58:37.350444 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 14:58:37.350453 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 14:58:37.350461 systemd[1]: Reached target paths.target - Path Units. Nov 5 14:58:37.350469 systemd[1]: Reached target slices.target - Slice Units. Nov 5 14:58:37.350478 systemd[1]: Reached target swap.target - Swaps. Nov 5 14:58:37.350486 systemd[1]: Reached target timers.target - Timer Units. Nov 5 14:58:37.350494 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 14:58:37.350504 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 14:58:37.350512 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 14:58:37.350521 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Nov 5 14:58:37.350536 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 14:58:37.350545 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 14:58:37.350556 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 14:58:37.350564 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 14:58:37.350573 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 5 14:58:37.350581 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 14:58:37.350590 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 14:58:37.350598 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 14:58:37.350609 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 14:58:37.350617 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 14:58:37.350626 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 14:58:37.350634 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 14:58:37.350643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 14:58:37.350653 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 14:58:37.350662 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 14:58:37.350670 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 14:58:37.350700 systemd-journald[346]: Collecting audit messages is disabled. Nov 5 14:58:37.350722 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 14:58:37.350731 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Nov 5 14:58:37.350740 systemd-journald[346]: Journal started Nov 5 14:58:37.350759 systemd-journald[346]: Runtime Journal (/run/log/journal/53bae84090df4c41a90ef2375cdeb1f0) is 6M, max 48.5M, 42.4M free. Nov 5 14:58:37.353242 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 14:58:37.353708 systemd-modules-load[347]: Inserted module 'br_netfilter' Nov 5 14:58:37.354627 kernel: Bridge firewalling registered Nov 5 14:58:37.355293 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 14:58:37.358576 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 14:58:37.361121 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 14:58:37.363316 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 14:58:37.366565 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 14:58:37.369709 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 14:58:37.371268 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 14:58:37.377521 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 14:58:37.378715 systemd-tmpfiles[364]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 14:58:37.380390 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 14:58:37.381977 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 14:58:37.385575 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 14:58:37.395535 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 5 14:58:37.398792 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 5 14:58:37.418434 systemd-resolved[374]: Positive Trust Anchors:
Nov 5 14:58:37.418453 systemd-resolved[374]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 14:58:37.418457 systemd-resolved[374]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 14:58:37.418488 systemd-resolved[374]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 14:58:37.428745 dracut-cmdline[389]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=15758474ef4cace68fb389c1b75e821ab8f30d9b752a28429e0459793723ea7b
Nov 5 14:58:37.440898 systemd-resolved[374]: Defaulting to hostname 'linux'.
Nov 5 14:58:37.441855 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 14:58:37.443612 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 14:58:37.496240 kernel: Loading iSCSI transport class v2.0-870.
Nov 5 14:58:37.504234 kernel: iscsi: registered transport (tcp)
Nov 5 14:58:37.517229 kernel: iscsi: registered transport (qla4xxx)
Nov 5 14:58:37.517251 kernel: QLogic iSCSI HBA Driver
Nov 5 14:58:37.538661 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 14:58:37.557335 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 14:58:37.559722 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 14:58:37.605272 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 5 14:58:37.607705 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 5 14:58:37.609182 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 5 14:58:37.640054 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 14:58:37.642770 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 14:58:37.669526 systemd-udevd[627]: Using default interface naming scheme 'v257'.
Nov 5 14:58:37.677250 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 14:58:37.680565 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 5 14:58:37.703853 dracut-pre-trigger[693]: rd.md=0: removing MD RAID activation
Nov 5 14:58:37.707090 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 14:58:37.710704 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 14:58:37.729245 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 14:58:37.731360 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 14:58:37.753500 systemd-networkd[740]: lo: Link UP
Nov 5 14:58:37.753509 systemd-networkd[740]: lo: Gained carrier
Nov 5 14:58:37.753965 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 14:58:37.755184 systemd[1]: Reached target network.target - Network.
Nov 5 14:58:37.788064 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 14:58:37.790531 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 5 14:58:37.833155 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 5 14:58:37.844851 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 5 14:58:37.859116 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 14:58:37.865459 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 5 14:58:37.867154 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 5 14:58:37.872908 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 14:58:37.873028 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 14:58:37.876052 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 14:58:37.880353 systemd-networkd[740]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 14:58:37.880365 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 14:58:37.881654 systemd-networkd[740]: eth0: Link UP
Nov 5 14:58:37.881808 systemd-networkd[740]: eth0: Gained carrier
Nov 5 14:58:37.881818 systemd-networkd[740]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 14:58:37.881874 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 14:58:37.892456 disk-uuid[804]: Primary Header is updated.
Nov 5 14:58:37.892456 disk-uuid[804]: Secondary Entries is updated.
Nov 5 14:58:37.892456 disk-uuid[804]: Secondary Header is updated.
Nov 5 14:58:37.895268 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 5 14:58:37.897294 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 5 14:58:37.897790 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 14:58:37.899825 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 14:58:37.902062 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 14:58:37.909229 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 5 14:58:37.917928 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 14:58:37.941653 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 14:58:38.918440 disk-uuid[809]: Warning: The kernel is still using the old partition table.
Nov 5 14:58:38.918440 disk-uuid[809]: The new table will be used at the next reboot or after you
Nov 5 14:58:38.918440 disk-uuid[809]: run partprobe(8) or kpartx(8)
Nov 5 14:58:38.918440 disk-uuid[809]: The operation has completed successfully.
Nov 5 14:58:38.928266 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 5 14:58:38.929148 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
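The eth0 messages above show systemd-networkd matching the lowest-priority catch-all file /usr/lib/systemd/network/zz-default.network and then acquiring 10.0.0.21/16 over DHCPv4. A catch-all .network unit of that kind looks roughly like the following; this is a sketch of the pattern only, not the verbatim file shipped in the image:

```
# Sketch of a catch-all DHCP fallback .network unit (assumed contents,
# in the spirit of zz-default.network): match any interface, use DHCP.
[Match]
Name=*

[Network]
DHCP=yes
```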
Nov 5 14:58:38.931140 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 5 14:58:38.960088 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (833)
Nov 5 14:58:38.960139 kernel: BTRFS info (device vda6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804
Nov 5 14:58:38.961420 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 14:58:38.963687 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 14:58:38.963709 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 14:58:38.969221 kernel: BTRFS info (device vda6): last unmount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804
Nov 5 14:58:38.969689 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 5 14:58:38.972797 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 5 14:58:38.995338 systemd-networkd[740]: eth0: Gained IPv6LL
Nov 5 14:58:39.090278 ignition[852]: Ignition 2.22.0
Nov 5 14:58:39.090291 ignition[852]: Stage: fetch-offline
Nov 5 14:58:39.090342 ignition[852]: no configs at "/usr/lib/ignition/base.d"
Nov 5 14:58:39.090351 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 14:58:39.090431 ignition[852]: parsed url from cmdline: ""
Nov 5 14:58:39.090434 ignition[852]: no config URL provided
Nov 5 14:58:39.090438 ignition[852]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 14:58:39.090446 ignition[852]: no config at "/usr/lib/ignition/user.ign"
Nov 5 14:58:39.090488 ignition[852]: op(1): [started] loading QEMU firmware config module
Nov 5 14:58:39.090493 ignition[852]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 5 14:58:39.095753 ignition[852]: op(1): [finished] loading QEMU firmware config module
Nov 5 14:58:39.139676 ignition[852]: parsing config with SHA512: 3633a1fcb4e73ef70ddd2dfb4f742e28589423a7f30020aacffe55429522dbc0e62d34bcd75ae5d54b8e368679578191d3caff27fd0c613d3132b2096ea6c105
Nov 5 14:58:39.145651 unknown[852]: fetched base config from "system"
Nov 5 14:58:39.145664 unknown[852]: fetched user config from "qemu"
Nov 5 14:58:39.146054 ignition[852]: fetch-offline: fetch-offline passed
Nov 5 14:58:39.147974 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 14:58:39.146108 ignition[852]: Ignition finished successfully
Nov 5 14:58:39.149084 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 5 14:58:39.149917 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 5 14:58:39.181357 ignition[867]: Ignition 2.22.0
Nov 5 14:58:39.181375 ignition[867]: Stage: kargs
Nov 5 14:58:39.181519 ignition[867]: no configs at "/usr/lib/ignition/base.d"
Nov 5 14:58:39.181526 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 14:58:39.184247 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 5 14:58:39.182306 ignition[867]: kargs: kargs passed
Nov 5 14:58:39.182350 ignition[867]: Ignition finished successfully
Nov 5 14:58:39.186977 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 5 14:58:39.216103 ignition[875]: Ignition 2.22.0
Nov 5 14:58:39.216122 ignition[875]: Stage: disks
Nov 5 14:58:39.216300 ignition[875]: no configs at "/usr/lib/ignition/base.d"
Nov 5 14:58:39.216308 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 14:58:39.218983 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 5 14:58:39.217026 ignition[875]: disks: disks passed
Nov 5 14:58:39.220777 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 5 14:58:39.217065 ignition[875]: Ignition finished successfully
Nov 5 14:58:39.222166 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 5 14:58:39.223531 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 14:58:39.225024 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 14:58:39.226653 systemd[1]: Reached target basic.target - Basic System.
Nov 5 14:58:39.229299 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 5 14:58:39.262210 systemd-fsck[885]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 5 14:58:39.266768 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 5 14:58:39.269352 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 5 14:58:39.348233 kernel: EXT4-fs (vda9): mounted filesystem 67ab558f-e1dc-496b-b18a-e9709809a3c4 r/w with ordered data mode. Quota mode: none.
Nov 5 14:58:39.348676 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 5 14:58:39.349754 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 5 14:58:39.351826 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 14:58:39.353253 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 5 14:58:39.354036 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 5 14:58:39.354066 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 5 14:58:39.354088 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 14:58:39.375280 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 5 14:58:39.378755 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 5 14:58:39.383224 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (893)
Nov 5 14:58:39.383261 kernel: BTRFS info (device vda6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804
Nov 5 14:58:39.385392 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 14:58:39.388616 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 14:58:39.388664 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 14:58:39.389670 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 14:58:39.418975 initrd-setup-root[917]: cut: /sysroot/etc/passwd: No such file or directory
Nov 5 14:58:39.422454 initrd-setup-root[924]: cut: /sysroot/etc/group: No such file or directory
Nov 5 14:58:39.426757 initrd-setup-root[931]: cut: /sysroot/etc/shadow: No such file or directory
Nov 5 14:58:39.429765 initrd-setup-root[938]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 5 14:58:39.502828 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 5 14:58:39.505003 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 5 14:58:39.507293 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 5 14:58:39.522784 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 5 14:58:39.525231 kernel: BTRFS info (device vda6): last unmount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804
Nov 5 14:58:39.541342 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 5 14:58:39.557784 ignition[1008]: INFO : Ignition 2.22.0
Nov 5 14:58:39.557784 ignition[1008]: INFO : Stage: mount
Nov 5 14:58:39.559608 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 14:58:39.559608 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 14:58:39.559608 ignition[1008]: INFO : mount: mount passed
Nov 5 14:58:39.559608 ignition[1008]: INFO : Ignition finished successfully
Nov 5 14:58:39.560814 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 5 14:58:39.566081 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 5 14:58:40.350314 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 14:58:40.369658 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1019)
Nov 5 14:58:40.369707 kernel: BTRFS info (device vda6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804
Nov 5 14:58:40.369719 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 14:58:40.373323 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 14:58:40.373354 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 14:58:40.375184 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 14:58:40.403952 ignition[1036]: INFO : Ignition 2.22.0
Nov 5 14:58:40.403952 ignition[1036]: INFO : Stage: files
Nov 5 14:58:40.405366 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 14:58:40.405366 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 14:58:40.405366 ignition[1036]: DEBUG : files: compiled without relabeling support, skipping
Nov 5 14:58:40.408160 ignition[1036]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 5 14:58:40.408160 ignition[1036]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 5 14:58:40.410348 ignition[1036]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 5 14:58:40.410348 ignition[1036]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 5 14:58:40.410348 ignition[1036]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 5 14:58:40.410277 unknown[1036]: wrote ssh authorized keys file for user: core
Nov 5 14:58:40.414495 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 5 14:58:40.414495 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Nov 5 14:58:40.445487 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 5 14:58:40.634986 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 5 14:58:40.634986 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 5 14:58:40.638858 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 5 14:58:40.638858 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 14:58:40.638858 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 14:58:40.638858 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 14:58:40.638858 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 14:58:40.638858 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 14:58:40.638858 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 14:58:40.638858 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 14:58:40.638858 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 14:58:40.638858 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 5 14:58:40.655289 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 5 14:58:40.655289 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 5 14:58:40.655289 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Nov 5 14:58:41.150436 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 5 14:58:41.775172 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 5 14:58:41.775172 ignition[1036]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 5 14:58:41.778196 ignition[1036]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 14:58:41.857786 ignition[1036]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 14:58:41.857786 ignition[1036]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 5 14:58:41.857786 ignition[1036]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 5 14:58:41.861776 ignition[1036]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 14:58:41.861776 ignition[1036]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 14:58:41.861776 ignition[1036]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 5 14:58:41.861776 ignition[1036]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 5 14:58:41.874341 ignition[1036]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 14:58:41.877860 ignition[1036]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 14:58:41.877860 ignition[1036]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 5 14:58:41.877860 ignition[1036]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 5 14:58:41.877860 ignition[1036]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 5 14:58:41.877860 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 14:58:41.877860 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 14:58:41.877860 ignition[1036]: INFO : files: files passed
Nov 5 14:58:41.877860 ignition[1036]: INFO : Ignition finished successfully
Nov 5 14:58:41.881300 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 5 14:58:41.884394 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 5 14:58:41.887043 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 5 14:58:41.896179 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 5 14:58:41.896292 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 5 14:58:41.901930 initrd-setup-root-after-ignition[1067]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 5 14:58:41.904923 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 14:58:41.904923 initrd-setup-root-after-ignition[1070]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 14:58:41.907400 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 14:58:41.908744 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 14:58:41.910080 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
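The files stage above (user "core" with ssh keys, the Helm tarball, the YAML manifests, the kubernetes sysext link, and the unit presets) is driven by the user config that the fetch-offline stage pulled from QEMU's fw_cfg. A Butane-style reconstruction of a config that would produce similar operations might look like this; the paths and URLs are taken from the log, but the config itself is an illustrative guess, not the actual user config:

```yaml
# Hedged reconstruction of a provisioning config matching the logged
# files-stage operations (abridged; the ssh key is a placeholder).
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...  # placeholder, not from the log
storage:
  files:
    - path: /opt/helm-v3.17.0-linux-arm64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw
      contents:
        source: https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true
    - name: coreos-metadata.service
      enabled: false
```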
Nov 5 14:58:41.912289 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 14:58:41.978346 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 14:58:41.979212 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 14:58:41.980361 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 14:58:41.982297 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 14:58:41.984367 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 14:58:41.985350 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 14:58:42.017127 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 14:58:42.019733 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 14:58:42.040895 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 14:58:42.041104 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 14:58:42.042789 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 14:58:42.044568 systemd[1]: Stopped target timers.target - Timer Units.
Nov 5 14:58:42.046007 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 5 14:58:42.046137 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 14:58:42.048233 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 5 14:58:42.049982 systemd[1]: Stopped target basic.target - Basic System.
Nov 5 14:58:42.051490 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 5 14:58:42.052838 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 14:58:42.054443 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 5 14:58:42.055975 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 14:58:42.057596 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 5 14:58:42.059104 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 14:58:42.060671 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 5 14:58:42.062161 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 5 14:58:42.063614 systemd[1]: Stopped target swap.target - Swaps.
Nov 5 14:58:42.064912 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 5 14:58:42.065040 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 14:58:42.067039 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 5 14:58:42.068788 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 14:58:42.070311 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 5 14:58:42.071901 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 14:58:42.072977 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 5 14:58:42.073101 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 5 14:58:42.076576 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 5 14:58:42.076696 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 14:58:42.078689 systemd[1]: Stopped target paths.target - Path Units.
Nov 5 14:58:42.080298 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 5 14:58:42.084295 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 14:58:42.085335 systemd[1]: Stopped target slices.target - Slice Units.
Nov 5 14:58:42.087716 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 5 14:58:42.089122 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 5 14:58:42.089225 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 14:58:42.090821 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 5 14:58:42.090889 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 14:58:42.092302 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 5 14:58:42.092420 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 14:58:42.094140 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 5 14:58:42.094268 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 5 14:58:42.096523 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 5 14:58:42.097195 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 5 14:58:42.097344 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 14:58:42.100116 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 5 14:58:42.101752 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 5 14:58:42.101904 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 14:58:42.103537 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 5 14:58:42.103642 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 14:58:42.105196 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 5 14:58:42.105326 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 14:58:42.111133 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 5 14:58:42.116059 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 5 14:58:42.131078 ignition[1094]: INFO : Ignition 2.22.0
Nov 5 14:58:42.131078 ignition[1094]: INFO : Stage: umount
Nov 5 14:58:42.133287 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 14:58:42.133287 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 14:58:42.133287 ignition[1094]: INFO : umount: umount passed
Nov 5 14:58:42.133287 ignition[1094]: INFO : Ignition finished successfully
Nov 5 14:58:42.133402 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 5 14:58:42.133507 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 5 14:58:42.135475 systemd[1]: Stopped target network.target - Network.
Nov 5 14:58:42.136901 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 5 14:58:42.136953 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 5 14:58:42.138651 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 5 14:58:42.138700 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 5 14:58:42.140118 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 5 14:58:42.140163 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 5 14:58:42.141732 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 5 14:58:42.141772 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 5 14:58:42.143448 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 5 14:58:42.144955 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 5 14:58:42.147435 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 5 14:58:42.155684 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 5 14:58:42.155827 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 5 14:58:42.160171 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 5 14:58:42.161702 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 5 14:58:42.167056 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 5 14:58:42.168159 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 5 14:58:42.168212 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 14:58:42.172352 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 5 14:58:42.174026 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 5 14:58:42.174096 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 14:58:42.176389 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 5 14:58:42.176441 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 5 14:58:42.178244 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 5 14:58:42.178300 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 5 14:58:42.180298 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 14:58:42.184341 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 5 14:58:42.184425 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 5 14:58:42.186430 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 5 14:58:42.186515 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 5 14:58:42.196488 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 5 14:58:42.201431 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 14:58:42.202897 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 5 14:58:42.202935 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 5 14:58:42.204662 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 5 14:58:42.204690 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 14:58:42.206077 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 5 14:58:42.206124 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 14:58:42.208502 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 5 14:58:42.208549 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 5 14:58:42.210808 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 5 14:58:42.210854 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 14:58:42.213758 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 5 14:58:42.214633 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 5 14:58:42.214686 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 14:58:42.216170 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 5 14:58:42.216221 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 14:58:42.218116 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 14:58:42.218153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 14:58:42.220361 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 5 14:58:42.226363 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 5 14:58:42.231782 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 5 14:58:42.231868 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 5 14:58:42.233634 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 5 14:58:42.235849 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 5 14:58:42.265194 systemd[1]: Switching root.
Nov 5 14:58:42.302515 systemd-journald[346]: Journal stopped
Nov 5 14:58:43.063294 systemd-journald[346]: Received SIGTERM from PID 1 (systemd).
Nov 5 14:58:43.063355 kernel: SELinux: policy capability network_peer_controls=1
Nov 5 14:58:43.063369 kernel: SELinux: policy capability open_perms=1
Nov 5 14:58:43.063380 kernel: SELinux: policy capability extended_socket_class=1
Nov 5 14:58:43.063396 kernel: SELinux: policy capability always_check_network=0
Nov 5 14:58:43.063411 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 5 14:58:43.063421 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 5 14:58:43.063435 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 5 14:58:43.063445 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 5 14:58:43.063456 kernel: SELinux: policy capability userspace_initial_context=0
Nov 5 14:58:43.063466 kernel: audit: type=1403 audit(1762354722.490:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 5 14:58:43.063478 systemd[1]: Successfully loaded SELinux policy in 53.819ms.
Nov 5 14:58:43.063495 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.349ms.
Nov 5 14:58:43.063507 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 14:58:43.063518 systemd[1]: Detected virtualization kvm.
Nov 5 14:58:43.063529 systemd[1]: Detected architecture arm64.
Nov 5 14:58:43.063539 systemd[1]: Detected first boot.
Nov 5 14:58:43.063549 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 14:58:43.063561 zram_generator::config[1139]: No configuration found.
Nov 5 14:58:43.063574 kernel: NET: Registered PF_VSOCK protocol family
Nov 5 14:58:43.063584 systemd[1]: Populated /etc with preset unit settings.
Nov 5 14:58:43.063595 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 5 14:58:43.063606 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 5 14:58:43.063616 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 5 14:58:43.063704 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 5 14:58:43.063726 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 5 14:58:43.063737 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 5 14:58:43.063752 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 5 14:58:43.063763 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 5 14:58:43.063774 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 5 14:58:43.063784 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 5 14:58:43.063796 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 5 14:58:43.063807 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 14:58:43.063818 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 14:58:43.063922 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 5 14:58:43.063934 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 5 14:58:43.063946 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 5 14:58:43.063957 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 14:58:43.063971 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Nov 5 14:58:43.063983 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 14:58:43.063993 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 14:58:43.064004 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 5 14:58:43.064015 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 5 14:58:43.064026 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 5 14:58:43.064038 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 5 14:58:43.064049 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 14:58:43.064059 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 14:58:43.064070 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 14:58:43.064080 systemd[1]: Reached target swap.target - Swaps.
Nov 5 14:58:43.064091 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 5 14:58:43.064101 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 5 14:58:43.064112 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 5 14:58:43.064123 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 14:58:43.064134 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 14:58:43.064145 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 14:58:43.064156 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 5 14:58:43.064166 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 5 14:58:43.064177 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 5 14:58:43.064188 systemd[1]: Mounting media.mount - External Media Directory...
Nov 5 14:58:43.064199 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 5 14:58:43.064270 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 5 14:58:43.064282 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 5 14:58:43.064304 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 5 14:58:43.064315 systemd[1]: Reached target machines.target - Containers.
Nov 5 14:58:43.064326 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 5 14:58:43.064337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 14:58:43.064351 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 14:58:43.064361 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 5 14:58:43.064372 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 14:58:43.064382 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 14:58:43.064393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 14:58:43.064403 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 5 14:58:43.064413 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 14:58:43.064426 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 5 14:58:43.064437 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 5 14:58:43.064447 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 5 14:58:43.064458 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 5 14:58:43.064469 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 5 14:58:43.064479 kernel: fuse: init (API version 7.41)
Nov 5 14:58:43.064492 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 14:58:43.064503 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 14:58:43.064514 kernel: ACPI: bus type drm_connector registered
Nov 5 14:58:43.064524 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 14:58:43.064535 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 14:58:43.064545 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 5 14:58:43.064556 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 5 14:58:43.064568 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 14:58:43.064579 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 5 14:58:43.064610 systemd-journald[1211]: Collecting audit messages is disabled.
Nov 5 14:58:43.064635 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 5 14:58:43.064646 systemd[1]: Mounted media.mount - External Media Directory.
Nov 5 14:58:43.064657 systemd-journald[1211]: Journal started
Nov 5 14:58:43.064678 systemd-journald[1211]: Runtime Journal (/run/log/journal/53bae84090df4c41a90ef2375cdeb1f0) is 6M, max 48.5M, 42.4M free.
Nov 5 14:58:42.860587 systemd[1]: Queued start job for default target multi-user.target.
Nov 5 14:58:42.883103 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 5 14:58:42.883555 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 5 14:58:43.067257 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 14:58:43.068127 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 5 14:58:43.069134 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 5 14:58:43.070169 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 5 14:58:43.072251 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 5 14:58:43.073383 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 14:58:43.074559 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 5 14:58:43.074711 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 5 14:58:43.075916 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 14:58:43.076085 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 14:58:43.077308 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 14:58:43.077454 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 14:58:43.078473 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 14:58:43.078621 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 14:58:43.079780 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 5 14:58:43.079954 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 5 14:58:43.081328 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 14:58:43.081480 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 14:58:43.082585 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 14:58:43.083962 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 14:58:43.086072 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 5 14:58:43.087499 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 5 14:58:43.099552 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 14:58:43.100721 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 5 14:58:43.104682 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 5 14:58:43.106592 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 5 14:58:43.108083 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 5 14:58:43.108197 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 14:58:43.110157 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 5 14:58:43.111509 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 14:58:43.113498 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 5 14:58:43.115549 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 5 14:58:43.116490 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 14:58:43.117423 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 5 14:58:43.118397 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 14:58:43.120492 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 14:58:43.124441 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 5 14:58:43.127391 systemd-journald[1211]: Time spent on flushing to /var/log/journal/53bae84090df4c41a90ef2375cdeb1f0 is 16.950ms for 869 entries.
Nov 5 14:58:43.127391 systemd-journald[1211]: System Journal (/var/log/journal/53bae84090df4c41a90ef2375cdeb1f0) is 8M, max 163.5M, 155.5M free.
Nov 5 14:58:43.158069 systemd-journald[1211]: Received client request to flush runtime journal.
Nov 5 14:58:43.158120 kernel: loop1: detected capacity change from 0 to 119344
Nov 5 14:58:43.128393 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 5 14:58:43.130472 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 14:58:43.132633 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 5 14:58:43.134832 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 5 14:58:43.136484 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 5 14:58:43.140367 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 5 14:58:43.143806 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 5 14:58:43.147342 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 14:58:43.159514 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 5 14:58:43.162230 kernel: loop2: detected capacity change from 0 to 207008
Nov 5 14:58:43.168396 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 5 14:58:43.180565 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 5 14:58:43.183057 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 14:58:43.184794 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 14:58:43.190267 kernel: loop3: detected capacity change from 0 to 100624
Nov 5 14:58:43.193403 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 5 14:58:43.206325 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Nov 5 14:58:43.206339 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Nov 5 14:58:43.209591 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 14:58:43.216230 kernel: loop4: detected capacity change from 0 to 119344
Nov 5 14:58:43.221234 kernel: loop5: detected capacity change from 0 to 207008
Nov 5 14:58:43.225084 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 5 14:58:43.228295 kernel: loop6: detected capacity change from 0 to 100624
Nov 5 14:58:43.233511 (sd-merge)[1278]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Nov 5 14:58:43.236136 (sd-merge)[1278]: Merged extensions into '/usr'.
Nov 5 14:58:43.239650 systemd[1]: Reload requested from client PID 1255 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 5 14:58:43.239659 systemd[1]: Reloading...
Nov 5 14:58:43.293220 zram_generator::config[1311]: No configuration found.
Nov 5 14:58:43.292065 systemd-resolved[1272]: Positive Trust Anchors:
Nov 5 14:58:43.292084 systemd-resolved[1272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 14:58:43.292087 systemd-resolved[1272]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 14:58:43.292117 systemd-resolved[1272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 14:58:43.298617 systemd-resolved[1272]: Defaulting to hostname 'linux'.
Nov 5 14:58:43.420549 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 5 14:58:43.420855 systemd[1]: Reloading finished in 180 ms.
Nov 5 14:58:43.444767 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 14:58:43.446016 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 5 14:58:43.448906 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 14:58:43.463480 systemd[1]: Starting ensure-sysext.service...
Nov 5 14:58:43.465133 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 14:58:43.474155 systemd[1]: Reload requested from client PID 1344 ('systemctl') (unit ensure-sysext.service)...
Nov 5 14:58:43.474169 systemd[1]: Reloading...
Nov 5 14:58:43.478749 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 5 14:58:43.479025 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 5 14:58:43.479355 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 5 14:58:43.479620 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 5 14:58:43.480331 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 5 14:58:43.480615 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Nov 5 14:58:43.480725 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Nov 5 14:58:43.484233 systemd-tmpfiles[1345]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 14:58:43.484241 systemd-tmpfiles[1345]: Skipping /boot
Nov 5 14:58:43.490259 systemd-tmpfiles[1345]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 14:58:43.490365 systemd-tmpfiles[1345]: Skipping /boot
Nov 5 14:58:43.525233 zram_generator::config[1375]: No configuration found.
Nov 5 14:58:43.661178 systemd[1]: Reloading finished in 186 ms.
Nov 5 14:58:43.680762 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 5 14:58:43.701064 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 14:58:43.708465 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 14:58:43.710235 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 5 14:58:43.712185 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 5 14:58:43.716652 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 5 14:58:43.719501 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 14:58:43.723339 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 5 14:58:43.727350 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 14:58:43.737181 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 14:58:43.741505 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 14:58:43.744397 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 14:58:43.745356 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 14:58:43.745484 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 14:58:43.752107 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 14:58:43.752372 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 14:58:43.752572 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 14:58:43.757912 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 5 14:58:43.760085 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 5 14:58:43.763828 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 5 14:58:43.765926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 14:58:43.766138 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 14:58:43.767900 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 14:58:43.768042 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 14:58:43.770035 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 14:58:43.770192 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 14:58:43.778103 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 14:58:43.779615 augenrules[1445]: No rules
Nov 5 14:58:43.780438 systemd-udevd[1416]: Using default interface naming scheme 'v257'.
Nov 5 14:58:43.780446 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 14:58:43.781574 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 14:58:43.781621 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 14:58:43.781656 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 14:58:43.781701 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 14:58:43.781736 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 5 14:58:43.782295 systemd[1]: Finished ensure-sysext.service.
Nov 5 14:58:43.783426 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 14:58:43.783617 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 14:58:43.794972 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 5 14:58:43.797289 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 14:58:43.797544 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 14:58:43.804283 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 14:58:43.808338 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 14:58:43.847117 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 5 14:58:43.848782 systemd[1]: Reached target time-set.target - System Time Set.
Nov 5 14:58:43.880535 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Nov 5 14:58:43.900063 systemd-networkd[1463]: lo: Link UP
Nov 5 14:58:43.900074 systemd-networkd[1463]: lo: Gained carrier
Nov 5 14:58:43.901332 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 14:58:43.904280 systemd[1]: Reached target network.target - Network.
Nov 5 14:58:43.906566 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 5 14:58:43.908455 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 5 14:58:43.913310 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 14:58:43.915076 systemd-networkd[1463]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 14:58:43.915088 systemd-networkd[1463]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 14:58:43.915687 systemd-networkd[1463]: eth0: Link UP
Nov 5 14:58:43.915801 systemd-networkd[1463]: eth0: Gained carrier
Nov 5 14:58:43.915821 systemd-networkd[1463]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 14:58:43.917014 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 5 14:58:43.928270 systemd-networkd[1463]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 5 14:58:43.928768 systemd-timesyncd[1453]: Network configuration changed, trying to establish connection.
Nov 5 14:58:43.929859 systemd-timesyncd[1453]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 5 14:58:43.929999 systemd-timesyncd[1453]: Initial clock synchronization to Wed 2025-11-05 14:58:43.705066 UTC.
Nov 5 14:58:43.934632 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 5 14:58:43.936346 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 5 14:58:44.021502 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 14:58:44.042812 ldconfig[1413]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 5 14:58:44.048283 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 5 14:58:44.052071 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 5 14:58:44.067388 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 14:58:44.070048 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 5 14:58:44.071804 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 14:58:44.072818 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 5 14:58:44.074019 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 5 14:58:44.075178 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 5 14:58:44.076050 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 5 14:58:44.077341 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 5 14:58:44.078245 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 5 14:58:44.078277 systemd[1]: Reached target paths.target - Path Units.
Nov 5 14:58:44.078917 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 14:58:44.080348 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 5 14:58:44.082262 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 5 14:58:44.084722 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 5 14:58:44.085888 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 5 14:58:44.086918 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 5 14:58:44.098178 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 5 14:58:44.099549 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 5 14:58:44.101462 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 5 14:58:44.102704 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 14:58:44.103752 systemd[1]: Reached target basic.target - Basic System.
Nov 5 14:58:44.104804 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 5 14:58:44.104839 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 5 14:58:44.105853 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 5 14:58:44.107718 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 5 14:58:44.109519 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 5 14:58:44.111836 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 5 14:58:44.113728 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 5 14:58:44.114597 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 5 14:58:44.115493 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 5 14:58:44.118382 jq[1523]: false
Nov 5 14:58:44.118705 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 5 14:58:44.121327 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 5 14:58:44.123456 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 5 14:58:44.129092 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 5 14:58:44.131238 extend-filesystems[1524]: Found /dev/vda6
Nov 5 14:58:44.131384 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 5 14:58:44.131771 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 5 14:58:44.132408 systemd[1]: Starting update-engine.service - Update Engine...
Nov 5 14:58:44.134172 extend-filesystems[1524]: Found /dev/vda9
Nov 5 14:58:44.135356 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 5 14:58:44.137402 extend-filesystems[1524]: Checking size of /dev/vda9
Nov 5 14:58:44.138060 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 5 14:58:44.141897 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 5 14:58:44.146328 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 5 14:58:44.146693 systemd[1]: motdgen.service: Deactivated successfully.
Nov 5 14:58:44.146940 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 5 14:58:44.151491 jq[1540]: true
Nov 5 14:58:44.149927 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 5 14:58:44.150140 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 5 14:58:44.152399 extend-filesystems[1524]: Resized partition /dev/vda9
Nov 5 14:58:44.158686 extend-filesystems[1555]: resize2fs 1.47.3 (8-Jul-2025)
Nov 5 14:58:44.173497 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Nov 5 14:58:44.174487 (ntainerd)[1553]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 5 14:58:44.181255 update_engine[1539]: I20251105 14:58:44.179996 1539 main.cc:92] Flatcar Update Engine starting
Nov 5 14:58:44.185742 tar[1551]: linux-arm64/LICENSE
Nov 5 14:58:44.185742 tar[1551]: linux-arm64/helm
Nov 5 14:58:44.200246 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Nov 5 14:58:44.202649 jq[1554]: true
Nov 5 14:58:44.202229 dbus-daemon[1521]: [system] SELinux support is enabled
Nov 5 14:58:44.202765 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 5 14:58:44.231935 update_engine[1539]: I20251105 14:58:44.208458 1539 update_check_scheduler.cc:74] Next update check in 2m10s
Nov 5 14:58:44.218690 systemd[1]: Started update-engine.service - Update Engine.
Nov 5 14:58:44.219813 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 5 14:58:44.232225 extend-filesystems[1555]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 5 14:58:44.232225 extend-filesystems[1555]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 5 14:58:44.232225 extend-filesystems[1555]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Nov 5 14:58:44.219838 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 5 14:58:44.239054 extend-filesystems[1524]: Resized filesystem in /dev/vda9
Nov 5 14:58:44.220947 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 5 14:58:44.220989 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 5 14:58:44.226372 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 5 14:58:44.234611 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 5 14:58:44.235084 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 5 14:58:44.259741 systemd-logind[1534]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 5 14:58:44.260371 systemd-logind[1534]: New seat seat0.
Nov 5 14:58:44.261441 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 5 14:58:44.266170 bash[1595]: Updated "/home/core/.ssh/authorized_keys"
Nov 5 14:58:44.270339 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 5 14:58:44.271925 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 5 14:58:44.281693 locksmithd[1574]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 5 14:58:44.335994 containerd[1553]: time="2025-11-05T14:58:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 5 14:58:44.336705 containerd[1553]: time="2025-11-05T14:58:44.336670342Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 5 14:58:44.350909 containerd[1553]: time="2025-11-05T14:58:44.350846142Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.602µs"
Nov 5 14:58:44.350909 containerd[1553]: time="2025-11-05T14:58:44.350898625Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 5 14:58:44.350991 containerd[1553]: time="2025-11-05T14:58:44.350921367Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 5 14:58:44.351087 containerd[1553]: time="2025-11-05T14:58:44.351064121Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 5 14:58:44.351131 containerd[1553]: time="2025-11-05T14:58:44.351087758Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 5 14:58:44.351131 containerd[1553]: time="2025-11-05T14:58:44.351113339Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 14:58:44.351227 containerd[1553]: time="2025-11-05T14:58:44.351175697Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 14:58:44.351227 containerd[1553]: time="2025-11-05T14:58:44.351224720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 14:58:44.351591 containerd[1553]: time="2025-11-05T14:58:44.351554275Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 14:58:44.351591 containerd[1553]: time="2025-11-05T14:58:44.351586659Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 14:58:44.351647 containerd[1553]: time="2025-11-05T14:58:44.351604814Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 14:58:44.351647 containerd[1553]: time="2025-11-05T14:58:44.351614494Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 5 14:58:44.351724 containerd[1553]: time="2025-11-05T14:58:44.351704454Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 5 14:58:44.351947 containerd[1553]: time="2025-11-05T14:58:44.351916991Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 14:58:44.351993 containerd[1553]: time="2025-11-05T14:58:44.351969163Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 14:58:44.351993 containerd[1553]: time="2025-11-05T14:58:44.351989184Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 5 14:58:44.352032 containerd[1553]: time="2025-11-05T14:58:44.352018536Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 5 14:58:44.352400 containerd[1553]: time="2025-11-05T14:58:44.352380086Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 5 14:58:44.352565 containerd[1553]: time="2025-11-05T14:58:44.352539829Z" level=info msg="metadata content store policy set" policy=shared
Nov 5 14:58:44.355821 containerd[1553]: time="2025-11-05T14:58:44.355783790Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 5 14:58:44.355901 containerd[1553]: time="2025-11-05T14:58:44.355837089Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 5 14:58:44.355901 containerd[1553]: time="2025-11-05T14:58:44.355853767Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 5 14:58:44.355901 containerd[1553]: time="2025-11-05T14:58:44.355865469Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 5 14:58:44.355901 containerd[1553]: time="2025-11-05T14:58:44.355877987Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 5 14:58:44.355901 containerd[1553]: time="2025-11-05T14:58:44.355895520Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 5 14:58:44.355901 containerd[1553]: time="2025-11-05T14:58:44.355907261Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 5 14:58:44.355901 containerd[1553]: time="2025-11-05T14:58:44.355922656Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 5 14:58:44.355901 containerd[1553]: time="2025-11-05T14:58:44.355953524Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 5 14:58:44.355901 containerd[1553]: time="2025-11-05T14:58:44.355966820Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 5 14:58:44.355901 containerd[1553]: time="2025-11-05T14:58:44.355975644Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 5 14:58:44.355901 containerd[1553]: time="2025-11-05T14:58:44.355986646Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 5 14:58:44.355901 containerd[1553]: time="2025-11-05T14:58:44.356112956Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 5 14:58:44.355901 containerd[1553]: time="2025-11-05T14:58:44.356133521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 5 14:58:44.355901 containerd[1553]: time="2025-11-05T14:58:44.356148022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 5 14:58:44.358456 containerd[1553]: time="2025-11-05T14:58:44.356158363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 5 14:58:44.358456 containerd[1553]: time="2025-11-05T14:58:44.356169054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 5 14:58:44.358456 containerd[1553]: time="2025-11-05T14:58:44.356188337Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 5 14:58:44.358456 containerd[1553]: time="2025-11-05T14:58:44.356218155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 5 14:58:44.358456 containerd[1553]: time="2025-11-05T14:58:44.356241209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 5 14:58:44.358456 containerd[1553]: time="2025-11-05T14:58:44.356253688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 5 14:58:44.358456 containerd[1553]: time="2025-11-05T14:58:44.356267489Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 5 14:58:44.358456 containerd[1553]: time="2025-11-05T14:58:44.356277558Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 5 14:58:44.358456 containerd[1553]: time="2025-11-05T14:58:44.356457983Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 5 14:58:44.358456 containerd[1553]: time="2025-11-05T14:58:44.356472445Z" level=info msg="Start snapshots syncer"
Nov 5 14:58:44.358456 containerd[1553]: time="2025-11-05T14:58:44.356498959Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 5 14:58:44.358626 containerd[1553]: time="2025-11-05T14:58:44.356688054Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 5 14:58:44.358626 containerd[1553]: time="2025-11-05T14:58:44.356733928Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 5 14:58:44.358720 containerd[1553]: time="2025-11-05T14:58:44.356793875Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 5 14:58:44.358720 containerd[1553]: time="2025-11-05T14:58:44.356893943Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 5 14:58:44.358720 containerd[1553]: time="2025-11-05T14:58:44.356913731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 5 14:58:44.358720 containerd[1553]: time="2025-11-05T14:58:44.356935657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 5 14:58:44.358720 containerd[1553]: time="2025-11-05T14:58:44.356949109Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 5 14:58:44.358720 containerd[1553]: time="2025-11-05T14:58:44.356961277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 5 14:58:44.358720 containerd[1553]: time="2025-11-05T14:58:44.356971229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 5 14:58:44.358720 containerd[1553]: time="2025-11-05T14:58:44.356985769Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 5 14:58:44.358720 containerd[1553]: time="2025-11-05T14:58:44.357017570Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 5 14:58:44.358720 containerd[1553]: time="2025-11-05T14:58:44.357031604Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 5 14:58:44.358720 containerd[1553]: time="2025-11-05T14:58:44.357042334Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 5 14:58:44.358720 containerd[1553]: time="2025-11-05T14:58:44.357073474Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 14:58:44.358720 containerd[1553]: time="2025-11-05T14:58:44.357086653Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 14:58:44.358720 containerd[1553]: time="2025-11-05T14:58:44.357095089Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 14:58:44.358929 containerd[1553]: time="2025-11-05T14:58:44.357105197Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 14:58:44.358929 containerd[1553]: time="2025-11-05T14:58:44.357112817Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 5 14:58:44.358929 containerd[1553]: time="2025-11-05T14:58:44.357126035Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 5 14:58:44.358929 containerd[1553]: time="2025-11-05T14:58:44.357136415Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 5 14:58:44.358929 containerd[1553]: time="2025-11-05T14:58:44.357231817Z" level=info msg="runtime interface created"
Nov 5 14:58:44.358929 containerd[1553]: time="2025-11-05T14:58:44.357238932Z" level=info msg="created NRI interface"
Nov 5 14:58:44.358929 containerd[1553]: time="2025-11-05T14:58:44.357246902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 5 14:58:44.358929 containerd[1553]: time="2025-11-05T14:58:44.357257282Z" level=info msg="Connect containerd service"
Nov 5 14:58:44.358929 containerd[1553]: time="2025-11-05T14:58:44.357282085Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 5 14:58:44.358929 containerd[1553]: time="2025-11-05T14:58:44.357893688Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 5 14:58:44.433087 containerd[1553]: time="2025-11-05T14:58:44.432754401Z" level=info msg="Start subscribing containerd event"
Nov 5 14:58:44.433087 containerd[1553]: time="2025-11-05T14:58:44.432844983Z" level=info msg="Start recovering state"
Nov 5 14:58:44.433087 containerd[1553]: time="2025-11-05T14:58:44.432938170Z" level=info msg="Start event monitor"
Nov 5 14:58:44.433087 containerd[1553]: time="2025-11-05T14:58:44.432960407Z" level=info msg="Start cni network conf syncer for default"
Nov 5 14:58:44.433087 containerd[1553]: time="2025-11-05T14:58:44.432968960Z" level=info msg="Start streaming server"
Nov 5 14:58:44.433087 containerd[1553]: time="2025-11-05T14:58:44.432981050Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 5 14:58:44.433087 containerd[1553]: time="2025-11-05T14:58:44.432989253Z" level=info msg="runtime interface starting up..."
Nov 5 14:58:44.433087 containerd[1553]: time="2025-11-05T14:58:44.432994307Z" level=info msg="starting plugins..."
Nov 5 14:58:44.433087 containerd[1553]: time="2025-11-05T14:58:44.433009974Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 5 14:58:44.433342 containerd[1553]: time="2025-11-05T14:58:44.433158793Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 5 14:58:44.433342 containerd[1553]: time="2025-11-05T14:58:44.433234913Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 5 14:58:44.433342 containerd[1553]: time="2025-11-05T14:58:44.433292722Z" level=info msg="containerd successfully booted in 0.097627s"
Nov 5 14:58:44.433438 systemd[1]: Started containerd.service - containerd container runtime.
Nov 5 14:58:44.505663 tar[1551]: linux-arm64/README.md
Nov 5 14:58:44.522110 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 5 14:58:45.331418 systemd-networkd[1463]: eth0: Gained IPv6LL
Nov 5 14:58:45.335265 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 5 14:58:45.336594 systemd[1]: Reached target network-online.target - Network is Online.
Nov 5 14:58:45.339504 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 5 14:58:45.341778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 14:58:45.351621 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 5 14:58:45.366786 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 5 14:58:45.366973 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Nov 5 14:58:45.368255 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 5 14:58:45.375883 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 5 14:58:45.666138 sshd_keygen[1546]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 5 14:58:45.685038 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 5 14:58:45.687840 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 5 14:58:45.721125 systemd[1]: issuegen.service: Deactivated successfully.
Nov 5 14:58:45.721399 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 5 14:58:45.725771 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 5 14:58:45.743983 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 5 14:58:45.746522 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 5 14:58:45.748354 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Nov 5 14:58:45.749771 systemd[1]: Reached target getty.target - Login Prompts.
Nov 5 14:58:45.887561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 14:58:45.888991 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 5 14:58:45.890048 systemd[1]: Startup finished in 1.169s (kernel) + 5.374s (initrd) + 3.454s (userspace) = 9.997s.
Nov 5 14:58:45.897528 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 14:58:46.233570 kubelet[1658]: E1105 14:58:46.233487 1658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 14:58:46.235003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 14:58:46.235261 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 14:58:46.235608 systemd[1]: kubelet.service: Consumed 740ms CPU time, 256.1M memory peak.
Nov 5 14:58:48.145605 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 5 14:58:48.146721 systemd[1]: Started sshd@0-10.0.0.21:22-10.0.0.1:60798.service - OpenSSH per-connection server daemon (10.0.0.1:60798).
Nov 5 14:58:48.223708 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 60798 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI
Nov 5 14:58:48.228317 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 14:58:48.234486 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 5 14:58:48.235351 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 5 14:58:48.242727 systemd-logind[1534]: New session 1 of user core.
Nov 5 14:58:48.271061 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 5 14:58:48.274127 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 5 14:58:48.292281 (systemd)[1677]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 5 14:58:48.295933 systemd-logind[1534]: New session c1 of user core.
Nov 5 14:58:48.401661 systemd[1677]: Queued start job for default target default.target.
Nov 5 14:58:48.421705 systemd[1677]: Created slice app.slice - User Application Slice.
Nov 5 14:58:48.422038 systemd[1677]: Reached target paths.target - Paths.
Nov 5 14:58:48.422154 systemd[1677]: Reached target timers.target - Timers.
Nov 5 14:58:48.423856 systemd[1677]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 5 14:58:48.435737 systemd[1677]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 5 14:58:48.435880 systemd[1677]: Reached target sockets.target - Sockets.
Nov 5 14:58:48.436019 systemd[1677]: Reached target basic.target - Basic System.
Nov 5 14:58:48.436115 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 5 14:58:48.436279 systemd[1677]: Reached target default.target - Main User Target.
Nov 5 14:58:48.436318 systemd[1677]: Startup finished in 133ms.
Nov 5 14:58:48.437607 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 5 14:58:48.498842 systemd[1]: Started sshd@1-10.0.0.21:22-10.0.0.1:60812.service - OpenSSH per-connection server daemon (10.0.0.1:60812).
Nov 5 14:58:48.574857 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 60812 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI
Nov 5 14:58:48.576439 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 14:58:48.582713 systemd-logind[1534]: New session 2 of user core.
Nov 5 14:58:48.589438 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 5 14:58:48.641739 sshd[1691]: Connection closed by 10.0.0.1 port 60812
Nov 5 14:58:48.642072 sshd-session[1688]: pam_unix(sshd:session): session closed for user core
Nov 5 14:58:48.652331 systemd[1]: sshd@1-10.0.0.21:22-10.0.0.1:60812.service: Deactivated successfully.
Nov 5 14:58:48.654399 systemd[1]: session-2.scope: Deactivated successfully.
Nov 5 14:58:48.657458 systemd-logind[1534]: Session 2 logged out. Waiting for processes to exit.
Nov 5 14:58:48.659766 systemd[1]: Started sshd@2-10.0.0.21:22-10.0.0.1:60820.service - OpenSSH per-connection server daemon (10.0.0.1:60820).
Nov 5 14:58:48.662368 systemd-logind[1534]: Removed session 2.
Nov 5 14:58:48.723274 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 60820 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI
Nov 5 14:58:48.724735 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 14:58:48.730989 systemd-logind[1534]: New session 3 of user core.
Nov 5 14:58:48.746408 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 5 14:58:48.795390 sshd[1700]: Connection closed by 10.0.0.1 port 60820
Nov 5 14:58:48.796802 sshd-session[1697]: pam_unix(sshd:session): session closed for user core
Nov 5 14:58:48.806511 systemd[1]: sshd@2-10.0.0.21:22-10.0.0.1:60820.service: Deactivated successfully.
Nov 5 14:58:48.809365 systemd[1]: session-3.scope: Deactivated successfully.
Nov 5 14:58:48.810951 systemd-logind[1534]: Session 3 logged out. Waiting for processes to exit.
Nov 5 14:58:48.813041 systemd[1]: Started sshd@3-10.0.0.21:22-10.0.0.1:60830.service - OpenSSH per-connection server daemon (10.0.0.1:60830).
Nov 5 14:58:48.815336 systemd-logind[1534]: Removed session 3.
Nov 5 14:58:48.878654 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 60830 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI
Nov 5 14:58:48.880476 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 14:58:48.885473 systemd-logind[1534]: New session 4 of user core.
Nov 5 14:58:48.898403 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 5 14:58:48.949118 sshd[1709]: Connection closed by 10.0.0.1 port 60830
Nov 5 14:58:48.950133 sshd-session[1706]: pam_unix(sshd:session): session closed for user core
Nov 5 14:58:48.961364 systemd[1]: sshd@3-10.0.0.21:22-10.0.0.1:60830.service: Deactivated successfully.
Nov 5 14:58:48.965732 systemd[1]: session-4.scope: Deactivated successfully.
Nov 5 14:58:48.966935 systemd-logind[1534]: Session 4 logged out. Waiting for processes to exit.
Nov 5 14:58:48.974642 systemd[1]: Started sshd@4-10.0.0.21:22-10.0.0.1:60834.service - OpenSSH per-connection server daemon (10.0.0.1:60834).
Nov 5 14:58:48.975638 systemd-logind[1534]: Removed session 4.
Nov 5 14:58:49.036424 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 60834 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI
Nov 5 14:58:49.037828 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 14:58:49.042328 systemd-logind[1534]: New session 5 of user core.
Nov 5 14:58:49.056434 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 5 14:58:49.122976 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 5 14:58:49.123703 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 14:58:49.139363 sudo[1719]: pam_unix(sudo:session): session closed for user root
Nov 5 14:58:49.142230 sshd[1718]: Connection closed by 10.0.0.1 port 60834
Nov 5 14:58:49.142839 sshd-session[1715]: pam_unix(sshd:session): session closed for user core
Nov 5 14:58:49.157303 systemd[1]: sshd@4-10.0.0.21:22-10.0.0.1:60834.service: Deactivated successfully.
Nov 5 14:58:49.159062 systemd[1]: session-5.scope: Deactivated successfully.
Nov 5 14:58:49.160835 systemd-logind[1534]: Session 5 logged out. Waiting for processes to exit.
Nov 5 14:58:49.163644 systemd[1]: Started sshd@5-10.0.0.21:22-10.0.0.1:60840.service - OpenSSH per-connection server daemon (10.0.0.1:60840).
Nov 5 14:58:49.164516 systemd-logind[1534]: Removed session 5.
Nov 5 14:58:49.246360 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 60840 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI
Nov 5 14:58:49.247707 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 14:58:49.252283 systemd-logind[1534]: New session 6 of user core.
Nov 5 14:58:49.260442 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 5 14:58:49.313334 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 5 14:58:49.313610 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 14:58:49.319694 sudo[1730]: pam_unix(sudo:session): session closed for user root
Nov 5 14:58:49.327289 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 5 14:58:49.327587 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 14:58:49.336922 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 14:58:49.371698 augenrules[1752]: No rules
Nov 5 14:58:49.372864 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 14:58:49.373079 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 14:58:49.375268 sudo[1729]: pam_unix(sudo:session): session closed for user root
Nov 5 14:58:49.376839 sshd[1728]: Connection closed by 10.0.0.1 port 60840
Nov 5 14:58:49.377372 sshd-session[1725]: pam_unix(sshd:session): session closed for user core
Nov 5 14:58:49.385437 systemd[1]: sshd@5-10.0.0.21:22-10.0.0.1:60840.service: Deactivated successfully.
Nov 5 14:58:49.386938 systemd[1]: session-6.scope: Deactivated successfully.
Nov 5 14:58:49.389097 systemd-logind[1534]: Session 6 logged out. Waiting for processes to exit.
Nov 5 14:58:49.390610 systemd[1]: Started sshd@6-10.0.0.21:22-10.0.0.1:42124.service - OpenSSH per-connection server daemon (10.0.0.1:42124).
Nov 5 14:58:49.391908 systemd-logind[1534]: Removed session 6.
Nov 5 14:58:49.449047 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 42124 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI
Nov 5 14:58:49.450803 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 14:58:49.455266 systemd-logind[1534]: New session 7 of user core.
Nov 5 14:58:49.465383 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 5 14:58:49.518084 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 5 14:58:49.518400 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 14:58:49.811429 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 5 14:58:49.833529 (dockerd)[1786]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 5 14:58:50.031303 dockerd[1786]: time="2025-11-05T14:58:50.031231017Z" level=info msg="Starting up"
Nov 5 14:58:50.032123 dockerd[1786]: time="2025-11-05T14:58:50.032099647Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 5 14:58:50.043002 dockerd[1786]: time="2025-11-05T14:58:50.042955440Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 5 14:58:50.213738 dockerd[1786]: time="2025-11-05T14:58:50.213627333Z" level=info msg="Loading containers: start."
Nov 5 14:58:50.223230 kernel: Initializing XFRM netlink socket
Nov 5 14:58:50.427019 systemd-networkd[1463]: docker0: Link UP
Nov 5 14:58:50.432112 dockerd[1786]: time="2025-11-05T14:58:50.432064676Z" level=info msg="Loading containers: done."
Nov 5 14:58:50.449530 dockerd[1786]: time="2025-11-05T14:58:50.449473638Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 5 14:58:50.449676 dockerd[1786]: time="2025-11-05T14:58:50.449561634Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 5 14:58:50.449676 dockerd[1786]: time="2025-11-05T14:58:50.449647814Z" level=info msg="Initializing buildkit"
Nov 5 14:58:50.477671 dockerd[1786]: time="2025-11-05T14:58:50.477561965Z" level=info msg="Completed buildkit initialization"
Nov 5 14:58:50.484699 dockerd[1786]: time="2025-11-05T14:58:50.484652117Z" level=info msg="Daemon has completed initialization"
Nov 5 14:58:50.484960 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 5 14:58:50.486029 dockerd[1786]: time="2025-11-05T14:58:50.484733992Z" level=info msg="API listen on /run/docker.sock"
Nov 5 14:58:51.093078 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2946709182-merged.mount: Deactivated successfully.
Nov 5 14:58:51.113627 containerd[1553]: time="2025-11-05T14:58:51.113356928Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 5 14:58:51.725508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3788196283.mount: Deactivated successfully.
Nov 5 14:58:52.750609 containerd[1553]: time="2025-11-05T14:58:52.750546825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:52.751688 containerd[1553]: time="2025-11-05T14:58:52.751045049Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687"
Nov 5 14:58:52.752737 containerd[1553]: time="2025-11-05T14:58:52.752151584Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:52.754764 containerd[1553]: time="2025-11-05T14:58:52.754732866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:52.755918 containerd[1553]: time="2025-11-05T14:58:52.755876123Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.64247642s"
Nov 5 14:58:52.755918 containerd[1553]: time="2025-11-05T14:58:52.755914628Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\""
Nov 5 14:58:52.756450 containerd[1553]: time="2025-11-05T14:58:52.756427628Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 5 14:58:54.046763 containerd[1553]: time="2025-11-05T14:58:54.046697720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:54.047250 containerd[1553]: time="2025-11-05T14:58:54.047218840Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202"
Nov 5 14:58:54.048238 containerd[1553]: time="2025-11-05T14:58:54.048184332Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:54.051298 containerd[1553]: time="2025-11-05T14:58:54.051263170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:54.052924 containerd[1553]: time="2025-11-05T14:58:54.052888827Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.296432934s"
Nov 5 14:58:54.052924 containerd[1553]: time="2025-11-05T14:58:54.052921583Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\""
Nov 5 14:58:54.053496 containerd[1553]: time="2025-11-05T14:58:54.053465692Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 5 14:58:55.179378 containerd[1553]: time="2025-11-05T14:58:55.179332246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:55.180226 containerd[1553]: time="2025-11-05T14:58:55.179799962Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326"
Nov 5 14:58:55.180911 containerd[1553]: time="2025-11-05T14:58:55.180877032Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:55.183549 containerd[1553]: time="2025-11-05T14:58:55.183513990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:55.184693 containerd[1553]: time="2025-11-05T14:58:55.184569083Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.131072413s"
Nov 5 14:58:55.184693 containerd[1553]: time="2025-11-05T14:58:55.184606003Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\""
Nov 5 14:58:55.185032 containerd[1553]: time="2025-11-05T14:58:55.184989785Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 5 14:58:56.145751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount281867078.mount: Deactivated successfully.
Nov 5 14:58:56.485579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 5 14:58:56.487152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 14:58:56.533434 containerd[1553]: time="2025-11-05T14:58:56.533380596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:56.534751 containerd[1553]: time="2025-11-05T14:58:56.534722998Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819"
Nov 5 14:58:56.536934 containerd[1553]: time="2025-11-05T14:58:56.536656715Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:56.559721 containerd[1553]: time="2025-11-05T14:58:56.559664396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:56.560715 containerd[1553]: time="2025-11-05T14:58:56.560676835Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.375658669s"
Nov 5 14:58:56.561046 containerd[1553]: time="2025-11-05T14:58:56.560793452Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\""
Nov 5 14:58:56.561898 containerd[1553]: time="2025-11-05T14:58:56.561877006Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 5 14:58:56.615397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 14:58:56.619879 (kubelet)[2089]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 14:58:56.658599 kubelet[2089]: E1105 14:58:56.658542 2089 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 14:58:56.661804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 14:58:56.661940 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 14:58:56.662235 systemd[1]: kubelet.service: Consumed 151ms CPU time, 109.2M memory peak.
Nov 5 14:58:57.189109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount402089384.mount: Deactivated successfully.
Nov 5 14:58:57.902225 containerd[1553]: time="2025-11-05T14:58:57.901304258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:57.902225 containerd[1553]: time="2025-11-05T14:58:57.901837008Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Nov 5 14:58:57.902871 containerd[1553]: time="2025-11-05T14:58:57.902839977Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:57.905444 containerd[1553]: time="2025-11-05T14:58:57.905410150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:57.907373 containerd[1553]: time="2025-11-05T14:58:57.907335451Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.345355067s"
Nov 5 14:58:57.907423 containerd[1553]: time="2025-11-05T14:58:57.907380666Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Nov 5 14:58:57.907923 containerd[1553]: time="2025-11-05T14:58:57.907898211Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 5 14:58:58.341108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount294038113.mount: Deactivated successfully.
Nov 5 14:58:58.344982 containerd[1553]: time="2025-11-05T14:58:58.344941821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 14:58:58.345665 containerd[1553]: time="2025-11-05T14:58:58.345450448Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Nov 5 14:58:58.346371 containerd[1553]: time="2025-11-05T14:58:58.346335437Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 14:58:58.348867 containerd[1553]: time="2025-11-05T14:58:58.348837272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 14:58:58.349576 containerd[1553]: time="2025-11-05T14:58:58.349403091Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 441.477014ms"
Nov 5 14:58:58.349576 containerd[1553]: time="2025-11-05T14:58:58.349433558Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Nov 5 14:58:58.349947 containerd[1553]: time="2025-11-05T14:58:58.349920201Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 5 14:58:58.896719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3062427917.mount: Deactivated successfully.
Nov 5 14:59:01.036356 containerd[1553]: time="2025-11-05T14:59:01.036293721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:59:01.038437 containerd[1553]: time="2025-11-05T14:59:01.038399312Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Nov 5 14:59:01.039276 containerd[1553]: time="2025-11-05T14:59:01.039226744Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:59:01.042852 containerd[1553]: time="2025-11-05T14:59:01.042199173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:59:01.044188 containerd[1553]: time="2025-11-05T14:59:01.044153326Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.694200695s"
Nov 5 14:59:01.044393 containerd[1553]: time="2025-11-05T14:59:01.044296309Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Nov 5 14:59:06.151277 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 14:59:06.151433 systemd[1]: kubelet.service: Consumed 151ms CPU time, 109.2M memory peak.
Nov 5 14:59:06.153537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 14:59:06.176641 systemd[1]: Reload requested from client PID 2236 ('systemctl') (unit session-7.scope)...
Nov 5 14:59:06.176657 systemd[1]: Reloading...
Nov 5 14:59:06.249354 zram_generator::config[2280]: No configuration found.
Nov 5 14:59:06.455294 systemd[1]: Reloading finished in 278 ms.
Nov 5 14:59:06.510030 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 14:59:06.513406 systemd[1]: kubelet.service: Deactivated successfully.
Nov 5 14:59:06.513820 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 14:59:06.513932 systemd[1]: kubelet.service: Consumed 101ms CPU time, 95.3M memory peak.
Nov 5 14:59:06.516689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 14:59:06.643996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 14:59:06.665606 (kubelet)[2328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 5 14:59:06.703055 kubelet[2328]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 14:59:06.703055 kubelet[2328]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 5 14:59:06.703055 kubelet[2328]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 14:59:06.704177 kubelet[2328]: I1105 14:59:06.704110 2328 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 5 14:59:07.583905 kubelet[2328]: I1105 14:59:07.583855 2328 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 5 14:59:07.583905 kubelet[2328]: I1105 14:59:07.583889 2328 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 5 14:59:07.584178 kubelet[2328]: I1105 14:59:07.584150 2328 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 5 14:59:07.609058 kubelet[2328]: E1105 14:59:07.608539 2328 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError"
Nov 5 14:59:07.610659 kubelet[2328]: I1105 14:59:07.610628 2328 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 14:59:07.616123 kubelet[2328]: I1105 14:59:07.616106 2328 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 5 14:59:07.618945 kubelet[2328]: I1105 14:59:07.618918 2328 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 5 14:59:07.619707 kubelet[2328]: I1105 14:59:07.619666 2328 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 5 14:59:07.619971 kubelet[2328]: I1105 14:59:07.619797 2328 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 5 14:59:07.620183 kubelet[2328]: I1105 14:59:07.620168 2328 topology_manager.go:138] "Creating topology manager with none policy"
Nov 5 14:59:07.620272 kubelet[2328]: I1105 14:59:07.620262 2328 container_manager_linux.go:304] "Creating device plugin manager"
Nov 5 14:59:07.620521 kubelet[2328]: I1105 14:59:07.620506 2328 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 14:59:07.622973 kubelet[2328]: I1105 14:59:07.622947 2328 kubelet.go:446] "Attempting to sync node with API server"
Nov 5 14:59:07.623077 kubelet[2328]: I1105 14:59:07.623065 2328 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 14:59:07.623154 kubelet[2328]: I1105 14:59:07.623143 2328 kubelet.go:352] "Adding apiserver pod source"
Nov 5 14:59:07.623226 kubelet[2328]: I1105 14:59:07.623215 2328 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 14:59:07.626884 kubelet[2328]: I1105 14:59:07.626857 2328 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 5 14:59:07.627781 kubelet[2328]: W1105 14:59:07.627447 2328 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused
Nov 5 14:59:07.627781 kubelet[2328]: E1105 14:59:07.627511 2328 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError"
Nov 5 14:59:07.627781 kubelet[2328]: W1105 14:59:07.627678 2328 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused
Nov 5 14:59:07.627781 kubelet[2328]: E1105 14:59:07.627727 2328 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError"
Nov 5 14:59:07.627983 kubelet[2328]: I1105 14:59:07.627962 2328 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 5 14:59:07.628099 kubelet[2328]: W1105 14:59:07.628087 2328 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 5 14:59:07.628929 kubelet[2328]: I1105 14:59:07.628914 2328 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 5 14:59:07.628975 kubelet[2328]: I1105 14:59:07.628947 2328 server.go:1287] "Started kubelet"
Nov 5 14:59:07.629956 kubelet[2328]: I1105 14:59:07.629811 2328 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 14:59:07.630159 kubelet[2328]: I1105 14:59:07.630131 2328 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 14:59:07.630250 kubelet[2328]: I1105 14:59:07.630223 2328 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 14:59:07.630496 kubelet[2328]: I1105 14:59:07.630477 2328 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 14:59:07.631059 kubelet[2328]: I1105 14:59:07.631032 2328 server.go:479] "Adding debug handlers to kubelet server"
Nov 5 14:59:07.632605 kubelet[2328]: I1105 14:59:07.632577 2328 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 14:59:07.634000 kubelet[2328]: I1105 14:59:07.633973 2328 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 5 14:59:07.634399 kubelet[2328]: E1105 14:59:07.634373 2328 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 14:59:07.634868 kubelet[2328]: E1105 14:59:07.634833 2328 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="200ms"
Nov 5 14:59:07.635137 kubelet[2328]: I1105 14:59:07.635110 2328 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 5 14:59:07.635180 kubelet[2328]: E1105 14:59:07.634869 2328 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875244e3ca54012 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 14:59:07.628929042 +0000 UTC m=+0.960243474,LastTimestamp:2025-11-05 14:59:07.628929042 +0000 UTC m=+0.960243474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 5 14:59:07.635180 kubelet[2328]: I1105 14:59:07.635172 2328 reconciler.go:26] "Reconciler: start to sync state"
Nov 5 14:59:07.635580 kubelet[2328]: I1105 14:59:07.635532 2328 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 14:59:07.635838 kubelet[2328]: W1105 14:59:07.635799 2328 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused
Nov 5 14:59:07.635880 kubelet[2328]: E1105 14:59:07.635843 2328 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError"
Nov 5 14:59:07.636953 kubelet[2328]: I1105 14:59:07.636927 2328 factory.go:221] Registration of the containerd container factory successfully
Nov 5 14:59:07.636953 kubelet[2328]: I1105 14:59:07.636950 2328 factory.go:221] Registration of the systemd container factory successfully
Nov 5 14:59:07.641893 kubelet[2328]: E1105 14:59:07.641796 2328 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 14:59:07.648116 kubelet[2328]: I1105 14:59:07.648093 2328 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 14:59:07.648511 kubelet[2328]: I1105 14:59:07.648257 2328 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 14:59:07.648511 kubelet[2328]: I1105 14:59:07.648280 2328 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 14:59:07.651466 kubelet[2328]: I1105 14:59:07.651433 2328 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 5 14:59:07.652609 kubelet[2328]: I1105 14:59:07.652587 2328 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 5 14:59:07.652802 kubelet[2328]: I1105 14:59:07.652791 2328 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 5 14:59:07.652875 kubelet[2328]: I1105 14:59:07.652862 2328 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 14:59:07.652932 kubelet[2328]: I1105 14:59:07.652923 2328 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 5 14:59:07.653037 kubelet[2328]: E1105 14:59:07.653019 2328 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 14:59:07.731773 kubelet[2328]: W1105 14:59:07.731707 2328 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused
Nov 5 14:59:07.732109 kubelet[2328]: E1105 14:59:07.731786 2328 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError"
Nov 5 14:59:07.732488 kubelet[2328]: I1105 14:59:07.732153 2328 policy_none.go:49] "None policy: Start"
Nov 5 14:59:07.732488 kubelet[2328]: I1105 14:59:07.732183 2328 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 5 14:59:07.732488 kubelet[2328]: I1105 14:59:07.732196 2328 state_mem.go:35] "Initializing new in-memory state store"
Nov 5 14:59:07.734629 kubelet[2328]: E1105 14:59:07.734607 2328 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 14:59:07.736945 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 5 14:59:07.748657 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 5 14:59:07.751790 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 5 14:59:07.753774 kubelet[2328]: E1105 14:59:07.753748 2328 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 5 14:59:07.772244 kubelet[2328]: I1105 14:59:07.772010 2328 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 5 14:59:07.772510 kubelet[2328]: I1105 14:59:07.772379 2328 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 14:59:07.772510 kubelet[2328]: I1105 14:59:07.772396 2328 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 14:59:07.772582 kubelet[2328]: I1105 14:59:07.772571 2328 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 14:59:07.773462 kubelet[2328]: E1105 14:59:07.773439 2328 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" Nov 5 14:59:07.773535 kubelet[2328]: E1105 14:59:07.773480 2328 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 5 14:59:07.835818 kubelet[2328]: E1105 14:59:07.835685 2328 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="400ms" Nov 5 14:59:07.873924 kubelet[2328]: I1105 14:59:07.873863 2328 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 14:59:07.874432 kubelet[2328]: E1105 14:59:07.874374 2328 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Nov 5 14:59:07.961876 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Nov 5 14:59:07.987675 kubelet[2328]: E1105 14:59:07.987651 2328 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:59:07.990198 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. Nov 5 14:59:08.001261 kubelet[2328]: E1105 14:59:08.001235 2328 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:59:08.003536 systemd[1]: Created slice kubepods-burstable-pod8d1777f2632ee8d6e0b7c1c7aa8e83f8.slice - libcontainer container kubepods-burstable-pod8d1777f2632ee8d6e0b7c1c7aa8e83f8.slice. 
Nov 5 14:59:08.005462 kubelet[2328]: E1105 14:59:08.005439 2328 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:59:08.076182 kubelet[2328]: I1105 14:59:08.076151 2328 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 14:59:08.076740 kubelet[2328]: E1105 14:59:08.076709 2328 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Nov 5 14:59:08.136686 kubelet[2328]: I1105 14:59:08.136502 2328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d1777f2632ee8d6e0b7c1c7aa8e83f8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8d1777f2632ee8d6e0b7c1c7aa8e83f8\") " pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:08.136686 kubelet[2328]: I1105 14:59:08.136541 2328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:08.136686 kubelet[2328]: I1105 14:59:08.136562 2328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:08.136686 kubelet[2328]: I1105 14:59:08.136588 2328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:08.136686 kubelet[2328]: I1105 14:59:08.136663 2328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 5 14:59:08.136932 kubelet[2328]: I1105 14:59:08.136704 2328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d1777f2632ee8d6e0b7c1c7aa8e83f8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8d1777f2632ee8d6e0b7c1c7aa8e83f8\") " pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:08.136932 kubelet[2328]: I1105 14:59:08.136729 2328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d1777f2632ee8d6e0b7c1c7aa8e83f8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8d1777f2632ee8d6e0b7c1c7aa8e83f8\") " pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:08.136932 kubelet[2328]: I1105 14:59:08.136745 2328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:08.136932 kubelet[2328]: I1105 14:59:08.136761 2328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:08.237069 kubelet[2328]: E1105 14:59:08.237002 2328 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="800ms" Nov 5 14:59:08.288467 kubelet[2328]: E1105 14:59:08.288388 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:08.289020 containerd[1553]: time="2025-11-05T14:59:08.288986335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 5 14:59:08.302585 kubelet[2328]: E1105 14:59:08.302326 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:08.304705 containerd[1553]: time="2025-11-05T14:59:08.304372221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 5 14:59:08.306782 kubelet[2328]: E1105 14:59:08.306737 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:08.307184 containerd[1553]: time="2025-11-05T14:59:08.307134869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8d1777f2632ee8d6e0b7c1c7aa8e83f8,Namespace:kube-system,Attempt:0,}" Nov 5 14:59:08.311253 containerd[1553]: 
time="2025-11-05T14:59:08.311193479Z" level=info msg="connecting to shim 6f7f9562fd2681e949369a0ace4900f24ad92f72b9c32dbfcafb072d90ea0d2f" address="unix:///run/containerd/s/d1fecc36ca0874be68208b3ab59bcd3848fdd513dbc7005f2710f9b47b16064c" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:08.350530 systemd[1]: Started cri-containerd-6f7f9562fd2681e949369a0ace4900f24ad92f72b9c32dbfcafb072d90ea0d2f.scope - libcontainer container 6f7f9562fd2681e949369a0ace4900f24ad92f72b9c32dbfcafb072d90ea0d2f. Nov 5 14:59:08.356771 containerd[1553]: time="2025-11-05T14:59:08.356710507Z" level=info msg="connecting to shim c07505fa6583931514f8fed7b7928492844cf318bfc3be8bed0243abc96182e0" address="unix:///run/containerd/s/713c7e4cc67be5423cb9f0709a701fe57a83eddb6975a1efa2a1a2ed0d16cf50" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:08.365387 containerd[1553]: time="2025-11-05T14:59:08.365335747Z" level=info msg="connecting to shim 81ecb0fd0c269e4ddf604dbb0e560676bbb92df3c44a961245f4f15f743ab009" address="unix:///run/containerd/s/9a3ed394753ad68e9da8462369546648058e050d6a374276c067c17fa7f6099f" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:08.383527 systemd[1]: Started cri-containerd-c07505fa6583931514f8fed7b7928492844cf318bfc3be8bed0243abc96182e0.scope - libcontainer container c07505fa6583931514f8fed7b7928492844cf318bfc3be8bed0243abc96182e0. Nov 5 14:59:08.387714 systemd[1]: Started cri-containerd-81ecb0fd0c269e4ddf604dbb0e560676bbb92df3c44a961245f4f15f743ab009.scope - libcontainer container 81ecb0fd0c269e4ddf604dbb0e560676bbb92df3c44a961245f4f15f743ab009. 
Nov 5 14:59:08.405974 containerd[1553]: time="2025-11-05T14:59:08.405776847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f7f9562fd2681e949369a0ace4900f24ad92f72b9c32dbfcafb072d90ea0d2f\"" Nov 5 14:59:08.407170 kubelet[2328]: E1105 14:59:08.407086 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:08.410346 containerd[1553]: time="2025-11-05T14:59:08.410304561Z" level=info msg="CreateContainer within sandbox \"6f7f9562fd2681e949369a0ace4900f24ad92f72b9c32dbfcafb072d90ea0d2f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 14:59:08.421618 containerd[1553]: time="2025-11-05T14:59:08.421567551Z" level=info msg="Container 74698359a27bdc1fef8bc7d49efa5b44ff1aca8468997d3c41ddc8eeae47a95b: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:08.430717 containerd[1553]: time="2025-11-05T14:59:08.430675679Z" level=info msg="CreateContainer within sandbox \"6f7f9562fd2681e949369a0ace4900f24ad92f72b9c32dbfcafb072d90ea0d2f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"74698359a27bdc1fef8bc7d49efa5b44ff1aca8468997d3c41ddc8eeae47a95b\"" Nov 5 14:59:08.430899 containerd[1553]: time="2025-11-05T14:59:08.430849521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8d1777f2632ee8d6e0b7c1c7aa8e83f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"81ecb0fd0c269e4ddf604dbb0e560676bbb92df3c44a961245f4f15f743ab009\"" Nov 5 14:59:08.431841 containerd[1553]: time="2025-11-05T14:59:08.431810904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"c07505fa6583931514f8fed7b7928492844cf318bfc3be8bed0243abc96182e0\"" Nov 5 14:59:08.431959 containerd[1553]: time="2025-11-05T14:59:08.431939557Z" level=info msg="StartContainer for \"74698359a27bdc1fef8bc7d49efa5b44ff1aca8468997d3c41ddc8eeae47a95b\"" Nov 5 14:59:08.432499 kubelet[2328]: E1105 14:59:08.432459 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:08.432651 kubelet[2328]: E1105 14:59:08.432631 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:08.433724 containerd[1553]: time="2025-11-05T14:59:08.433698151Z" level=info msg="CreateContainer within sandbox \"81ecb0fd0c269e4ddf604dbb0e560676bbb92df3c44a961245f4f15f743ab009\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 14:59:08.434034 containerd[1553]: time="2025-11-05T14:59:08.433780337Z" level=info msg="connecting to shim 74698359a27bdc1fef8bc7d49efa5b44ff1aca8468997d3c41ddc8eeae47a95b" address="unix:///run/containerd/s/d1fecc36ca0874be68208b3ab59bcd3848fdd513dbc7005f2710f9b47b16064c" protocol=ttrpc version=3 Nov 5 14:59:08.434290 containerd[1553]: time="2025-11-05T14:59:08.434260589Z" level=info msg="CreateContainer within sandbox \"c07505fa6583931514f8fed7b7928492844cf318bfc3be8bed0243abc96182e0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 14:59:08.441825 containerd[1553]: time="2025-11-05T14:59:08.441703178Z" level=info msg="Container ee7e2e46a52d5acbd134ac0b4743cc8ee10e9006b8545e1c3fcd6c12317c2817: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:08.449595 containerd[1553]: time="2025-11-05T14:59:08.449549825Z" level=info msg="Container 03547ef090e2031866691e92b6fcd436026847323673d932d5cb9075bff604f7: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:08.451388 
systemd[1]: Started cri-containerd-74698359a27bdc1fef8bc7d49efa5b44ff1aca8468997d3c41ddc8eeae47a95b.scope - libcontainer container 74698359a27bdc1fef8bc7d49efa5b44ff1aca8468997d3c41ddc8eeae47a95b. Nov 5 14:59:08.454661 containerd[1553]: time="2025-11-05T14:59:08.454619801Z" level=info msg="CreateContainer within sandbox \"c07505fa6583931514f8fed7b7928492844cf318bfc3be8bed0243abc96182e0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ee7e2e46a52d5acbd134ac0b4743cc8ee10e9006b8545e1c3fcd6c12317c2817\"" Nov 5 14:59:08.456247 containerd[1553]: time="2025-11-05T14:59:08.456220255Z" level=info msg="StartContainer for \"ee7e2e46a52d5acbd134ac0b4743cc8ee10e9006b8545e1c3fcd6c12317c2817\"" Nov 5 14:59:08.457378 containerd[1553]: time="2025-11-05T14:59:08.457282403Z" level=info msg="CreateContainer within sandbox \"81ecb0fd0c269e4ddf604dbb0e560676bbb92df3c44a961245f4f15f743ab009\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"03547ef090e2031866691e92b6fcd436026847323673d932d5cb9075bff604f7\"" Nov 5 14:59:08.457653 containerd[1553]: time="2025-11-05T14:59:08.457624013Z" level=info msg="StartContainer for \"03547ef090e2031866691e92b6fcd436026847323673d932d5cb9075bff604f7\"" Nov 5 14:59:08.457770 containerd[1553]: time="2025-11-05T14:59:08.457633043Z" level=info msg="connecting to shim ee7e2e46a52d5acbd134ac0b4743cc8ee10e9006b8545e1c3fcd6c12317c2817" address="unix:///run/containerd/s/713c7e4cc67be5423cb9f0709a701fe57a83eddb6975a1efa2a1a2ed0d16cf50" protocol=ttrpc version=3 Nov 5 14:59:08.458603 containerd[1553]: time="2025-11-05T14:59:08.458578524Z" level=info msg="connecting to shim 03547ef090e2031866691e92b6fcd436026847323673d932d5cb9075bff604f7" address="unix:///run/containerd/s/9a3ed394753ad68e9da8462369546648058e050d6a374276c067c17fa7f6099f" protocol=ttrpc version=3 Nov 5 14:59:08.478378 kubelet[2328]: I1105 14:59:08.478261 2328 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 
14:59:08.479080 kubelet[2328]: E1105 14:59:08.479011 2328 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Nov 5 14:59:08.480535 systemd[1]: Started cri-containerd-03547ef090e2031866691e92b6fcd436026847323673d932d5cb9075bff604f7.scope - libcontainer container 03547ef090e2031866691e92b6fcd436026847323673d932d5cb9075bff604f7. Nov 5 14:59:08.481491 systemd[1]: Started cri-containerd-ee7e2e46a52d5acbd134ac0b4743cc8ee10e9006b8545e1c3fcd6c12317c2817.scope - libcontainer container ee7e2e46a52d5acbd134ac0b4743cc8ee10e9006b8545e1c3fcd6c12317c2817. Nov 5 14:59:08.506635 kubelet[2328]: W1105 14:59:08.506580 2328 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Nov 5 14:59:08.506726 kubelet[2328]: E1105 14:59:08.506647 2328 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Nov 5 14:59:08.508214 containerd[1553]: time="2025-11-05T14:59:08.508176377Z" level=info msg="StartContainer for \"74698359a27bdc1fef8bc7d49efa5b44ff1aca8468997d3c41ddc8eeae47a95b\" returns successfully" Nov 5 14:59:08.532699 containerd[1553]: time="2025-11-05T14:59:08.532662880Z" level=info msg="StartContainer for \"03547ef090e2031866691e92b6fcd436026847323673d932d5cb9075bff604f7\" returns successfully" Nov 5 14:59:08.545272 containerd[1553]: time="2025-11-05T14:59:08.545234497Z" level=info msg="StartContainer for \"ee7e2e46a52d5acbd134ac0b4743cc8ee10e9006b8545e1c3fcd6c12317c2817\" returns 
successfully" Nov 5 14:59:08.659663 kubelet[2328]: E1105 14:59:08.659474 2328 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:59:08.659663 kubelet[2328]: E1105 14:59:08.659604 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:08.663569 kubelet[2328]: E1105 14:59:08.663536 2328 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:59:08.663654 kubelet[2328]: E1105 14:59:08.663646 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:08.668790 kubelet[2328]: E1105 14:59:08.668764 2328 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:59:08.669143 kubelet[2328]: E1105 14:59:08.669084 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:09.280893 kubelet[2328]: I1105 14:59:09.280862 2328 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 14:59:09.670688 kubelet[2328]: E1105 14:59:09.670593 2328 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:59:09.670794 kubelet[2328]: E1105 14:59:09.670743 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:09.670958 kubelet[2328]: E1105 
14:59:09.670936 2328 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:59:09.671038 kubelet[2328]: E1105 14:59:09.671019 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:10.185026 kubelet[2328]: E1105 14:59:10.184972 2328 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 5 14:59:10.314548 kubelet[2328]: E1105 14:59:10.314429 2328 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1875244e3ca54012 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 14:59:07.628929042 +0000 UTC m=+0.960243474,LastTimestamp:2025-11-05 14:59:07.628929042 +0000 UTC m=+0.960243474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 5 14:59:10.363305 kubelet[2328]: I1105 14:59:10.363267 2328 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 14:59:10.363428 kubelet[2328]: E1105 14:59:10.363316 2328 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 5 14:59:10.378873 kubelet[2328]: E1105 14:59:10.378782 2328 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:59:10.479007 kubelet[2328]: E1105 14:59:10.478878 2328 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:59:10.579981 kubelet[2328]: E1105 14:59:10.579934 2328 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:59:10.680833 kubelet[2328]: E1105 14:59:10.680791 2328 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:59:10.777272 kubelet[2328]: E1105 14:59:10.776778 2328 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:59:10.777272 kubelet[2328]: E1105 14:59:10.776888 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:10.780898 kubelet[2328]: E1105 14:59:10.780872 2328 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:59:10.881943 kubelet[2328]: E1105 14:59:10.881903 2328 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:59:10.982519 kubelet[2328]: E1105 14:59:10.982464 2328 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:59:11.035359 kubelet[2328]: I1105 14:59:11.035245 2328 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:11.043098 kubelet[2328]: E1105 14:59:11.043060 2328 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:11.043098 kubelet[2328]: I1105 14:59:11.043095 2328 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Nov 5 14:59:11.044974 kubelet[2328]: E1105 14:59:11.044771 2328 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 5 14:59:11.044974 kubelet[2328]: I1105 14:59:11.044795 2328 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:11.046304 kubelet[2328]: E1105 14:59:11.046277 2328 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:11.627981 kubelet[2328]: I1105 14:59:11.627934 2328 apiserver.go:52] "Watching apiserver" Nov 5 14:59:11.635392 kubelet[2328]: I1105 14:59:11.635347 2328 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 14:59:12.589037 systemd[1]: Reload requested from client PID 2605 ('systemctl') (unit session-7.scope)... Nov 5 14:59:12.589052 systemd[1]: Reloading... Nov 5 14:59:12.648238 zram_generator::config[2649]: No configuration found. Nov 5 14:59:12.901096 systemd[1]: Reloading finished in 311 ms. Nov 5 14:59:12.926742 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 14:59:12.946769 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 14:59:12.947015 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 14:59:12.947078 systemd[1]: kubelet.service: Consumed 1.355s CPU time, 130.5M memory peak. Nov 5 14:59:12.948993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 5 14:59:13.107577 (kubelet)[2690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 14:59:13.110379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 14:59:13.165139 kubelet[2690]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 14:59:13.165139 kubelet[2690]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 14:59:13.165139 kubelet[2690]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 14:59:13.165139 kubelet[2690]: I1105 14:59:13.165119 2690 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 14:59:13.181805 kubelet[2690]: I1105 14:59:13.181755 2690 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 5 14:59:13.181805 kubelet[2690]: I1105 14:59:13.181786 2690 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 14:59:13.182094 kubelet[2690]: I1105 14:59:13.182064 2690 server.go:954] "Client rotation is on, will bootstrap in background" Nov 5 14:59:13.183946 kubelet[2690]: I1105 14:59:13.183927 2690 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 5 14:59:13.186329 kubelet[2690]: I1105 14:59:13.186183 2690 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 14:59:13.190686 kubelet[2690]: I1105 14:59:13.190665 2690 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 14:59:13.194809 kubelet[2690]: I1105 14:59:13.193882 2690 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 5 14:59:13.194809 kubelet[2690]: I1105 14:59:13.194083 2690 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 14:59:13.194809 kubelet[2690]: I1105 14:59:13.194109 2690 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPol
icyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 14:59:13.194809 kubelet[2690]: I1105 14:59:13.194278 2690 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 14:59:13.194995 kubelet[2690]: I1105 14:59:13.194286 2690 container_manager_linux.go:304] "Creating device plugin manager" Nov 5 14:59:13.194995 kubelet[2690]: I1105 14:59:13.194329 2690 state_mem.go:36] "Initialized new in-memory state store" Nov 5 14:59:13.194995 kubelet[2690]: I1105 14:59:13.194460 2690 kubelet.go:446] "Attempting to sync node with API server" Nov 5 14:59:13.194995 kubelet[2690]: I1105 14:59:13.194471 2690 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 14:59:13.194995 kubelet[2690]: I1105 14:59:13.194491 2690 kubelet.go:352] "Adding apiserver pod source" Nov 5 14:59:13.194995 kubelet[2690]: I1105 14:59:13.194501 2690 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 14:59:13.199644 kubelet[2690]: I1105 14:59:13.196115 2690 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 14:59:13.199644 kubelet[2690]: I1105 14:59:13.196676 2690 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 5 14:59:13.199644 kubelet[2690]: I1105 14:59:13.197154 2690 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 14:59:13.199644 kubelet[2690]: I1105 14:59:13.197180 2690 server.go:1287] "Started kubelet" Nov 5 14:59:13.199862 kubelet[2690]: I1105 14:59:13.199752 2690 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 14:59:13.199931 kubelet[2690]: 
I1105 14:59:13.199906 2690 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 14:59:13.200138 kubelet[2690]: I1105 14:59:13.199990 2690 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 14:59:13.200451 kubelet[2690]: I1105 14:59:13.200293 2690 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 14:59:13.202314 kubelet[2690]: I1105 14:59:13.201192 2690 server.go:479] "Adding debug handlers to kubelet server" Nov 5 14:59:13.205934 kubelet[2690]: I1105 14:59:13.204778 2690 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 14:59:13.207405 kubelet[2690]: I1105 14:59:13.207378 2690 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 14:59:13.209724 kubelet[2690]: I1105 14:59:13.209702 2690 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 14:59:13.209949 kubelet[2690]: I1105 14:59:13.209936 2690 reconciler.go:26] "Reconciler: start to sync state" Nov 5 14:59:13.216257 kubelet[2690]: E1105 14:59:13.216230 2690 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:59:13.216805 kubelet[2690]: E1105 14:59:13.216782 2690 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 14:59:13.220233 kubelet[2690]: I1105 14:59:13.220192 2690 factory.go:221] Registration of the systemd container factory successfully Nov 5 14:59:13.220586 kubelet[2690]: I1105 14:59:13.220557 2690 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 14:59:13.228785 kubelet[2690]: I1105 14:59:13.228746 2690 factory.go:221] Registration of the containerd container factory successfully Nov 5 14:59:13.233574 kubelet[2690]: I1105 14:59:13.233511 2690 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 5 14:59:13.236016 kubelet[2690]: I1105 14:59:13.235715 2690 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 5 14:59:13.236016 kubelet[2690]: I1105 14:59:13.235744 2690 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 5 14:59:13.236837 kubelet[2690]: I1105 14:59:13.235778 2690 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 5 14:59:13.237306 kubelet[2690]: I1105 14:59:13.237287 2690 kubelet.go:2382] "Starting kubelet main sync loop" Nov 5 14:59:13.238058 kubelet[2690]: E1105 14:59:13.237673 2690 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 14:59:13.268888 kubelet[2690]: I1105 14:59:13.268864 2690 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 14:59:13.269643 kubelet[2690]: I1105 14:59:13.269045 2690 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 14:59:13.269643 kubelet[2690]: I1105 14:59:13.269070 2690 state_mem.go:36] "Initialized new in-memory state store" Nov 5 14:59:13.269643 kubelet[2690]: I1105 14:59:13.269243 2690 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 14:59:13.269643 kubelet[2690]: I1105 14:59:13.269255 2690 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 14:59:13.269643 kubelet[2690]: I1105 14:59:13.269272 2690 policy_none.go:49] "None policy: Start" Nov 5 14:59:13.269643 kubelet[2690]: I1105 14:59:13.269282 2690 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 14:59:13.269643 kubelet[2690]: I1105 14:59:13.269291 2690 state_mem.go:35] "Initializing new in-memory state store" Nov 5 14:59:13.269643 kubelet[2690]: I1105 14:59:13.269393 2690 state_mem.go:75] "Updated machine memory state" Nov 5 14:59:13.276195 kubelet[2690]: I1105 14:59:13.276164 2690 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 5 14:59:13.276394 kubelet[2690]: I1105 14:59:13.276367 2690 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 14:59:13.276441 kubelet[2690]: I1105 14:59:13.276386 2690 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 14:59:13.277007 kubelet[2690]: I1105 14:59:13.276980 2690 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Nov 5 14:59:13.277854 kubelet[2690]: E1105 14:59:13.277831 2690 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 14:59:13.339424 kubelet[2690]: I1105 14:59:13.339374 2690 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 14:59:13.339567 kubelet[2690]: I1105 14:59:13.339553 2690 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:13.339755 kubelet[2690]: I1105 14:59:13.339741 2690 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:13.380865 kubelet[2690]: I1105 14:59:13.380837 2690 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 14:59:13.394242 kubelet[2690]: I1105 14:59:13.394175 2690 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 5 14:59:13.394486 kubelet[2690]: I1105 14:59:13.394458 2690 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 14:59:13.411375 kubelet[2690]: I1105 14:59:13.411334 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d1777f2632ee8d6e0b7c1c7aa8e83f8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8d1777f2632ee8d6e0b7c1c7aa8e83f8\") " pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:13.411375 kubelet[2690]: I1105 14:59:13.411372 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:13.411527 
kubelet[2690]: I1105 14:59:13.411393 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 5 14:59:13.411527 kubelet[2690]: I1105 14:59:13.411411 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:13.411527 kubelet[2690]: I1105 14:59:13.411439 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:13.411527 kubelet[2690]: I1105 14:59:13.411454 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:13.411527 kubelet[2690]: I1105 14:59:13.411469 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d1777f2632ee8d6e0b7c1c7aa8e83f8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8d1777f2632ee8d6e0b7c1c7aa8e83f8\") " pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:13.411628 kubelet[2690]: I1105 
14:59:13.411485 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d1777f2632ee8d6e0b7c1c7aa8e83f8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8d1777f2632ee8d6e0b7c1c7aa8e83f8\") " pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:13.411628 kubelet[2690]: I1105 14:59:13.411514 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:13.646783 kubelet[2690]: E1105 14:59:13.646395 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:13.648928 kubelet[2690]: E1105 14:59:13.648556 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:13.648928 kubelet[2690]: E1105 14:59:13.648677 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:14.197604 kubelet[2690]: I1105 14:59:14.197543 2690 apiserver.go:52] "Watching apiserver" Nov 5 14:59:14.209914 kubelet[2690]: I1105 14:59:14.209860 2690 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 14:59:14.252274 kubelet[2690]: E1105 14:59:14.252232 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:14.253308 
kubelet[2690]: E1105 14:59:14.252473 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:14.253464 kubelet[2690]: E1105 14:59:14.253393 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:14.278381 kubelet[2690]: I1105 14:59:14.278252 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.278197328 podStartE2EDuration="1.278197328s" podCreationTimestamp="2025-11-05 14:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 14:59:14.278129843 +0000 UTC m=+1.160703001" watchObservedRunningTime="2025-11-05 14:59:14.278197328 +0000 UTC m=+1.160770446" Nov 5 14:59:14.287816 kubelet[2690]: I1105 14:59:14.287738 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.287720455 podStartE2EDuration="1.287720455s" podCreationTimestamp="2025-11-05 14:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 14:59:14.287603755 +0000 UTC m=+1.170176953" watchObservedRunningTime="2025-11-05 14:59:14.287720455 +0000 UTC m=+1.170293613" Nov 5 14:59:14.296489 kubelet[2690]: I1105 14:59:14.296432 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.296417844 podStartE2EDuration="1.296417844s" podCreationTimestamp="2025-11-05 14:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-05 14:59:14.296381503 +0000 UTC m=+1.178954661" watchObservedRunningTime="2025-11-05 14:59:14.296417844 +0000 UTC m=+1.178991042" Nov 5 14:59:15.254232 kubelet[2690]: E1105 14:59:15.254006 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:15.254816 kubelet[2690]: E1105 14:59:15.254702 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:15.460030 kubelet[2690]: E1105 14:59:15.459995 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:17.533281 kubelet[2690]: E1105 14:59:17.533251 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:18.235655 kubelet[2690]: I1105 14:59:18.235626 2690 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 14:59:18.236032 containerd[1553]: time="2025-11-05T14:59:18.235911223Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 5 14:59:18.236379 kubelet[2690]: I1105 14:59:18.236355 2690 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 14:59:18.676980 kubelet[2690]: E1105 14:59:18.676939 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:19.213033 systemd[1]: Created slice kubepods-besteffort-pod4a970ea8_d9f7_4ac4_bb55_6fed24589096.slice - libcontainer container kubepods-besteffort-pod4a970ea8_d9f7_4ac4_bb55_6fed24589096.slice. Nov 5 14:59:19.240672 kubelet[2690]: I1105 14:59:19.240641 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a970ea8-d9f7-4ac4-bb55-6fed24589096-lib-modules\") pod \"kube-proxy-7s48c\" (UID: \"4a970ea8-d9f7-4ac4-bb55-6fed24589096\") " pod="kube-system/kube-proxy-7s48c" Nov 5 14:59:19.240672 kubelet[2690]: I1105 14:59:19.240675 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7tf6\" (UniqueName: \"kubernetes.io/projected/4a970ea8-d9f7-4ac4-bb55-6fed24589096-kube-api-access-l7tf6\") pod \"kube-proxy-7s48c\" (UID: \"4a970ea8-d9f7-4ac4-bb55-6fed24589096\") " pod="kube-system/kube-proxy-7s48c" Nov 5 14:59:19.240812 kubelet[2690]: I1105 14:59:19.240694 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a970ea8-d9f7-4ac4-bb55-6fed24589096-xtables-lock\") pod \"kube-proxy-7s48c\" (UID: \"4a970ea8-d9f7-4ac4-bb55-6fed24589096\") " pod="kube-system/kube-proxy-7s48c" Nov 5 14:59:19.240812 kubelet[2690]: I1105 14:59:19.240711 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/4a970ea8-d9f7-4ac4-bb55-6fed24589096-kube-proxy\") pod \"kube-proxy-7s48c\" (UID: \"4a970ea8-d9f7-4ac4-bb55-6fed24589096\") " pod="kube-system/kube-proxy-7s48c" Nov 5 14:59:19.260233 kubelet[2690]: E1105 14:59:19.259340 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:19.319572 systemd[1]: Created slice kubepods-besteffort-pod8aa79999_7b85_4131_904d_957a3311a589.slice - libcontainer container kubepods-besteffort-pod8aa79999_7b85_4131_904d_957a3311a589.slice. Nov 5 14:59:19.341450 kubelet[2690]: I1105 14:59:19.341415 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8aa79999-7b85-4131-904d-957a3311a589-var-lib-calico\") pod \"tigera-operator-7dcd859c48-mftxf\" (UID: \"8aa79999-7b85-4131-904d-957a3311a589\") " pod="tigera-operator/tigera-operator-7dcd859c48-mftxf" Nov 5 14:59:19.341619 kubelet[2690]: I1105 14:59:19.341594 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtbrw\" (UniqueName: \"kubernetes.io/projected/8aa79999-7b85-4131-904d-957a3311a589-kube-api-access-gtbrw\") pod \"tigera-operator-7dcd859c48-mftxf\" (UID: \"8aa79999-7b85-4131-904d-957a3311a589\") " pod="tigera-operator/tigera-operator-7dcd859c48-mftxf" Nov 5 14:59:19.529746 kubelet[2690]: E1105 14:59:19.529607 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:19.530442 containerd[1553]: time="2025-11-05T14:59:19.530387049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7s48c,Uid:4a970ea8-d9f7-4ac4-bb55-6fed24589096,Namespace:kube-system,Attempt:0,}" Nov 5 14:59:19.549520 containerd[1553]: 
time="2025-11-05T14:59:19.549481861Z" level=info msg="connecting to shim 5d14e4beeeef79af8dd9b6bf62a33d259483959e4d6f90456672d85f3e5209c7" address="unix:///run/containerd/s/b39db4a76d0635ba5a1c566a1f3523ad0cb338344ee5f2a42234787e34d2c326" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:19.575496 systemd[1]: Started cri-containerd-5d14e4beeeef79af8dd9b6bf62a33d259483959e4d6f90456672d85f3e5209c7.scope - libcontainer container 5d14e4beeeef79af8dd9b6bf62a33d259483959e4d6f90456672d85f3e5209c7. Nov 5 14:59:19.597757 containerd[1553]: time="2025-11-05T14:59:19.597698548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7s48c,Uid:4a970ea8-d9f7-4ac4-bb55-6fed24589096,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d14e4beeeef79af8dd9b6bf62a33d259483959e4d6f90456672d85f3e5209c7\"" Nov 5 14:59:19.598507 kubelet[2690]: E1105 14:59:19.598477 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:19.601782 containerd[1553]: time="2025-11-05T14:59:19.601733686Z" level=info msg="CreateContainer within sandbox \"5d14e4beeeef79af8dd9b6bf62a33d259483959e4d6f90456672d85f3e5209c7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 14:59:19.612061 containerd[1553]: time="2025-11-05T14:59:19.610985553Z" level=info msg="Container 6d7149eae6067823ced3f91e311b6b8a7fb16e89bbd1a87ae7adbd59095b18c4: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:19.620219 containerd[1553]: time="2025-11-05T14:59:19.620158181Z" level=info msg="CreateContainer within sandbox \"5d14e4beeeef79af8dd9b6bf62a33d259483959e4d6f90456672d85f3e5209c7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6d7149eae6067823ced3f91e311b6b8a7fb16e89bbd1a87ae7adbd59095b18c4\"" Nov 5 14:59:19.620751 containerd[1553]: time="2025-11-05T14:59:19.620712658Z" level=info msg="StartContainer for 
\"6d7149eae6067823ced3f91e311b6b8a7fb16e89bbd1a87ae7adbd59095b18c4\"" Nov 5 14:59:19.622030 containerd[1553]: time="2025-11-05T14:59:19.621993851Z" level=info msg="connecting to shim 6d7149eae6067823ced3f91e311b6b8a7fb16e89bbd1a87ae7adbd59095b18c4" address="unix:///run/containerd/s/b39db4a76d0635ba5a1c566a1f3523ad0cb338344ee5f2a42234787e34d2c326" protocol=ttrpc version=3 Nov 5 14:59:19.623224 containerd[1553]: time="2025-11-05T14:59:19.623100765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mftxf,Uid:8aa79999-7b85-4131-904d-957a3311a589,Namespace:tigera-operator,Attempt:0,}" Nov 5 14:59:19.643396 systemd[1]: Started cri-containerd-6d7149eae6067823ced3f91e311b6b8a7fb16e89bbd1a87ae7adbd59095b18c4.scope - libcontainer container 6d7149eae6067823ced3f91e311b6b8a7fb16e89bbd1a87ae7adbd59095b18c4. Nov 5 14:59:19.650950 containerd[1553]: time="2025-11-05T14:59:19.650895688Z" level=info msg="connecting to shim 883bdd9e690251b1b84e8f256a6088898e5a84a69025b4dd4bd27ffdfd2f2a32" address="unix:///run/containerd/s/94353794a6fe3ba876bda6f69edaed21ac40df659ae141d9c10976e09ab836d1" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:19.673410 systemd[1]: Started cri-containerd-883bdd9e690251b1b84e8f256a6088898e5a84a69025b4dd4bd27ffdfd2f2a32.scope - libcontainer container 883bdd9e690251b1b84e8f256a6088898e5a84a69025b4dd4bd27ffdfd2f2a32. 
Nov 5 14:59:19.687688 containerd[1553]: time="2025-11-05T14:59:19.687652040Z" level=info msg="StartContainer for \"6d7149eae6067823ced3f91e311b6b8a7fb16e89bbd1a87ae7adbd59095b18c4\" returns successfully" Nov 5 14:59:19.713162 containerd[1553]: time="2025-11-05T14:59:19.712433580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mftxf,Uid:8aa79999-7b85-4131-904d-957a3311a589,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"883bdd9e690251b1b84e8f256a6088898e5a84a69025b4dd4bd27ffdfd2f2a32\"" Nov 5 14:59:19.715562 containerd[1553]: time="2025-11-05T14:59:19.715521162Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 14:59:20.264790 kubelet[2690]: E1105 14:59:20.264764 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:20.265637 kubelet[2690]: E1105 14:59:20.265018 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:20.275382 kubelet[2690]: I1105 14:59:20.275326 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7s48c" podStartSLOduration=1.275310242 podStartE2EDuration="1.275310242s" podCreationTimestamp="2025-11-05 14:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 14:59:20.27378317 +0000 UTC m=+7.156356328" watchObservedRunningTime="2025-11-05 14:59:20.275310242 +0000 UTC m=+7.157883400" Nov 5 14:59:20.356811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3012203119.mount: Deactivated successfully. Nov 5 14:59:20.891232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3742272349.mount: Deactivated successfully. 
Nov 5 14:59:21.302340 containerd[1553]: time="2025-11-05T14:59:21.302288319Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:21.303747 containerd[1553]: time="2025-11-05T14:59:21.303725392Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 5 14:59:21.304995 containerd[1553]: time="2025-11-05T14:59:21.304961586Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:21.308019 containerd[1553]: time="2025-11-05T14:59:21.307775692Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:21.308463 containerd[1553]: time="2025-11-05T14:59:21.308427208Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.592868966s" Nov 5 14:59:21.308463 containerd[1553]: time="2025-11-05T14:59:21.308460848Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 5 14:59:21.311052 containerd[1553]: time="2025-11-05T14:59:21.311028275Z" level=info msg="CreateContainer within sandbox \"883bdd9e690251b1b84e8f256a6088898e5a84a69025b4dd4bd27ffdfd2f2a32\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 14:59:21.348944 containerd[1553]: time="2025-11-05T14:59:21.348252927Z" level=info msg="Container 
53913785ebdb52abefc42c3543cd02273f15b5189c350ba3d09cc6cbffb1aa14: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:21.353386 containerd[1553]: time="2025-11-05T14:59:21.353355581Z" level=info msg="CreateContainer within sandbox \"883bdd9e690251b1b84e8f256a6088898e5a84a69025b4dd4bd27ffdfd2f2a32\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"53913785ebdb52abefc42c3543cd02273f15b5189c350ba3d09cc6cbffb1aa14\"" Nov 5 14:59:21.354174 containerd[1553]: time="2025-11-05T14:59:21.354150497Z" level=info msg="StartContainer for \"53913785ebdb52abefc42c3543cd02273f15b5189c350ba3d09cc6cbffb1aa14\"" Nov 5 14:59:21.355127 containerd[1553]: time="2025-11-05T14:59:21.355068413Z" level=info msg="connecting to shim 53913785ebdb52abefc42c3543cd02273f15b5189c350ba3d09cc6cbffb1aa14" address="unix:///run/containerd/s/94353794a6fe3ba876bda6f69edaed21ac40df659ae141d9c10976e09ab836d1" protocol=ttrpc version=3 Nov 5 14:59:21.372396 systemd[1]: Started cri-containerd-53913785ebdb52abefc42c3543cd02273f15b5189c350ba3d09cc6cbffb1aa14.scope - libcontainer container 53913785ebdb52abefc42c3543cd02273f15b5189c350ba3d09cc6cbffb1aa14. 
Nov 5 14:59:21.400145 containerd[1553]: time="2025-11-05T14:59:21.400037305Z" level=info msg="StartContainer for \"53913785ebdb52abefc42c3543cd02273f15b5189c350ba3d09cc6cbffb1aa14\" returns successfully" Nov 5 14:59:25.467816 kubelet[2690]: E1105 14:59:25.467780 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:25.483746 kubelet[2690]: I1105 14:59:25.483684 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-mftxf" podStartSLOduration=4.88823143 podStartE2EDuration="6.483668463s" podCreationTimestamp="2025-11-05 14:59:19 +0000 UTC" firstStartedPulling="2025-11-05 14:59:19.714592447 +0000 UTC m=+6.597165605" lastFinishedPulling="2025-11-05 14:59:21.31002948 +0000 UTC m=+8.192602638" observedRunningTime="2025-11-05 14:59:22.289222449 +0000 UTC m=+9.171795647" watchObservedRunningTime="2025-11-05 14:59:25.483668463 +0000 UTC m=+12.366241621" Nov 5 14:59:26.922144 sudo[1765]: pam_unix(sudo:session): session closed for user root Nov 5 14:59:26.924638 sshd[1764]: Connection closed by 10.0.0.1 port 42124 Nov 5 14:59:26.926727 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Nov 5 14:59:26.931621 systemd-logind[1534]: Session 7 logged out. Waiting for processes to exit. Nov 5 14:59:26.931770 systemd[1]: sshd@6-10.0.0.21:22-10.0.0.1:42124.service: Deactivated successfully. Nov 5 14:59:26.935017 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 14:59:26.935302 systemd[1]: session-7.scope: Consumed 6.911s CPU time, 217.5M memory peak. Nov 5 14:59:26.937164 systemd-logind[1534]: Removed session 7. 
Nov 5 14:59:27.542930 kubelet[2690]: E1105 14:59:27.542891 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:28.290934 kubelet[2690]: E1105 14:59:28.290799 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:29.894356 update_engine[1539]: I20251105 14:59:29.894272 1539 update_attempter.cc:509] Updating boot flags... Nov 5 14:59:34.877578 systemd[1]: Created slice kubepods-besteffort-pod57170fa4_6b6e_4194_b5be_f33b992e6a2d.slice - libcontainer container kubepods-besteffort-pod57170fa4_6b6e_4194_b5be_f33b992e6a2d.slice. Nov 5 14:59:34.950897 kubelet[2690]: I1105 14:59:34.950790 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ql5q\" (UniqueName: \"kubernetes.io/projected/57170fa4-6b6e-4194-b5be-f33b992e6a2d-kube-api-access-9ql5q\") pod \"calico-typha-764759c6d8-lf79l\" (UID: \"57170fa4-6b6e-4194-b5be-f33b992e6a2d\") " pod="calico-system/calico-typha-764759c6d8-lf79l" Nov 5 14:59:34.950897 kubelet[2690]: I1105 14:59:34.950840 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57170fa4-6b6e-4194-b5be-f33b992e6a2d-tigera-ca-bundle\") pod \"calico-typha-764759c6d8-lf79l\" (UID: \"57170fa4-6b6e-4194-b5be-f33b992e6a2d\") " pod="calico-system/calico-typha-764759c6d8-lf79l" Nov 5 14:59:34.950897 kubelet[2690]: I1105 14:59:34.950858 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/57170fa4-6b6e-4194-b5be-f33b992e6a2d-typha-certs\") pod \"calico-typha-764759c6d8-lf79l\" (UID: \"57170fa4-6b6e-4194-b5be-f33b992e6a2d\") " 
pod="calico-system/calico-typha-764759c6d8-lf79l" Nov 5 14:59:35.185228 kubelet[2690]: E1105 14:59:35.185096 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:35.189248 containerd[1553]: time="2025-11-05T14:59:35.188961664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-764759c6d8-lf79l,Uid:57170fa4-6b6e-4194-b5be-f33b992e6a2d,Namespace:calico-system,Attempt:0,}" Nov 5 14:59:35.215017 systemd[1]: Created slice kubepods-besteffort-pod6697d135_ec5e_4e4a_bc53_7b7315379933.slice - libcontainer container kubepods-besteffort-pod6697d135_ec5e_4e4a_bc53_7b7315379933.slice. Nov 5 14:59:35.236234 containerd[1553]: time="2025-11-05T14:59:35.235595669Z" level=info msg="connecting to shim 896cf03b8f6ea109e2b7921e3b6dedeb9a2cd017f52eff170bbc04634d873917" address="unix:///run/containerd/s/8bb9a716a25e70408b6261dbc3a49523662a0136f16f83a3fc7908bb965552e4" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:35.252504 kubelet[2690]: I1105 14:59:35.252460 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6697d135-ec5e-4e4a-bc53-7b7315379933-cni-bin-dir\") pod \"calico-node-w8x67\" (UID: \"6697d135-ec5e-4e4a-bc53-7b7315379933\") " pod="calico-system/calico-node-w8x67" Nov 5 14:59:35.252504 kubelet[2690]: I1105 14:59:35.252502 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6697d135-ec5e-4e4a-bc53-7b7315379933-var-lib-calico\") pod \"calico-node-w8x67\" (UID: \"6697d135-ec5e-4e4a-bc53-7b7315379933\") " pod="calico-system/calico-node-w8x67" Nov 5 14:59:35.252635 kubelet[2690]: I1105 14:59:35.252521 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" 
(UniqueName: \"kubernetes.io/host-path/6697d135-ec5e-4e4a-bc53-7b7315379933-cni-log-dir\") pod \"calico-node-w8x67\" (UID: \"6697d135-ec5e-4e4a-bc53-7b7315379933\") " pod="calico-system/calico-node-w8x67" Nov 5 14:59:35.252635 kubelet[2690]: I1105 14:59:35.252550 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6697d135-ec5e-4e4a-bc53-7b7315379933-flexvol-driver-host\") pod \"calico-node-w8x67\" (UID: \"6697d135-ec5e-4e4a-bc53-7b7315379933\") " pod="calico-system/calico-node-w8x67" Nov 5 14:59:35.252635 kubelet[2690]: I1105 14:59:35.252580 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6697d135-ec5e-4e4a-bc53-7b7315379933-node-certs\") pod \"calico-node-w8x67\" (UID: \"6697d135-ec5e-4e4a-bc53-7b7315379933\") " pod="calico-system/calico-node-w8x67" Nov 5 14:59:35.252635 kubelet[2690]: I1105 14:59:35.252616 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6697d135-ec5e-4e4a-bc53-7b7315379933-policysync\") pod \"calico-node-w8x67\" (UID: \"6697d135-ec5e-4e4a-bc53-7b7315379933\") " pod="calico-system/calico-node-w8x67" Nov 5 14:59:35.252722 kubelet[2690]: I1105 14:59:35.252642 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmcxb\" (UniqueName: \"kubernetes.io/projected/6697d135-ec5e-4e4a-bc53-7b7315379933-kube-api-access-jmcxb\") pod \"calico-node-w8x67\" (UID: \"6697d135-ec5e-4e4a-bc53-7b7315379933\") " pod="calico-system/calico-node-w8x67" Nov 5 14:59:35.252722 kubelet[2690]: I1105 14:59:35.252686 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/6697d135-ec5e-4e4a-bc53-7b7315379933-lib-modules\") pod \"calico-node-w8x67\" (UID: \"6697d135-ec5e-4e4a-bc53-7b7315379933\") " pod="calico-system/calico-node-w8x67" Nov 5 14:59:35.252764 kubelet[2690]: I1105 14:59:35.252728 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6697d135-ec5e-4e4a-bc53-7b7315379933-var-run-calico\") pod \"calico-node-w8x67\" (UID: \"6697d135-ec5e-4e4a-bc53-7b7315379933\") " pod="calico-system/calico-node-w8x67" Nov 5 14:59:35.252807 kubelet[2690]: I1105 14:59:35.252776 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6697d135-ec5e-4e4a-bc53-7b7315379933-tigera-ca-bundle\") pod \"calico-node-w8x67\" (UID: \"6697d135-ec5e-4e4a-bc53-7b7315379933\") " pod="calico-system/calico-node-w8x67" Nov 5 14:59:35.252834 kubelet[2690]: I1105 14:59:35.252813 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6697d135-ec5e-4e4a-bc53-7b7315379933-cni-net-dir\") pod \"calico-node-w8x67\" (UID: \"6697d135-ec5e-4e4a-bc53-7b7315379933\") " pod="calico-system/calico-node-w8x67" Nov 5 14:59:35.252834 kubelet[2690]: I1105 14:59:35.252830 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6697d135-ec5e-4e4a-bc53-7b7315379933-xtables-lock\") pod \"calico-node-w8x67\" (UID: \"6697d135-ec5e-4e4a-bc53-7b7315379933\") " pod="calico-system/calico-node-w8x67" Nov 5 14:59:35.285398 systemd[1]: Started cri-containerd-896cf03b8f6ea109e2b7921e3b6dedeb9a2cd017f52eff170bbc04634d873917.scope - libcontainer container 896cf03b8f6ea109e2b7921e3b6dedeb9a2cd017f52eff170bbc04634d873917. 
Nov 5 14:59:35.315649 containerd[1553]: time="2025-11-05T14:59:35.315583192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-764759c6d8-lf79l,Uid:57170fa4-6b6e-4194-b5be-f33b992e6a2d,Namespace:calico-system,Attempt:0,} returns sandbox id \"896cf03b8f6ea109e2b7921e3b6dedeb9a2cd017f52eff170bbc04634d873917\"" Nov 5 14:59:35.319275 kubelet[2690]: E1105 14:59:35.318890 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:35.325431 containerd[1553]: time="2025-11-05T14:59:35.325324847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 14:59:35.358560 kubelet[2690]: E1105 14:59:35.358522 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.358560 kubelet[2690]: W1105 14:59:35.358544 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.358683 kubelet[2690]: E1105 14:59:35.358581 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.367868 kubelet[2690]: E1105 14:59:35.367843 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.367868 kubelet[2690]: W1105 14:59:35.367864 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.367993 kubelet[2690]: E1105 14:59:35.367882 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:35.398190 kubelet[2690]: E1105 14:59:35.397659 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kr5x" podUID="1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98" Nov 5 14:59:35.442773 kubelet[2690]: E1105 14:59:35.442677 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.443008 kubelet[2690]: W1105 14:59:35.442882 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.443008 kubelet[2690]: E1105 14:59:35.442912 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.456411 kubelet[2690]: I1105 14:59:35.456393 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98-registration-dir\") pod \"csi-node-driver-9kr5x\" (UID: \"1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98\") " pod="calico-system/csi-node-driver-9kr5x" Nov 5 14:59:35.456638 kubelet[2690]: E1105 14:59:35.456617 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.456638 kubelet[2690]: W1105 14:59:35.456635 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.456696 kubelet[2690]: E1105 14:59:35.456653 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:35.456814 kubelet[2690]: E1105 14:59:35.456802 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.456814 kubelet[2690]: W1105 14:59:35.456814 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.456866 kubelet[2690]: E1105 14:59:35.456827 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.456986 kubelet[2690]: E1105 14:59:35.456973 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.456986 kubelet[2690]: W1105 14:59:35.456985 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.457052 kubelet[2690]: E1105 14:59:35.456994 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:35.457052 kubelet[2690]: I1105 14:59:35.457018 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98-varrun\") pod \"csi-node-driver-9kr5x\" (UID: \"1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98\") " pod="calico-system/csi-node-driver-9kr5x" Nov 5 14:59:35.457191 kubelet[2690]: E1105 14:59:35.457178 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.457238 kubelet[2690]: W1105 14:59:35.457191 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.457238 kubelet[2690]: E1105 14:59:35.457213 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.457238 kubelet[2690]: I1105 14:59:35.457228 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2mgw\" (UniqueName: \"kubernetes.io/projected/1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98-kube-api-access-m2mgw\") pod \"csi-node-driver-9kr5x\" (UID: \"1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98\") " pod="calico-system/csi-node-driver-9kr5x" Nov 5 14:59:35.457409 kubelet[2690]: E1105 14:59:35.457362 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.457409 kubelet[2690]: W1105 14:59:35.457409 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.457467 kubelet[2690]: E1105 14:59:35.457423 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.457467 kubelet[2690]: I1105 14:59:35.457439 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98-kubelet-dir\") pod \"csi-node-driver-9kr5x\" (UID: \"1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98\") " pod="calico-system/csi-node-driver-9kr5x" Nov 5 14:59:35.457621 kubelet[2690]: E1105 14:59:35.457609 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.457652 kubelet[2690]: W1105 14:59:35.457622 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.457652 kubelet[2690]: E1105 14:59:35.457635 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.457652 kubelet[2690]: I1105 14:59:35.457649 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98-socket-dir\") pod \"csi-node-driver-9kr5x\" (UID: \"1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98\") " pod="calico-system/csi-node-driver-9kr5x" Nov 5 14:59:35.457834 kubelet[2690]: E1105 14:59:35.457822 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.457870 kubelet[2690]: W1105 14:59:35.457834 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.457870 kubelet[2690]: E1105 14:59:35.457853 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:35.458010 kubelet[2690]: E1105 14:59:35.457999 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.458010 kubelet[2690]: W1105 14:59:35.458010 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.458084 kubelet[2690]: E1105 14:59:35.458022 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.522563 kubelet[2690]: E1105 14:59:35.522530 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:35.523335 containerd[1553]: time="2025-11-05T14:59:35.523274919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w8x67,Uid:6697d135-ec5e-4e4a-bc53-7b7315379933,Namespace:calico-system,Attempt:0,}" Nov 5 14:59:35.539756 containerd[1553]: time="2025-11-05T14:59:35.539672599Z" level=info msg="connecting to shim c30ce7aa83bf91e9fd152889bdfeb3484c02e00f4d7eef8b4a91b654bdfd28d2" address="unix:///run/containerd/s/dd2c4f12e337a84bf5b77871cf261ca316a9a609183d943b86f442ddf257edf7" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:35.558961 kubelet[2690]: E1105 14:59:35.558931 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.558961 kubelet[2690]: W1105 14:59:35.558959 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.559197 kubelet[2690]: E1105 14:59:35.558979 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.559197 kubelet[2690]: E1105 14:59:35.559158 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.559197 kubelet[2690]: W1105 14:59:35.559167 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.559197 kubelet[2690]: E1105 14:59:35.559183 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:35.559397 kubelet[2690]: E1105 14:59:35.559334 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.559397 kubelet[2690]: W1105 14:59:35.559353 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.559397 kubelet[2690]: E1105 14:59:35.559394 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.559699 kubelet[2690]: E1105 14:59:35.559686 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.559699 kubelet[2690]: W1105 14:59:35.559699 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.559746 kubelet[2690]: E1105 14:59:35.559722 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:35.560587 kubelet[2690]: E1105 14:59:35.560571 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.560587 kubelet[2690]: W1105 14:59:35.560588 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.560665 kubelet[2690]: E1105 14:59:35.560625 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.560877 kubelet[2690]: E1105 14:59:35.560863 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.560917 kubelet[2690]: W1105 14:59:35.560878 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.560943 kubelet[2690]: E1105 14:59:35.560907 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:35.561923 kubelet[2690]: E1105 14:59:35.561898 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.561923 kubelet[2690]: W1105 14:59:35.561918 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.562091 kubelet[2690]: E1105 14:59:35.561944 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.562341 kubelet[2690]: E1105 14:59:35.562325 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.562397 kubelet[2690]: W1105 14:59:35.562340 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.562397 kubelet[2690]: E1105 14:59:35.562365 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:35.562530 kubelet[2690]: E1105 14:59:35.562516 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.562530 kubelet[2690]: W1105 14:59:35.562530 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.562595 kubelet[2690]: E1105 14:59:35.562552 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.562682 kubelet[2690]: E1105 14:59:35.562669 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.562682 kubelet[2690]: W1105 14:59:35.562681 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.562816 kubelet[2690]: E1105 14:59:35.562731 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:35.562816 kubelet[2690]: E1105 14:59:35.562813 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.562908 kubelet[2690]: W1105 14:59:35.562822 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.562908 kubelet[2690]: E1105 14:59:35.562856 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.563000 kubelet[2690]: E1105 14:59:35.562985 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.563000 kubelet[2690]: W1105 14:59:35.562996 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.563050 kubelet[2690]: E1105 14:59:35.563007 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:35.563172 kubelet[2690]: E1105 14:59:35.563156 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.563172 kubelet[2690]: W1105 14:59:35.563170 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.563256 kubelet[2690]: E1105 14:59:35.563186 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.563485 kubelet[2690]: E1105 14:59:35.563467 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.563485 kubelet[2690]: W1105 14:59:35.563482 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.563550 kubelet[2690]: E1105 14:59:35.563501 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:35.563679 kubelet[2690]: E1105 14:59:35.563665 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.563679 kubelet[2690]: W1105 14:59:35.563678 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.563747 kubelet[2690]: E1105 14:59:35.563693 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.563851 kubelet[2690]: E1105 14:59:35.563836 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.563851 kubelet[2690]: W1105 14:59:35.563849 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.563927 kubelet[2690]: E1105 14:59:35.563895 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:35.564149 kubelet[2690]: E1105 14:59:35.564129 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.564149 kubelet[2690]: W1105 14:59:35.564148 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.564302 kubelet[2690]: E1105 14:59:35.564195 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.564302 kubelet[2690]: E1105 14:59:35.564300 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.564350 kubelet[2690]: W1105 14:59:35.564309 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.564453 kubelet[2690]: E1105 14:59:35.564395 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:35.564662 kubelet[2690]: E1105 14:59:35.564646 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.564662 kubelet[2690]: W1105 14:59:35.564659 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.564732 kubelet[2690]: E1105 14:59:35.564693 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.564926 kubelet[2690]: E1105 14:59:35.564889 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.564926 kubelet[2690]: W1105 14:59:35.564902 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.564926 kubelet[2690]: E1105 14:59:35.564917 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:35.565275 kubelet[2690]: E1105 14:59:35.565127 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.565275 kubelet[2690]: W1105 14:59:35.565144 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.565275 kubelet[2690]: E1105 14:59:35.565162 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.565623 kubelet[2690]: E1105 14:59:35.565606 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.565623 kubelet[2690]: W1105 14:59:35.565622 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.565683 kubelet[2690]: E1105 14:59:35.565642 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:35.565847 kubelet[2690]: E1105 14:59:35.565833 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.565847 kubelet[2690]: W1105 14:59:35.565846 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.565903 kubelet[2690]: E1105 14:59:35.565860 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:35.566311 kubelet[2690]: E1105 14:59:35.566293 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.566349 kubelet[2690]: W1105 14:59:35.566314 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.566349 kubelet[2690]: E1105 14:59:35.566334 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:35.566582 kubelet[2690]: E1105 14:59:35.566568 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:35.566614 kubelet[2690]: W1105 14:59:35.566584 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:35.566614 kubelet[2690]: E1105 14:59:35.566596 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:35.568695 systemd[1]: Started cri-containerd-c30ce7aa83bf91e9fd152889bdfeb3484c02e00f4d7eef8b4a91b654bdfd28d2.scope - libcontainer container c30ce7aa83bf91e9fd152889bdfeb3484c02e00f4d7eef8b4a91b654bdfd28d2. 
Nov 5 14:59:35.601350 containerd[1553]: time="2025-11-05T14:59:35.601293807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w8x67,Uid:6697d135-ec5e-4e4a-bc53-7b7315379933,Namespace:calico-system,Attempt:0,} returns sandbox id \"c30ce7aa83bf91e9fd152889bdfeb3484c02e00f4d7eef8b4a91b654bdfd28d2\""
Nov 5 14:59:35.602506 kubelet[2690]: E1105 14:59:35.602377 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:59:36.291037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount562530770.mount: Deactivated successfully.
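The recurring "Nameserver limits exceeded" warnings come from kubelet's resolv.conf handling: it applies at most three nameservers to a pod (the classic glibc resolver limit) and drops any extras from the host configuration, which is why only 1.1.1.1, 1.0.0.1 and 8.8.8.8 survive here. A sketch of that truncation (constant and function names are illustrative, not kubelet's own):

```python
MAX_NAMESERVERS = 3  # resolver limit kubelet enforces when building a pod's resolv.conf

def applied_nameservers(host_nameservers: list) -> list:
    """Return the nameserver line kubelet would actually apply, in order."""
    return host_nameservers[:MAX_NAMESERVERS]

# A host resolv.conf with four servers is trimmed to the first three,
# mirroring the warning in the log above.
print(applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]))
```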
Nov 5 14:59:36.609786 containerd[1553]: time="2025-11-05T14:59:36.609674106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:59:36.610444 containerd[1553]: time="2025-11-05T14:59:36.610418264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Nov 5 14:59:36.611296 containerd[1553]: time="2025-11-05T14:59:36.611267342Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:59:36.613484 containerd[1553]: time="2025-11-05T14:59:36.613456977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:59:36.613965 containerd[1553]: time="2025-11-05T14:59:36.613941856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.288572409s"
Nov 5 14:59:36.614010 containerd[1553]: time="2025-11-05T14:59:36.613973136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Nov 5 14:59:36.615874 containerd[1553]: time="2025-11-05T14:59:36.615846811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 5 14:59:36.634955 containerd[1553]: time="2025-11-05T14:59:36.634915686Z" level=info msg="CreateContainer within sandbox \"896cf03b8f6ea109e2b7921e3b6dedeb9a2cd017f52eff170bbc04634d873917\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 5 14:59:36.640148 containerd[1553]: time="2025-11-05T14:59:36.640117914Z" level=info msg="Container 07b69c0475dbcdfb66b46f075f26e4f6c14caf85105403021a6401f6f4bacf85: CDI devices from CRI Config.CDIDevices: []"
Nov 5 14:59:36.646019 containerd[1553]: time="2025-11-05T14:59:36.645957700Z" level=info msg="CreateContainer within sandbox \"896cf03b8f6ea109e2b7921e3b6dedeb9a2cd017f52eff170bbc04634d873917\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"07b69c0475dbcdfb66b46f075f26e4f6c14caf85105403021a6401f6f4bacf85\""
Nov 5 14:59:36.646473 containerd[1553]: time="2025-11-05T14:59:36.646442579Z" level=info msg="StartContainer for \"07b69c0475dbcdfb66b46f075f26e4f6c14caf85105403021a6401f6f4bacf85\""
Nov 5 14:59:36.647743 containerd[1553]: time="2025-11-05T14:59:36.647719096Z" level=info msg="connecting to shim 07b69c0475dbcdfb66b46f075f26e4f6c14caf85105403021a6401f6f4bacf85" address="unix:///run/containerd/s/8bb9a716a25e70408b6261dbc3a49523662a0136f16f83a3fc7908bb965552e4" protocol=ttrpc version=3
Nov 5 14:59:36.672415 systemd[1]: Started cri-containerd-07b69c0475dbcdfb66b46f075f26e4f6c14caf85105403021a6401f6f4bacf85.scope - libcontainer container 07b69c0475dbcdfb66b46f075f26e4f6c14caf85105403021a6401f6f4bacf85.
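kubelet's pod_startup_latency_tracker entry logged just below for calico-typha reports two figures that can be reproduced from its own timestamps: podStartE2EDuration is observed-running minus pod creation, and podStartSLOduration additionally excludes the time spent pulling images. Worked through with the monotonic m=+ offsets from that entry:

```python
# m=+ offsets (seconds since kubelet start) copied from the log entry
first_started_pulling = 22.207510046
last_finished_pulling = 23.498237250

e2e = 3.347548883  # podStartE2EDuration: observedRunningTime - podCreationTimestamp

# SLO duration = E2E duration minus time spent pulling images
image_pull = last_finished_pulling - first_started_pulling
slo = e2e - image_pull

# Reproduces podStartSLOduration=2.056821679 as reported in the log.
assert abs(slo - 2.056821679) < 1e-9
```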
Nov 5 14:59:36.710544 containerd[1553]: time="2025-11-05T14:59:36.710502588Z" level=info msg="StartContainer for \"07b69c0475dbcdfb66b46f075f26e4f6c14caf85105403021a6401f6f4bacf85\" returns successfully" Nov 5 14:59:37.243994 kubelet[2690]: E1105 14:59:37.243947 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kr5x" podUID="1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98" Nov 5 14:59:37.327240 kubelet[2690]: E1105 14:59:37.327140 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:37.347658 kubelet[2690]: I1105 14:59:37.347580 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-764759c6d8-lf79l" podStartSLOduration=2.056821679 podStartE2EDuration="3.347548883s" podCreationTimestamp="2025-11-05 14:59:34 +0000 UTC" firstStartedPulling="2025-11-05 14:59:35.324936888 +0000 UTC m=+22.207510046" lastFinishedPulling="2025-11-05 14:59:36.615664092 +0000 UTC m=+23.498237250" observedRunningTime="2025-11-05 14:59:37.347373883 +0000 UTC m=+24.229947041" watchObservedRunningTime="2025-11-05 14:59:37.347548883 +0000 UTC m=+24.230122041" Nov 5 14:59:37.367979 kubelet[2690]: E1105 14:59:37.367930 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:37.367979 kubelet[2690]: W1105 14:59:37.367968 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:37.368221 kubelet[2690]: E1105 14:59:37.367990 2690 plugins.go:695] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:37.368797 kubelet[2690]: E1105 14:59:37.368782 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:37.368797 kubelet[2690]: W1105 14:59:37.368797 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:37.368864 kubelet[2690]: E1105 14:59:37.368810 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:37.368987 kubelet[2690]: E1105 14:59:37.368975 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:37.369015 kubelet[2690]: W1105 14:59:37.368987 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:37.369015 kubelet[2690]: E1105 14:59:37.368998 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:37.369176 kubelet[2690]: E1105 14:59:37.369151 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:37.369213 kubelet[2690]: W1105 14:59:37.369176 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:37.369213 kubelet[2690]: E1105 14:59:37.369187 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:37.369562 kubelet[2690]: E1105 14:59:37.369543 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:37.369562 kubelet[2690]: W1105 14:59:37.369556 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:37.369655 kubelet[2690]: E1105 14:59:37.369567 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:37.369719 kubelet[2690]: E1105 14:59:37.369705 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:37.369719 kubelet[2690]: W1105 14:59:37.369717 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:37.369780 kubelet[2690]: E1105 14:59:37.369726 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:37.369851 kubelet[2690]: E1105 14:59:37.369837 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:37.369851 kubelet[2690]: W1105 14:59:37.369846 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:37.369899 kubelet[2690]: E1105 14:59:37.369854 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:37.370074 kubelet[2690]: E1105 14:59:37.370060 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:37.370105 kubelet[2690]: W1105 14:59:37.370073 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:37.370105 kubelet[2690]: E1105 14:59:37.370083 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:59:37.370247 kubelet[2690]: E1105 14:59:37.370234 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:59:37.370284 kubelet[2690]: W1105 14:59:37.370247 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:59:37.370284 kubelet[2690]: E1105 14:59:37.370255 2690 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:59:37.719372 containerd[1553]: time="2025-11-05T14:59:37.719328565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:37.720160 containerd[1553]: time="2025-11-05T14:59:37.720118283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 5 14:59:37.720997 containerd[1553]: time="2025-11-05T14:59:37.720932801Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:37.723723 containerd[1553]: time="2025-11-05T14:59:37.723632475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:37.723812 containerd[1553]: time="2025-11-05T14:59:37.723730315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.107851704s" Nov 5 14:59:37.723812 containerd[1553]: time="2025-11-05T14:59:37.723760675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 5 14:59:37.727874 containerd[1553]: time="2025-11-05T14:59:37.727828666Z" level=info msg="CreateContainer within sandbox \"c30ce7aa83bf91e9fd152889bdfeb3484c02e00f4d7eef8b4a91b654bdfd28d2\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 14:59:37.735242 containerd[1553]: time="2025-11-05T14:59:37.734816050Z" level=info msg="Container fae3c629a0bfa08b440651cf357e14878e39531d129f015f4dcb60af204fb8af: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:37.743471 containerd[1553]: time="2025-11-05T14:59:37.743416991Z" level=info msg="CreateContainer within sandbox \"c30ce7aa83bf91e9fd152889bdfeb3484c02e00f4d7eef8b4a91b654bdfd28d2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fae3c629a0bfa08b440651cf357e14878e39531d129f015f4dcb60af204fb8af\"" Nov 5 14:59:37.744283 containerd[1553]: time="2025-11-05T14:59:37.744197429Z" level=info msg="StartContainer for \"fae3c629a0bfa08b440651cf357e14878e39531d129f015f4dcb60af204fb8af\"" Nov 5 14:59:37.746092 containerd[1553]: time="2025-11-05T14:59:37.746032465Z" level=info msg="connecting to shim fae3c629a0bfa08b440651cf357e14878e39531d129f015f4dcb60af204fb8af" address="unix:///run/containerd/s/dd2c4f12e337a84bf5b77871cf261ca316a9a609183d943b86f442ddf257edf7" protocol=ttrpc version=3 Nov 5 14:59:37.769426 systemd[1]: Started cri-containerd-fae3c629a0bfa08b440651cf357e14878e39531d129f015f4dcb60af204fb8af.scope - libcontainer container fae3c629a0bfa08b440651cf357e14878e39531d129f015f4dcb60af204fb8af. Nov 5 14:59:37.808459 containerd[1553]: time="2025-11-05T14:59:37.808402124Z" level=info msg="StartContainer for \"fae3c629a0bfa08b440651cf357e14878e39531d129f015f4dcb60af204fb8af\" returns successfully" Nov 5 14:59:37.820299 systemd[1]: cri-containerd-fae3c629a0bfa08b440651cf357e14878e39531d129f015f4dcb60af204fb8af.scope: Deactivated successfully. 
Nov 5 14:59:37.850432 containerd[1553]: time="2025-11-05T14:59:37.850375270Z" level=info msg="received exit event container_id:\"fae3c629a0bfa08b440651cf357e14878e39531d129f015f4dcb60af204fb8af\" id:\"fae3c629a0bfa08b440651cf357e14878e39531d129f015f4dcb60af204fb8af\" pid:3391 exited_at:{seconds:1762354777 nanos:842669487}" Nov 5 14:59:37.850708 containerd[1553]: time="2025-11-05T14:59:37.850504149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fae3c629a0bfa08b440651cf357e14878e39531d129f015f4dcb60af204fb8af\" id:\"fae3c629a0bfa08b440651cf357e14878e39531d129f015f4dcb60af204fb8af\" pid:3391 exited_at:{seconds:1762354777 nanos:842669487}" Nov 5 14:59:37.882250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fae3c629a0bfa08b440651cf357e14878e39531d129f015f4dcb60af204fb8af-rootfs.mount: Deactivated successfully. Nov 5 14:59:38.332486 kubelet[2690]: E1105 14:59:38.332444 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:38.333672 containerd[1553]: time="2025-11-05T14:59:38.333511653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 14:59:38.335900 kubelet[2690]: I1105 14:59:38.335457 2690 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 14:59:38.335900 kubelet[2690]: E1105 14:59:38.335754 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:39.240333 kubelet[2690]: E1105 14:59:39.238317 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kr5x" podUID="1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98" Nov 5 
14:59:41.033078 containerd[1553]: time="2025-11-05T14:59:41.033017866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:41.033510 containerd[1553]: time="2025-11-05T14:59:41.033487985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 5 14:59:41.034353 containerd[1553]: time="2025-11-05T14:59:41.034315184Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:41.036773 containerd[1553]: time="2025-11-05T14:59:41.036735019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:41.037102 containerd[1553]: time="2025-11-05T14:59:41.037061538Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.703514925s" Nov 5 14:59:41.037137 containerd[1553]: time="2025-11-05T14:59:41.037098538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 5 14:59:41.044247 containerd[1553]: time="2025-11-05T14:59:41.043856366Z" level=info msg="CreateContainer within sandbox \"c30ce7aa83bf91e9fd152889bdfeb3484c02e00f4d7eef8b4a91b654bdfd28d2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 14:59:41.053453 containerd[1553]: time="2025-11-05T14:59:41.053398027Z" level=info msg="Container 
a192d91ec14eb2f44088996c45f7f2f0961525ea9c32e340bf042ac511b32a33: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:41.074727 containerd[1553]: time="2025-11-05T14:59:41.074667907Z" level=info msg="CreateContainer within sandbox \"c30ce7aa83bf91e9fd152889bdfeb3484c02e00f4d7eef8b4a91b654bdfd28d2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a192d91ec14eb2f44088996c45f7f2f0961525ea9c32e340bf042ac511b32a33\"" Nov 5 14:59:41.075360 containerd[1553]: time="2025-11-05T14:59:41.075341066Z" level=info msg="StartContainer for \"a192d91ec14eb2f44088996c45f7f2f0961525ea9c32e340bf042ac511b32a33\"" Nov 5 14:59:41.077315 containerd[1553]: time="2025-11-05T14:59:41.077265182Z" level=info msg="connecting to shim a192d91ec14eb2f44088996c45f7f2f0961525ea9c32e340bf042ac511b32a33" address="unix:///run/containerd/s/dd2c4f12e337a84bf5b77871cf261ca316a9a609183d943b86f442ddf257edf7" protocol=ttrpc version=3 Nov 5 14:59:41.101388 systemd[1]: Started cri-containerd-a192d91ec14eb2f44088996c45f7f2f0961525ea9c32e340bf042ac511b32a33.scope - libcontainer container a192d91ec14eb2f44088996c45f7f2f0961525ea9c32e340bf042ac511b32a33. 
Nov 5 14:59:41.132627 containerd[1553]: time="2025-11-05T14:59:41.132591677Z" level=info msg="StartContainer for \"a192d91ec14eb2f44088996c45f7f2f0961525ea9c32e340bf042ac511b32a33\" returns successfully" Nov 5 14:59:41.237983 kubelet[2690]: E1105 14:59:41.237912 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kr5x" podUID="1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98" Nov 5 14:59:41.372013 kubelet[2690]: E1105 14:59:41.370950 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:41.794706 systemd[1]: cri-containerd-a192d91ec14eb2f44088996c45f7f2f0961525ea9c32e340bf042ac511b32a33.scope: Deactivated successfully. Nov 5 14:59:41.794984 systemd[1]: cri-containerd-a192d91ec14eb2f44088996c45f7f2f0961525ea9c32e340bf042ac511b32a33.scope: Consumed 468ms CPU time, 177.3M memory peak, 2.9M read from disk, 165.9M written to disk. 
Nov 5 14:59:41.797522 containerd[1553]: time="2025-11-05T14:59:41.797423413Z" level=info msg="received exit event container_id:\"a192d91ec14eb2f44088996c45f7f2f0961525ea9c32e340bf042ac511b32a33\" id:\"a192d91ec14eb2f44088996c45f7f2f0961525ea9c32e340bf042ac511b32a33\" pid:3451 exited_at:{seconds:1762354781 nanos:797169414}" Nov 5 14:59:41.797719 containerd[1553]: time="2025-11-05T14:59:41.797489933Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a192d91ec14eb2f44088996c45f7f2f0961525ea9c32e340bf042ac511b32a33\" id:\"a192d91ec14eb2f44088996c45f7f2f0961525ea9c32e340bf042ac511b32a33\" pid:3451 exited_at:{seconds:1762354781 nanos:797169414}" Nov 5 14:59:41.816946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a192d91ec14eb2f44088996c45f7f2f0961525ea9c32e340bf042ac511b32a33-rootfs.mount: Deactivated successfully. Nov 5 14:59:41.872996 kubelet[2690]: I1105 14:59:41.872945 2690 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 14:59:41.946629 systemd[1]: Created slice kubepods-besteffort-podbac974d5_2052_4432_9839_70f531dc6657.slice - libcontainer container kubepods-besteffort-podbac974d5_2052_4432_9839_70f531dc6657.slice. Nov 5 14:59:41.957856 kubelet[2690]: W1105 14:59:41.955953 2690 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Nov 5 14:59:41.957376 systemd[1]: Created slice kubepods-besteffort-pod947e4c3f_edaa_4455_9701_8eca3788c1c9.slice - libcontainer container kubepods-besteffort-pod947e4c3f_edaa_4455_9701_8eca3788c1c9.slice. 
Nov 5 14:59:41.958267 kubelet[2690]: E1105 14:59:41.958229 2690 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Nov 5 14:59:41.965131 systemd[1]: Created slice kubepods-burstable-pod8848db76_24a9_4a44_87aa_bfca385d1093.slice - libcontainer container kubepods-burstable-pod8848db76_24a9_4a44_87aa_bfca385d1093.slice. Nov 5 14:59:41.973161 systemd[1]: Created slice kubepods-besteffort-podbcf60f8c_179e_4bff_8ac8_93bb2db7eacf.slice - libcontainer container kubepods-besteffort-podbcf60f8c_179e_4bff_8ac8_93bb2db7eacf.slice. Nov 5 14:59:41.981257 systemd[1]: Created slice kubepods-burstable-pod0e669a08_2258_4b09_8c26_2c12b7651335.slice - libcontainer container kubepods-burstable-pod0e669a08_2258_4b09_8c26_2c12b7651335.slice. Nov 5 14:59:41.988939 systemd[1]: Created slice kubepods-besteffort-pod870c178e_41c4_4ab0_8a1e_1bcbcc89ae10.slice - libcontainer container kubepods-besteffort-pod870c178e_41c4_4ab0_8a1e_1bcbcc89ae10.slice. Nov 5 14:59:41.997554 systemd[1]: Created slice kubepods-besteffort-pod9eac046d_fe02_4a83_bfff_937118044d14.slice - libcontainer container kubepods-besteffort-pod9eac046d_fe02_4a83_bfff_937118044d14.slice. Nov 5 14:59:42.004105 systemd[1]: Created slice kubepods-besteffort-pod75461a76_a686_4ba2_aacc_266a6fc4971c.slice - libcontainer container kubepods-besteffort-pod75461a76_a686_4ba2_aacc_266a6fc4971c.slice. 
Nov 5 14:59:42.010472 kubelet[2690]: I1105 14:59:42.010426 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvjgv\" (UniqueName: \"kubernetes.io/projected/870c178e-41c4-4ab0-8a1e-1bcbcc89ae10-kube-api-access-mvjgv\") pod \"calico-apiserver-59c8c4d79f-vmsfn\" (UID: \"870c178e-41c4-4ab0-8a1e-1bcbcc89ae10\") " pod="calico-apiserver/calico-apiserver-59c8c4d79f-vmsfn" Nov 5 14:59:42.011771 kubelet[2690]: I1105 14:59:42.010788 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndxtp\" (UniqueName: \"kubernetes.io/projected/8848db76-24a9-4a44-87aa-bfca385d1093-kube-api-access-ndxtp\") pod \"coredns-668d6bf9bc-p5z9n\" (UID: \"8848db76-24a9-4a44-87aa-bfca385d1093\") " pod="kube-system/coredns-668d6bf9bc-p5z9n" Nov 5 14:59:42.011771 kubelet[2690]: I1105 14:59:42.010845 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/947e4c3f-edaa-4455-9701-8eca3788c1c9-calico-apiserver-certs\") pod \"calico-apiserver-fbd6ccfdb-9smqx\" (UID: \"947e4c3f-edaa-4455-9701-8eca3788c1c9\") " pod="calico-apiserver/calico-apiserver-fbd6ccfdb-9smqx" Nov 5 14:59:42.011771 kubelet[2690]: I1105 14:59:42.010867 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgvn7\" (UniqueName: \"kubernetes.io/projected/947e4c3f-edaa-4455-9701-8eca3788c1c9-kube-api-access-bgvn7\") pod \"calico-apiserver-fbd6ccfdb-9smqx\" (UID: \"947e4c3f-edaa-4455-9701-8eca3788c1c9\") " pod="calico-apiserver/calico-apiserver-fbd6ccfdb-9smqx" Nov 5 14:59:42.011771 kubelet[2690]: I1105 14:59:42.010888 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e669a08-2258-4b09-8c26-2c12b7651335-config-volume\") pod 
\"coredns-668d6bf9bc-6z6fv\" (UID: \"0e669a08-2258-4b09-8c26-2c12b7651335\") " pod="kube-system/coredns-668d6bf9bc-6z6fv" Nov 5 14:59:42.011771 kubelet[2690]: I1105 14:59:42.010924 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75461a76-a686-4ba2-aacc-266a6fc4971c-config\") pod \"goldmane-666569f655-rthls\" (UID: \"75461a76-a686-4ba2-aacc-266a6fc4971c\") " pod="calico-system/goldmane-666569f655-rthls" Nov 5 14:59:42.011905 kubelet[2690]: I1105 14:59:42.010945 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/75461a76-a686-4ba2-aacc-266a6fc4971c-goldmane-key-pair\") pod \"goldmane-666569f655-rthls\" (UID: \"75461a76-a686-4ba2-aacc-266a6fc4971c\") " pod="calico-system/goldmane-666569f655-rthls" Nov 5 14:59:42.011905 kubelet[2690]: I1105 14:59:42.010963 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcf60f8c-179e-4bff-8ac8-93bb2db7eacf-tigera-ca-bundle\") pod \"calico-kube-controllers-7cd5cf7f85-nwqhp\" (UID: \"bcf60f8c-179e-4bff-8ac8-93bb2db7eacf\") " pod="calico-system/calico-kube-controllers-7cd5cf7f85-nwqhp" Nov 5 14:59:42.011905 kubelet[2690]: I1105 14:59:42.010981 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9eac046d-fe02-4a83-bfff-937118044d14-whisker-backend-key-pair\") pod \"whisker-588f9895b5-dl7jw\" (UID: \"9eac046d-fe02-4a83-bfff-937118044d14\") " pod="calico-system/whisker-588f9895b5-dl7jw" Nov 5 14:59:42.011905 kubelet[2690]: I1105 14:59:42.011014 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92d65\" (UniqueName: 
\"kubernetes.io/projected/0e669a08-2258-4b09-8c26-2c12b7651335-kube-api-access-92d65\") pod \"coredns-668d6bf9bc-6z6fv\" (UID: \"0e669a08-2258-4b09-8c26-2c12b7651335\") " pod="kube-system/coredns-668d6bf9bc-6z6fv" Nov 5 14:59:42.011905 kubelet[2690]: I1105 14:59:42.011041 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn4xz\" (UniqueName: \"kubernetes.io/projected/bac974d5-2052-4432-9839-70f531dc6657-kube-api-access-tn4xz\") pod \"calico-apiserver-fbd6ccfdb-hgtng\" (UID: \"bac974d5-2052-4432-9839-70f531dc6657\") " pod="calico-apiserver/calico-apiserver-fbd6ccfdb-hgtng" Nov 5 14:59:42.012015 kubelet[2690]: I1105 14:59:42.011057 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gtsq\" (UniqueName: \"kubernetes.io/projected/75461a76-a686-4ba2-aacc-266a6fc4971c-kube-api-access-5gtsq\") pod \"goldmane-666569f655-rthls\" (UID: \"75461a76-a686-4ba2-aacc-266a6fc4971c\") " pod="calico-system/goldmane-666569f655-rthls" Nov 5 14:59:42.012015 kubelet[2690]: I1105 14:59:42.011073 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bac974d5-2052-4432-9839-70f531dc6657-calico-apiserver-certs\") pod \"calico-apiserver-fbd6ccfdb-hgtng\" (UID: \"bac974d5-2052-4432-9839-70f531dc6657\") " pod="calico-apiserver/calico-apiserver-fbd6ccfdb-hgtng" Nov 5 14:59:42.012015 kubelet[2690]: I1105 14:59:42.011094 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8848db76-24a9-4a44-87aa-bfca385d1093-config-volume\") pod \"coredns-668d6bf9bc-p5z9n\" (UID: \"8848db76-24a9-4a44-87aa-bfca385d1093\") " pod="kube-system/coredns-668d6bf9bc-p5z9n" Nov 5 14:59:42.012015 kubelet[2690]: I1105 14:59:42.011111 2690 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/870c178e-41c4-4ab0-8a1e-1bcbcc89ae10-calico-apiserver-certs\") pod \"calico-apiserver-59c8c4d79f-vmsfn\" (UID: \"870c178e-41c4-4ab0-8a1e-1bcbcc89ae10\") " pod="calico-apiserver/calico-apiserver-59c8c4d79f-vmsfn" Nov 5 14:59:42.012015 kubelet[2690]: I1105 14:59:42.011146 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlcwt\" (UniqueName: \"kubernetes.io/projected/bcf60f8c-179e-4bff-8ac8-93bb2db7eacf-kube-api-access-dlcwt\") pod \"calico-kube-controllers-7cd5cf7f85-nwqhp\" (UID: \"bcf60f8c-179e-4bff-8ac8-93bb2db7eacf\") " pod="calico-system/calico-kube-controllers-7cd5cf7f85-nwqhp" Nov 5 14:59:42.012117 kubelet[2690]: I1105 14:59:42.011163 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9eac046d-fe02-4a83-bfff-937118044d14-whisker-ca-bundle\") pod \"whisker-588f9895b5-dl7jw\" (UID: \"9eac046d-fe02-4a83-bfff-937118044d14\") " pod="calico-system/whisker-588f9895b5-dl7jw" Nov 5 14:59:42.012117 kubelet[2690]: I1105 14:59:42.011179 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw4bv\" (UniqueName: \"kubernetes.io/projected/9eac046d-fe02-4a83-bfff-937118044d14-kube-api-access-kw4bv\") pod \"whisker-588f9895b5-dl7jw\" (UID: \"9eac046d-fe02-4a83-bfff-937118044d14\") " pod="calico-system/whisker-588f9895b5-dl7jw" Nov 5 14:59:42.012117 kubelet[2690]: I1105 14:59:42.011196 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75461a76-a686-4ba2-aacc-266a6fc4971c-goldmane-ca-bundle\") pod \"goldmane-666569f655-rthls\" (UID: \"75461a76-a686-4ba2-aacc-266a6fc4971c\") " 
pod="calico-system/goldmane-666569f655-rthls" Nov 5 14:59:42.252260 containerd[1553]: time="2025-11-05T14:59:42.251966248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fbd6ccfdb-hgtng,Uid:bac974d5-2052-4432-9839-70f531dc6657,Namespace:calico-apiserver,Attempt:0,}" Nov 5 14:59:42.262030 containerd[1553]: time="2025-11-05T14:59:42.261800150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fbd6ccfdb-9smqx,Uid:947e4c3f-edaa-4455-9701-8eca3788c1c9,Namespace:calico-apiserver,Attempt:0,}" Nov 5 14:59:42.268138 kubelet[2690]: E1105 14:59:42.268080 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:42.277888 containerd[1553]: time="2025-11-05T14:59:42.277745241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd5cf7f85-nwqhp,Uid:bcf60f8c-179e-4bff-8ac8-93bb2db7eacf,Namespace:calico-system,Attempt:0,}" Nov 5 14:59:42.278310 containerd[1553]: time="2025-11-05T14:59:42.278281080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5z9n,Uid:8848db76-24a9-4a44-87aa-bfca385d1093,Namespace:kube-system,Attempt:0,}" Nov 5 14:59:42.287153 kubelet[2690]: E1105 14:59:42.287107 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:42.288744 containerd[1553]: time="2025-11-05T14:59:42.288704741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6z6fv,Uid:0e669a08-2258-4b09-8c26-2c12b7651335,Namespace:kube-system,Attempt:0,}" Nov 5 14:59:42.295819 containerd[1553]: time="2025-11-05T14:59:42.295715088Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-59c8c4d79f-vmsfn,Uid:870c178e-41c4-4ab0-8a1e-1bcbcc89ae10,Namespace:calico-apiserver,Attempt:0,}" Nov 5 14:59:42.307213 containerd[1553]: time="2025-11-05T14:59:42.307160107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rthls,Uid:75461a76-a686-4ba2-aacc-266a6fc4971c,Namespace:calico-system,Attempt:0,}" Nov 5 14:59:42.379478 kubelet[2690]: E1105 14:59:42.379341 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:42.381381 containerd[1553]: time="2025-11-05T14:59:42.381144172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 14:59:42.385327 containerd[1553]: time="2025-11-05T14:59:42.385273804Z" level=error msg="Failed to destroy network for sandbox \"e7246830723856872941fdd1e8199d28602f604a84b4db39e79776b60220c092\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.390980 containerd[1553]: time="2025-11-05T14:59:42.390927314Z" level=error msg="Failed to destroy network for sandbox \"cce7de7c11ea8cc3d0c96f92948b1883a5d59b3d6e27cca6b4fe50aa778c8418\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.397781 containerd[1553]: time="2025-11-05T14:59:42.397727782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fbd6ccfdb-hgtng,Uid:bac974d5-2052-4432-9839-70f531dc6657,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7246830723856872941fdd1e8199d28602f604a84b4db39e79776b60220c092\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.398998 containerd[1553]: time="2025-11-05T14:59:42.398951779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd5cf7f85-nwqhp,Uid:bcf60f8c-179e-4bff-8ac8-93bb2db7eacf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cce7de7c11ea8cc3d0c96f92948b1883a5d59b3d6e27cca6b4fe50aa778c8418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.399482 containerd[1553]: time="2025-11-05T14:59:42.399437819Z" level=error msg="Failed to destroy network for sandbox \"84b1a0640c15af8c18131598b73bc52afaf29f456b9274904b461b3b896465d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.401525 kubelet[2690]: E1105 14:59:42.401433 2690 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7246830723856872941fdd1e8199d28602f604a84b4db39e79776b60220c092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.401836 kubelet[2690]: E1105 14:59:42.401810 2690 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7246830723856872941fdd1e8199d28602f604a84b4db39e79776b60220c092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-fbd6ccfdb-hgtng" Nov 5 14:59:42.402115 kubelet[2690]: E1105 14:59:42.402040 2690 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7246830723856872941fdd1e8199d28602f604a84b4db39e79776b60220c092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fbd6ccfdb-hgtng" Nov 5 14:59:42.402216 kubelet[2690]: E1105 14:59:42.402094 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fbd6ccfdb-hgtng_calico-apiserver(bac974d5-2052-4432-9839-70f531dc6657)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fbd6ccfdb-hgtng_calico-apiserver(bac974d5-2052-4432-9839-70f531dc6657)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7246830723856872941fdd1e8199d28602f604a84b4db39e79776b60220c092\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fbd6ccfdb-hgtng" podUID="bac974d5-2052-4432-9839-70f531dc6657" Nov 5 14:59:42.402402 kubelet[2690]: E1105 14:59:42.401433 2690 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cce7de7c11ea8cc3d0c96f92948b1883a5d59b3d6e27cca6b4fe50aa778c8418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.402472 kubelet[2690]: E1105 14:59:42.402419 2690 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"cce7de7c11ea8cc3d0c96f92948b1883a5d59b3d6e27cca6b4fe50aa778c8418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cd5cf7f85-nwqhp" Nov 5 14:59:42.402472 kubelet[2690]: E1105 14:59:42.402439 2690 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cce7de7c11ea8cc3d0c96f92948b1883a5d59b3d6e27cca6b4fe50aa778c8418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cd5cf7f85-nwqhp" Nov 5 14:59:42.402710 kubelet[2690]: E1105 14:59:42.402483 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cd5cf7f85-nwqhp_calico-system(bcf60f8c-179e-4bff-8ac8-93bb2db7eacf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cd5cf7f85-nwqhp_calico-system(bcf60f8c-179e-4bff-8ac8-93bb2db7eacf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cce7de7c11ea8cc3d0c96f92948b1883a5d59b3d6e27cca6b4fe50aa778c8418\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cd5cf7f85-nwqhp" podUID="bcf60f8c-179e-4bff-8ac8-93bb2db7eacf" Nov 5 14:59:42.404375 containerd[1553]: time="2025-11-05T14:59:42.404016250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6z6fv,Uid:0e669a08-2258-4b09-8c26-2c12b7651335,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"84b1a0640c15af8c18131598b73bc52afaf29f456b9274904b461b3b896465d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.404980 kubelet[2690]: E1105 14:59:42.404948 2690 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84b1a0640c15af8c18131598b73bc52afaf29f456b9274904b461b3b896465d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.405061 kubelet[2690]: E1105 14:59:42.404994 2690 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84b1a0640c15af8c18131598b73bc52afaf29f456b9274904b461b3b896465d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6z6fv" Nov 5 14:59:42.405061 kubelet[2690]: E1105 14:59:42.405013 2690 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84b1a0640c15af8c18131598b73bc52afaf29f456b9274904b461b3b896465d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6z6fv" Nov 5 14:59:42.405061 kubelet[2690]: E1105 14:59:42.405048 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6z6fv_kube-system(0e669a08-2258-4b09-8c26-2c12b7651335)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"coredns-668d6bf9bc-6z6fv_kube-system(0e669a08-2258-4b09-8c26-2c12b7651335)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84b1a0640c15af8c18131598b73bc52afaf29f456b9274904b461b3b896465d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6z6fv" podUID="0e669a08-2258-4b09-8c26-2c12b7651335" Nov 5 14:59:42.409320 containerd[1553]: time="2025-11-05T14:59:42.409261161Z" level=error msg="Failed to destroy network for sandbox \"e277d5bd0eeb8cb3452b80b26fd3544e4773a4efe52b9c54eae78f94b4c7d944\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.415075 containerd[1553]: time="2025-11-05T14:59:42.414938350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5z9n,Uid:8848db76-24a9-4a44-87aa-bfca385d1093,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e277d5bd0eeb8cb3452b80b26fd3544e4773a4efe52b9c54eae78f94b4c7d944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.415220 kubelet[2690]: E1105 14:59:42.415170 2690 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e277d5bd0eeb8cb3452b80b26fd3544e4773a4efe52b9c54eae78f94b4c7d944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.415769 kubelet[2690]: E1105 14:59:42.415719 2690 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e277d5bd0eeb8cb3452b80b26fd3544e4773a4efe52b9c54eae78f94b4c7d944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5z9n" Nov 5 14:59:42.415817 kubelet[2690]: E1105 14:59:42.415766 2690 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e277d5bd0eeb8cb3452b80b26fd3544e4773a4efe52b9c54eae78f94b4c7d944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5z9n" Nov 5 14:59:42.415840 kubelet[2690]: E1105 14:59:42.415812 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p5z9n_kube-system(8848db76-24a9-4a44-87aa-bfca385d1093)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p5z9n_kube-system(8848db76-24a9-4a44-87aa-bfca385d1093)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e277d5bd0eeb8cb3452b80b26fd3544e4773a4efe52b9c54eae78f94b4c7d944\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p5z9n" podUID="8848db76-24a9-4a44-87aa-bfca385d1093" Nov 5 14:59:42.429426 containerd[1553]: time="2025-11-05T14:59:42.429345124Z" level=error msg="Failed to destroy network for sandbox \"b26dcfce0ac161cb2a991ec9c7df647081a70455b7224a0af8711bb835d4d44d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.429426 containerd[1553]: time="2025-11-05T14:59:42.429345724Z" level=error msg="Failed to destroy network for sandbox \"59c30023e582b44b8d3c15cecc1ae2713860d123e47bc83f702730df5349061f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.430279 containerd[1553]: time="2025-11-05T14:59:42.430250882Z" level=error msg="Failed to destroy network for sandbox \"2fa660ed5f0448a2d8edfb9f74965f83cf6315d4f98a1dcc121d6d9892b04bc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.430803 containerd[1553]: time="2025-11-05T14:59:42.430752881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rthls,Uid:75461a76-a686-4ba2-aacc-266a6fc4971c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b26dcfce0ac161cb2a991ec9c7df647081a70455b7224a0af8711bb835d4d44d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.431033 kubelet[2690]: E1105 14:59:42.430992 2690 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b26dcfce0ac161cb2a991ec9c7df647081a70455b7224a0af8711bb835d4d44d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.431097 kubelet[2690]: E1105 14:59:42.431057 2690 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b26dcfce0ac161cb2a991ec9c7df647081a70455b7224a0af8711bb835d4d44d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rthls" Nov 5 14:59:42.431097 kubelet[2690]: E1105 14:59:42.431079 2690 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b26dcfce0ac161cb2a991ec9c7df647081a70455b7224a0af8711bb835d4d44d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rthls" Nov 5 14:59:42.431143 kubelet[2690]: E1105 14:59:42.431123 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-rthls_calico-system(75461a76-a686-4ba2-aacc-266a6fc4971c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-rthls_calico-system(75461a76-a686-4ba2-aacc-266a6fc4971c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b26dcfce0ac161cb2a991ec9c7df647081a70455b7224a0af8711bb835d4d44d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-rthls" podUID="75461a76-a686-4ba2-aacc-266a6fc4971c" Nov 5 14:59:42.432041 containerd[1553]: time="2025-11-05T14:59:42.431926039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c8c4d79f-vmsfn,Uid:870c178e-41c4-4ab0-8a1e-1bcbcc89ae10,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"59c30023e582b44b8d3c15cecc1ae2713860d123e47bc83f702730df5349061f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.432146 kubelet[2690]: E1105 14:59:42.432100 2690 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59c30023e582b44b8d3c15cecc1ae2713860d123e47bc83f702730df5349061f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.432188 kubelet[2690]: E1105 14:59:42.432139 2690 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59c30023e582b44b8d3c15cecc1ae2713860d123e47bc83f702730df5349061f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59c8c4d79f-vmsfn" Nov 5 14:59:42.432188 kubelet[2690]: E1105 14:59:42.432162 2690 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59c30023e582b44b8d3c15cecc1ae2713860d123e47bc83f702730df5349061f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59c8c4d79f-vmsfn" Nov 5 14:59:42.432188 kubelet[2690]: E1105 14:59:42.432210 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59c8c4d79f-vmsfn_calico-apiserver(870c178e-41c4-4ab0-8a1e-1bcbcc89ae10)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59c8c4d79f-vmsfn_calico-apiserver(870c178e-41c4-4ab0-8a1e-1bcbcc89ae10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59c30023e582b44b8d3c15cecc1ae2713860d123e47bc83f702730df5349061f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59c8c4d79f-vmsfn" podUID="870c178e-41c4-4ab0-8a1e-1bcbcc89ae10" Nov 5 14:59:42.433155 containerd[1553]: time="2025-11-05T14:59:42.433068957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fbd6ccfdb-9smqx,Uid:947e4c3f-edaa-4455-9701-8eca3788c1c9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fa660ed5f0448a2d8edfb9f74965f83cf6315d4f98a1dcc121d6d9892b04bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.433262 kubelet[2690]: E1105 14:59:42.433227 2690 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fa660ed5f0448a2d8edfb9f74965f83cf6315d4f98a1dcc121d6d9892b04bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:42.433294 kubelet[2690]: E1105 14:59:42.433274 2690 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fa660ed5f0448a2d8edfb9f74965f83cf6315d4f98a1dcc121d6d9892b04bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fbd6ccfdb-9smqx" Nov 5 14:59:42.433319 kubelet[2690]: E1105 14:59:42.433290 2690 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fa660ed5f0448a2d8edfb9f74965f83cf6315d4f98a1dcc121d6d9892b04bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fbd6ccfdb-9smqx" Nov 5 14:59:42.433343 kubelet[2690]: E1105 14:59:42.433321 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fbd6ccfdb-9smqx_calico-apiserver(947e4c3f-edaa-4455-9701-8eca3788c1c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fbd6ccfdb-9smqx_calico-apiserver(947e4c3f-edaa-4455-9701-8eca3788c1c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fa660ed5f0448a2d8edfb9f74965f83cf6315d4f98a1dcc121d6d9892b04bc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fbd6ccfdb-9smqx" podUID="947e4c3f-edaa-4455-9701-8eca3788c1c9" Nov 5 14:59:43.126595 systemd[1]: run-netns-cni\x2d99a1a148\x2d2fb3\x2dd8ae\x2d6468\x2d4ac1f7c410cc.mount: Deactivated successfully. 
Nov 5 14:59:43.203732 containerd[1553]: time="2025-11-05T14:59:43.203669364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-588f9895b5-dl7jw,Uid:9eac046d-fe02-4a83-bfff-937118044d14,Namespace:calico-system,Attempt:0,}" Nov 5 14:59:43.246075 systemd[1]: Created slice kubepods-besteffort-pod1c59cb2f_c3de_4d4b_a46e_9bc0038d4b98.slice - libcontainer container kubepods-besteffort-pod1c59cb2f_c3de_4d4b_a46e_9bc0038d4b98.slice. Nov 5 14:59:43.248064 containerd[1553]: time="2025-11-05T14:59:43.248032766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kr5x,Uid:1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98,Namespace:calico-system,Attempt:0,}" Nov 5 14:59:43.256334 containerd[1553]: time="2025-11-05T14:59:43.256294832Z" level=error msg="Failed to destroy network for sandbox \"e11c346c9d1a6960cded3c92a55f51da67f04174ac23fd398e4eb6270c734419\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:43.257916 systemd[1]: run-netns-cni\x2d42c9c719\x2d402e\x2dfd34\x2dfabb\x2d645271ad74fd.mount: Deactivated successfully. 
Nov 5 14:59:43.259069 containerd[1553]: time="2025-11-05T14:59:43.258998507Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-588f9895b5-dl7jw,Uid:9eac046d-fe02-4a83-bfff-937118044d14,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e11c346c9d1a6960cded3c92a55f51da67f04174ac23fd398e4eb6270c734419\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:43.259272 kubelet[2690]: E1105 14:59:43.259229 2690 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e11c346c9d1a6960cded3c92a55f51da67f04174ac23fd398e4eb6270c734419\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:43.259335 kubelet[2690]: E1105 14:59:43.259286 2690 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e11c346c9d1a6960cded3c92a55f51da67f04174ac23fd398e4eb6270c734419\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-588f9895b5-dl7jw" Nov 5 14:59:43.259335 kubelet[2690]: E1105 14:59:43.259306 2690 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e11c346c9d1a6960cded3c92a55f51da67f04174ac23fd398e4eb6270c734419\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-588f9895b5-dl7jw" Nov 5 14:59:43.259391 kubelet[2690]: E1105 14:59:43.259345 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-588f9895b5-dl7jw_calico-system(9eac046d-fe02-4a83-bfff-937118044d14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-588f9895b5-dl7jw_calico-system(9eac046d-fe02-4a83-bfff-937118044d14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e11c346c9d1a6960cded3c92a55f51da67f04174ac23fd398e4eb6270c734419\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-588f9895b5-dl7jw" podUID="9eac046d-fe02-4a83-bfff-937118044d14" Nov 5 14:59:43.299104 containerd[1553]: time="2025-11-05T14:59:43.298927837Z" level=error msg="Failed to destroy network for sandbox \"2d1c0f6fb3ca495c87aced3acabf481bfe2761d5bddc431ca3979c18270f5d99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:43.300823 systemd[1]: run-netns-cni\x2d82c86fd3\x2d2b02\x2dbbf7\x2d2623\x2dfeba2dda7bea.mount: Deactivated successfully. 
Nov 5 14:59:43.302995 containerd[1553]: time="2025-11-05T14:59:43.302914590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kr5x,Uid:1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d1c0f6fb3ca495c87aced3acabf481bfe2761d5bddc431ca3979c18270f5d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:43.304976 kubelet[2690]: E1105 14:59:43.303186 2690 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d1c0f6fb3ca495c87aced3acabf481bfe2761d5bddc431ca3979c18270f5d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:59:43.305253 kubelet[2690]: E1105 14:59:43.304995 2690 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d1c0f6fb3ca495c87aced3acabf481bfe2761d5bddc431ca3979c18270f5d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kr5x" Nov 5 14:59:43.305253 kubelet[2690]: E1105 14:59:43.305018 2690 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d1c0f6fb3ca495c87aced3acabf481bfe2761d5bddc431ca3979c18270f5d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kr5x" Nov 5 
14:59:43.305253 kubelet[2690]: E1105 14:59:43.305055 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9kr5x_calico-system(1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9kr5x_calico-system(1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d1c0f6fb3ca495c87aced3acabf481bfe2761d5bddc431ca3979c18270f5d99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9kr5x" podUID="1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98" Nov 5 14:59:45.551448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2704522894.mount: Deactivated successfully. Nov 5 14:59:45.676721 containerd[1553]: time="2025-11-05T14:59:45.676650853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:45.677272 containerd[1553]: time="2025-11-05T14:59:45.677234932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 5 14:59:45.678228 containerd[1553]: time="2025-11-05T14:59:45.678150971Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:45.679792 containerd[1553]: time="2025-11-05T14:59:45.679698208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:45.680947 containerd[1553]: time="2025-11-05T14:59:45.680820447Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.299636155s" Nov 5 14:59:45.680947 containerd[1553]: time="2025-11-05T14:59:45.680853527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 5 14:59:45.691062 containerd[1553]: time="2025-11-05T14:59:45.691024790Z" level=info msg="CreateContainer within sandbox \"c30ce7aa83bf91e9fd152889bdfeb3484c02e00f4d7eef8b4a91b654bdfd28d2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 14:59:45.757779 containerd[1553]: time="2025-11-05T14:59:45.757324602Z" level=info msg="Container 286acd6536114b96812e286abcc3e1860b5fdc7fdec4dcb37726b7e1627e4e76: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:45.770023 containerd[1553]: time="2025-11-05T14:59:45.769951581Z" level=info msg="CreateContainer within sandbox \"c30ce7aa83bf91e9fd152889bdfeb3484c02e00f4d7eef8b4a91b654bdfd28d2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"286acd6536114b96812e286abcc3e1860b5fdc7fdec4dcb37726b7e1627e4e76\"" Nov 5 14:59:45.770792 containerd[1553]: time="2025-11-05T14:59:45.770764660Z" level=info msg="StartContainer for \"286acd6536114b96812e286abcc3e1860b5fdc7fdec4dcb37726b7e1627e4e76\"" Nov 5 14:59:45.772709 containerd[1553]: time="2025-11-05T14:59:45.772680417Z" level=info msg="connecting to shim 286acd6536114b96812e286abcc3e1860b5fdc7fdec4dcb37726b7e1627e4e76" address="unix:///run/containerd/s/dd2c4f12e337a84bf5b77871cf261ca316a9a609183d943b86f442ddf257edf7" protocol=ttrpc version=3 Nov 5 14:59:45.795434 systemd[1]: Started 
cri-containerd-286acd6536114b96812e286abcc3e1860b5fdc7fdec4dcb37726b7e1627e4e76.scope - libcontainer container 286acd6536114b96812e286abcc3e1860b5fdc7fdec4dcb37726b7e1627e4e76. Nov 5 14:59:45.831036 containerd[1553]: time="2025-11-05T14:59:45.830667083Z" level=info msg="StartContainer for \"286acd6536114b96812e286abcc3e1860b5fdc7fdec4dcb37726b7e1627e4e76\" returns successfully" Nov 5 14:59:45.954210 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 14:59:45.954386 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 5 14:59:46.141521 kubelet[2690]: I1105 14:59:46.141407 2690 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9eac046d-fe02-4a83-bfff-937118044d14-whisker-ca-bundle\") pod \"9eac046d-fe02-4a83-bfff-937118044d14\" (UID: \"9eac046d-fe02-4a83-bfff-937118044d14\") " Nov 5 14:59:46.141521 kubelet[2690]: I1105 14:59:46.141472 2690 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9eac046d-fe02-4a83-bfff-937118044d14-whisker-backend-key-pair\") pod \"9eac046d-fe02-4a83-bfff-937118044d14\" (UID: \"9eac046d-fe02-4a83-bfff-937118044d14\") " Nov 5 14:59:46.141521 kubelet[2690]: I1105 14:59:46.141521 2690 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw4bv\" (UniqueName: \"kubernetes.io/projected/9eac046d-fe02-4a83-bfff-937118044d14-kube-api-access-kw4bv\") pod \"9eac046d-fe02-4a83-bfff-937118044d14\" (UID: \"9eac046d-fe02-4a83-bfff-937118044d14\") " Nov 5 14:59:46.146618 kubelet[2690]: I1105 14:59:46.146575 2690 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eac046d-fe02-4a83-bfff-937118044d14-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9eac046d-fe02-4a83-bfff-937118044d14" (UID: 
"9eac046d-fe02-4a83-bfff-937118044d14"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 14:59:46.149553 kubelet[2690]: I1105 14:59:46.149515 2690 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eac046d-fe02-4a83-bfff-937118044d14-kube-api-access-kw4bv" (OuterVolumeSpecName: "kube-api-access-kw4bv") pod "9eac046d-fe02-4a83-bfff-937118044d14" (UID: "9eac046d-fe02-4a83-bfff-937118044d14"). InnerVolumeSpecName "kube-api-access-kw4bv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 14:59:46.150123 kubelet[2690]: I1105 14:59:46.150097 2690 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eac046d-fe02-4a83-bfff-937118044d14-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9eac046d-fe02-4a83-bfff-937118044d14" (UID: "9eac046d-fe02-4a83-bfff-937118044d14"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 14:59:46.242678 kubelet[2690]: I1105 14:59:46.242639 2690 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kw4bv\" (UniqueName: \"kubernetes.io/projected/9eac046d-fe02-4a83-bfff-937118044d14-kube-api-access-kw4bv\") on node \"localhost\" DevicePath \"\"" Nov 5 14:59:46.242851 kubelet[2690]: I1105 14:59:46.242816 2690 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9eac046d-fe02-4a83-bfff-937118044d14-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 5 14:59:46.242851 kubelet[2690]: I1105 14:59:46.242834 2690 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9eac046d-fe02-4a83-bfff-937118044d14-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 5 14:59:46.391889 kubelet[2690]: E1105 14:59:46.391613 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:46.398235 systemd[1]: Removed slice kubepods-besteffort-pod9eac046d_fe02_4a83_bfff_937118044d14.slice - libcontainer container kubepods-besteffort-pod9eac046d_fe02_4a83_bfff_937118044d14.slice. 
Nov 5 14:59:46.409863 kubelet[2690]: I1105 14:59:46.409762 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-w8x67" podStartSLOduration=1.33153452 podStartE2EDuration="11.409742803s" podCreationTimestamp="2025-11-05 14:59:35 +0000 UTC" firstStartedPulling="2025-11-05 14:59:35.603308882 +0000 UTC m=+22.485882040" lastFinishedPulling="2025-11-05 14:59:45.681517165 +0000 UTC m=+32.564090323" observedRunningTime="2025-11-05 14:59:46.409315084 +0000 UTC m=+33.291888282" watchObservedRunningTime="2025-11-05 14:59:46.409742803 +0000 UTC m=+33.292315961" Nov 5 14:59:46.466513 systemd[1]: Created slice kubepods-besteffort-pod4b663924_3b8e_4932_b608_4cd05d743871.slice - libcontainer container kubepods-besteffort-pod4b663924_3b8e_4932_b608_4cd05d743871.slice. Nov 5 14:59:46.546398 kubelet[2690]: I1105 14:59:46.546352 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4b663924-3b8e-4932-b608-4cd05d743871-whisker-backend-key-pair\") pod \"whisker-6fbd6bddd9-f68mf\" (UID: \"4b663924-3b8e-4932-b608-4cd05d743871\") " pod="calico-system/whisker-6fbd6bddd9-f68mf" Nov 5 14:59:46.546722 kubelet[2690]: I1105 14:59:46.546702 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kpwl\" (UniqueName: \"kubernetes.io/projected/4b663924-3b8e-4932-b608-4cd05d743871-kube-api-access-8kpwl\") pod \"whisker-6fbd6bddd9-f68mf\" (UID: \"4b663924-3b8e-4932-b608-4cd05d743871\") " pod="calico-system/whisker-6fbd6bddd9-f68mf" Nov 5 14:59:46.546867 kubelet[2690]: I1105 14:59:46.546839 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b663924-3b8e-4932-b608-4cd05d743871-whisker-ca-bundle\") pod \"whisker-6fbd6bddd9-f68mf\" (UID: 
\"4b663924-3b8e-4932-b608-4cd05d743871\") " pod="calico-system/whisker-6fbd6bddd9-f68mf" Nov 5 14:59:46.552681 systemd[1]: var-lib-kubelet-pods-9eac046d\x2dfe02\x2d4a83\x2dbfff\x2d937118044d14-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 14:59:46.552783 systemd[1]: var-lib-kubelet-pods-9eac046d\x2dfe02\x2d4a83\x2dbfff\x2d937118044d14-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkw4bv.mount: Deactivated successfully. Nov 5 14:59:46.770550 containerd[1553]: time="2025-11-05T14:59:46.770487996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fbd6bddd9-f68mf,Uid:4b663924-3b8e-4932-b608-4cd05d743871,Namespace:calico-system,Attempt:0,}" Nov 5 14:59:46.931166 systemd-networkd[1463]: cali5f87041b66b: Link UP Nov 5 14:59:46.932086 systemd-networkd[1463]: cali5f87041b66b: Gained carrier Nov 5 14:59:46.945823 containerd[1553]: 2025-11-05 14:59:46.803 [INFO][3856] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 14:59:46.945823 containerd[1553]: 2025-11-05 14:59:46.833 [INFO][3856] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6fbd6bddd9--f68mf-eth0 whisker-6fbd6bddd9- calico-system 4b663924-3b8e-4932-b608-4cd05d743871 949 0 2025-11-05 14:59:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6fbd6bddd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6fbd6bddd9-f68mf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5f87041b66b [] [] }} ContainerID="e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" Namespace="calico-system" Pod="whisker-6fbd6bddd9-f68mf" WorkloadEndpoint="localhost-k8s-whisker--6fbd6bddd9--f68mf-" Nov 5 14:59:46.945823 containerd[1553]: 2025-11-05 14:59:46.833 [INFO][3856] cni-plugin/k8s.go 
74: Extracted identifiers for CmdAddK8s ContainerID="e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" Namespace="calico-system" Pod="whisker-6fbd6bddd9-f68mf" WorkloadEndpoint="localhost-k8s-whisker--6fbd6bddd9--f68mf-eth0" Nov 5 14:59:46.945823 containerd[1553]: 2025-11-05 14:59:46.890 [INFO][3870] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" HandleID="k8s-pod-network.e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" Workload="localhost-k8s-whisker--6fbd6bddd9--f68mf-eth0" Nov 5 14:59:46.946038 containerd[1553]: 2025-11-05 14:59:46.890 [INFO][3870] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" HandleID="k8s-pod-network.e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" Workload="localhost-k8s-whisker--6fbd6bddd9--f68mf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a3b40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6fbd6bddd9-f68mf", "timestamp":"2025-11-05 14:59:46.890257728 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 14:59:46.946038 containerd[1553]: 2025-11-05 14:59:46.890 [INFO][3870] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 14:59:46.946038 containerd[1553]: 2025-11-05 14:59:46.890 [INFO][3870] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 14:59:46.946038 containerd[1553]: 2025-11-05 14:59:46.890 [INFO][3870] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:59:46.946038 containerd[1553]: 2025-11-05 14:59:46.900 [INFO][3870] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" host="localhost" Nov 5 14:59:46.946038 containerd[1553]: 2025-11-05 14:59:46.905 [INFO][3870] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:59:46.946038 containerd[1553]: 2025-11-05 14:59:46.909 [INFO][3870] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:59:46.946038 containerd[1553]: 2025-11-05 14:59:46.911 [INFO][3870] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:59:46.946038 containerd[1553]: 2025-11-05 14:59:46.912 [INFO][3870] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 14:59:46.946038 containerd[1553]: 2025-11-05 14:59:46.912 [INFO][3870] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" host="localhost" Nov 5 14:59:46.946338 containerd[1553]: 2025-11-05 14:59:46.914 [INFO][3870] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91 Nov 5 14:59:46.946338 containerd[1553]: 2025-11-05 14:59:46.917 [INFO][3870] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" host="localhost" Nov 5 14:59:46.946338 containerd[1553]: 2025-11-05 14:59:46.921 [INFO][3870] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" host="localhost" Nov 5 14:59:46.946338 containerd[1553]: 2025-11-05 14:59:46.922 [INFO][3870] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" host="localhost" Nov 5 14:59:46.946338 containerd[1553]: 2025-11-05 14:59:46.922 [INFO][3870] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 14:59:46.946338 containerd[1553]: 2025-11-05 14:59:46.922 [INFO][3870] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" HandleID="k8s-pod-network.e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" Workload="localhost-k8s-whisker--6fbd6bddd9--f68mf-eth0" Nov 5 14:59:46.946458 containerd[1553]: 2025-11-05 14:59:46.924 [INFO][3856] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" Namespace="calico-system" Pod="whisker-6fbd6bddd9-f68mf" WorkloadEndpoint="localhost-k8s-whisker--6fbd6bddd9--f68mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6fbd6bddd9--f68mf-eth0", GenerateName:"whisker-6fbd6bddd9-", Namespace:"calico-system", SelfLink:"", UID:"4b663924-3b8e-4932-b608-4cd05d743871", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fbd6bddd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6fbd6bddd9-f68mf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5f87041b66b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:46.946458 containerd[1553]: 2025-11-05 14:59:46.924 [INFO][3856] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" Namespace="calico-system" Pod="whisker-6fbd6bddd9-f68mf" WorkloadEndpoint="localhost-k8s-whisker--6fbd6bddd9--f68mf-eth0" Nov 5 14:59:46.946541 containerd[1553]: 2025-11-05 14:59:46.924 [INFO][3856] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f87041b66b ContainerID="e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" Namespace="calico-system" Pod="whisker-6fbd6bddd9-f68mf" WorkloadEndpoint="localhost-k8s-whisker--6fbd6bddd9--f68mf-eth0" Nov 5 14:59:46.946541 containerd[1553]: 2025-11-05 14:59:46.932 [INFO][3856] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" Namespace="calico-system" Pod="whisker-6fbd6bddd9-f68mf" WorkloadEndpoint="localhost-k8s-whisker--6fbd6bddd9--f68mf-eth0" Nov 5 14:59:46.946582 containerd[1553]: 2025-11-05 14:59:46.933 [INFO][3856] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" Namespace="calico-system" Pod="whisker-6fbd6bddd9-f68mf" 
WorkloadEndpoint="localhost-k8s-whisker--6fbd6bddd9--f68mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6fbd6bddd9--f68mf-eth0", GenerateName:"whisker-6fbd6bddd9-", Namespace:"calico-system", SelfLink:"", UID:"4b663924-3b8e-4932-b608-4cd05d743871", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fbd6bddd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91", Pod:"whisker-6fbd6bddd9-f68mf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5f87041b66b", MAC:"26:23:00:88:85:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:46.946626 containerd[1553]: 2025-11-05 14:59:46.942 [INFO][3856] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" Namespace="calico-system" Pod="whisker-6fbd6bddd9-f68mf" WorkloadEndpoint="localhost-k8s-whisker--6fbd6bddd9--f68mf-eth0" Nov 5 14:59:46.989825 containerd[1553]: time="2025-11-05T14:59:46.989355573Z" level=info msg="connecting to shim 
e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91" address="unix:///run/containerd/s/f953259bafe4dfe39116343eb75dc9240939e50d2d48c0f7039a5c34e1a8e0fc" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:47.020424 systemd[1]: Started cri-containerd-e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91.scope - libcontainer container e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91. Nov 5 14:59:47.031435 systemd-resolved[1272]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:59:47.052392 containerd[1553]: time="2025-11-05T14:59:47.052352876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fbd6bddd9-f68mf,Uid:4b663924-3b8e-4932-b608-4cd05d743871,Namespace:calico-system,Attempt:0,} returns sandbox id \"e168b5a12aee8b6561ffeddaa97ce5bfab24ed5e6e55aea151d76c16bb9fec91\"" Nov 5 14:59:47.054017 containerd[1553]: time="2025-11-05T14:59:47.053993034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 14:59:47.241146 kubelet[2690]: I1105 14:59:47.241086 2690 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eac046d-fe02-4a83-bfff-937118044d14" path="/var/lib/kubelet/pods/9eac046d-fe02-4a83-bfff-937118044d14/volumes" Nov 5 14:59:47.282837 containerd[1553]: time="2025-11-05T14:59:47.282733007Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:59:47.286362 containerd[1553]: time="2025-11-05T14:59:47.286295442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 14:59:47.287423 containerd[1553]: time="2025-11-05T14:59:47.286399202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active 
requests=0, bytes read=73" Nov 5 14:59:47.293051 kubelet[2690]: E1105 14:59:47.286556 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 14:59:47.293051 kubelet[2690]: E1105 14:59:47.286617 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 14:59:47.293217 containerd[1553]: time="2025-11-05T14:59:47.289176717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 14:59:47.293252 kubelet[2690]: E1105 14:59:47.287161 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6dc3985fa5584b0d9571186f991028cd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8kpwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fbd6bddd9-f68mf_calico-system(4b663924-3b8e-4932-b608-4cd05d743871): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 14:59:47.397319 kubelet[2690]: I1105 14:59:47.397256 2690 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 14:59:47.398280 kubelet[2690]: 
E1105 14:59:47.398211 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:47.552404 containerd[1553]: time="2025-11-05T14:59:47.552272158Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:59:47.553471 containerd[1553]: time="2025-11-05T14:59:47.553434037Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 14:59:47.553987 containerd[1553]: time="2025-11-05T14:59:47.553530996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 14:59:47.554045 kubelet[2690]: E1105 14:59:47.553855 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 14:59:47.554045 kubelet[2690]: E1105 14:59:47.553901 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 14:59:47.555252 kubelet[2690]: E1105 14:59:47.554229 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kpwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fbd6bddd9-f68mf_calico-system(4b663924-3b8e-4932-b608-4cd05d743871): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 14:59:47.555523 kubelet[2690]: E1105 14:59:47.555411 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fbd6bddd9-f68mf" podUID="4b663924-3b8e-4932-b608-4cd05d743871" Nov 5 14:59:48.397487 kubelet[2690]: E1105 14:59:48.397411 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fbd6bddd9-f68mf" podUID="4b663924-3b8e-4932-b608-4cd05d743871" Nov 5 14:59:48.755364 systemd-networkd[1463]: cali5f87041b66b: Gained IPv6LL Nov 5 14:59:49.675473 systemd[1]: Started sshd@7-10.0.0.21:22-10.0.0.1:55170.service - OpenSSH per-connection server daemon (10.0.0.1:55170). Nov 5 14:59:49.733786 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 55170 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 14:59:49.735215 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:59:49.739322 systemd-logind[1534]: New session 8 of user core. Nov 5 14:59:49.750389 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 14:59:49.893502 sshd[4089]: Connection closed by 10.0.0.1 port 55170 Nov 5 14:59:49.894414 sshd-session[4086]: pam_unix(sshd:session): session closed for user core Nov 5 14:59:49.898459 systemd[1]: sshd@7-10.0.0.21:22-10.0.0.1:55170.service: Deactivated successfully. Nov 5 14:59:49.901474 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 14:59:49.902329 systemd-logind[1534]: Session 8 logged out. Waiting for processes to exit. Nov 5 14:59:49.903459 systemd-logind[1534]: Removed session 8. 
Nov 5 14:59:50.809398 kubelet[2690]: I1105 14:59:50.809340 2690 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 14:59:50.810732 kubelet[2690]: E1105 14:59:50.809833 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:50.913577 containerd[1553]: time="2025-11-05T14:59:50.913527699Z" level=info msg="TaskExit event in podsandbox handler container_id:\"286acd6536114b96812e286abcc3e1860b5fdc7fdec4dcb37726b7e1627e4e76\" id:\"b4d7423624f2e043a133f5975ced51fffe9b7f35a2144e008e8c0e3e9f69a833\" pid:4139 exit_status:1 exited_at:{seconds:1762354790 nanos:913190580}" Nov 5 14:59:51.004088 containerd[1553]: time="2025-11-05T14:59:51.004029535Z" level=info msg="TaskExit event in podsandbox handler container_id:\"286acd6536114b96812e286abcc3e1860b5fdc7fdec4dcb37726b7e1627e4e76\" id:\"d642bb087026bf8abefe80ab5ac9dfad8b6e2a5ace64f137cdadbbe0633b7b7e\" pid:4164 exit_status:1 exited_at:{seconds:1762354791 nanos:3701296}" Nov 5 14:59:53.238840 containerd[1553]: time="2025-11-05T14:59:53.238512176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd5cf7f85-nwqhp,Uid:bcf60f8c-179e-4bff-8ac8-93bb2db7eacf,Namespace:calico-system,Attempt:0,}" Nov 5 14:59:53.411104 systemd-networkd[1463]: cali9db817f4839: Link UP Nov 5 14:59:53.411904 systemd-networkd[1463]: cali9db817f4839: Gained carrier Nov 5 14:59:53.427367 containerd[1553]: 2025-11-05 14:59:53.334 [INFO][4225] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 14:59:53.427367 containerd[1553]: 2025-11-05 14:59:53.348 [INFO][4225] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7cd5cf7f85--nwqhp-eth0 calico-kube-controllers-7cd5cf7f85- calico-system bcf60f8c-179e-4bff-8ac8-93bb2db7eacf 885 0 2025-11-05 14:59:35 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cd5cf7f85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7cd5cf7f85-nwqhp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9db817f4839 [] [] }} ContainerID="9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" Namespace="calico-system" Pod="calico-kube-controllers-7cd5cf7f85-nwqhp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd5cf7f85--nwqhp-" Nov 5 14:59:53.427367 containerd[1553]: 2025-11-05 14:59:53.348 [INFO][4225] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" Namespace="calico-system" Pod="calico-kube-controllers-7cd5cf7f85-nwqhp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd5cf7f85--nwqhp-eth0" Nov 5 14:59:53.427367 containerd[1553]: 2025-11-05 14:59:53.369 [INFO][4242] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" HandleID="k8s-pod-network.9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" Workload="localhost-k8s-calico--kube--controllers--7cd5cf7f85--nwqhp-eth0" Nov 5 14:59:53.427586 containerd[1553]: 2025-11-05 14:59:53.369 [INFO][4242] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" HandleID="k8s-pod-network.9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" Workload="localhost-k8s-calico--kube--controllers--7cd5cf7f85--nwqhp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d36d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-kube-controllers-7cd5cf7f85-nwqhp", "timestamp":"2025-11-05 14:59:53.369127892 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 14:59:53.427586 containerd[1553]: 2025-11-05 14:59:53.369 [INFO][4242] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 14:59:53.427586 containerd[1553]: 2025-11-05 14:59:53.369 [INFO][4242] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 14:59:53.427586 containerd[1553]: 2025-11-05 14:59:53.369 [INFO][4242] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:59:53.427586 containerd[1553]: 2025-11-05 14:59:53.378 [INFO][4242] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" host="localhost" Nov 5 14:59:53.427586 containerd[1553]: 2025-11-05 14:59:53.383 [INFO][4242] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:59:53.427586 containerd[1553]: 2025-11-05 14:59:53.388 [INFO][4242] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:59:53.427586 containerd[1553]: 2025-11-05 14:59:53.390 [INFO][4242] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:59:53.427586 containerd[1553]: 2025-11-05 14:59:53.392 [INFO][4242] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 14:59:53.427586 containerd[1553]: 2025-11-05 14:59:53.392 [INFO][4242] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" host="localhost" Nov 5 14:59:53.427802 containerd[1553]: 2025-11-05 14:59:53.395 [INFO][4242] ipam/ipam.go 
1780: Creating new handle: k8s-pod-network.9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7 Nov 5 14:59:53.427802 containerd[1553]: 2025-11-05 14:59:53.399 [INFO][4242] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" host="localhost" Nov 5 14:59:53.427802 containerd[1553]: 2025-11-05 14:59:53.404 [INFO][4242] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" host="localhost" Nov 5 14:59:53.427802 containerd[1553]: 2025-11-05 14:59:53.404 [INFO][4242] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" host="localhost" Nov 5 14:59:53.427802 containerd[1553]: 2025-11-05 14:59:53.405 [INFO][4242] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 14:59:53.427802 containerd[1553]: 2025-11-05 14:59:53.405 [INFO][4242] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" HandleID="k8s-pod-network.9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" Workload="localhost-k8s-calico--kube--controllers--7cd5cf7f85--nwqhp-eth0" Nov 5 14:59:53.427919 containerd[1553]: 2025-11-05 14:59:53.408 [INFO][4225] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" Namespace="calico-system" Pod="calico-kube-controllers-7cd5cf7f85-nwqhp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd5cf7f85--nwqhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cd5cf7f85--nwqhp-eth0", GenerateName:"calico-kube-controllers-7cd5cf7f85-", Namespace:"calico-system", SelfLink:"", UID:"bcf60f8c-179e-4bff-8ac8-93bb2db7eacf", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cd5cf7f85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7cd5cf7f85-nwqhp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9db817f4839", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:53.427968 containerd[1553]: 2025-11-05 14:59:53.408 [INFO][4225] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" Namespace="calico-system" Pod="calico-kube-controllers-7cd5cf7f85-nwqhp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd5cf7f85--nwqhp-eth0" Nov 5 14:59:53.427968 containerd[1553]: 2025-11-05 14:59:53.408 [INFO][4225] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9db817f4839 ContainerID="9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" Namespace="calico-system" Pod="calico-kube-controllers-7cd5cf7f85-nwqhp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd5cf7f85--nwqhp-eth0" Nov 5 14:59:53.427968 containerd[1553]: 2025-11-05 14:59:53.411 [INFO][4225] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" Namespace="calico-system" Pod="calico-kube-controllers-7cd5cf7f85-nwqhp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd5cf7f85--nwqhp-eth0" Nov 5 14:59:53.428027 containerd[1553]: 2025-11-05 14:59:53.413 [INFO][4225] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" Namespace="calico-system" Pod="calico-kube-controllers-7cd5cf7f85-nwqhp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd5cf7f85--nwqhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cd5cf7f85--nwqhp-eth0", GenerateName:"calico-kube-controllers-7cd5cf7f85-", Namespace:"calico-system", SelfLink:"", UID:"bcf60f8c-179e-4bff-8ac8-93bb2db7eacf", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cd5cf7f85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7", Pod:"calico-kube-controllers-7cd5cf7f85-nwqhp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9db817f4839", MAC:"42:54:80:e7:16:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:53.428071 containerd[1553]: 2025-11-05 14:59:53.425 [INFO][4225] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" Namespace="calico-system" Pod="calico-kube-controllers-7cd5cf7f85-nwqhp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cd5cf7f85--nwqhp-eth0" Nov 5 14:59:53.446913 containerd[1553]: time="2025-11-05T14:59:53.446849555Z" level=info msg="connecting to shim 
9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7" address="unix:///run/containerd/s/8da38ddd0a8d480b9cbdf48ce52bcf84ae11c243a4bbce0e6962f46473c931a9" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:53.467378 systemd[1]: Started cri-containerd-9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7.scope - libcontainer container 9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7. Nov 5 14:59:53.477360 systemd-resolved[1272]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:59:53.497094 containerd[1553]: time="2025-11-05T14:59:53.496894252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd5cf7f85-nwqhp,Uid:bcf60f8c-179e-4bff-8ac8-93bb2db7eacf,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b429b7690c815e30b2e9f0b42ebd52837885ef14eef028411751e4bc5a670b7\"" Nov 5 14:59:53.501150 containerd[1553]: time="2025-11-05T14:59:53.501112407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 14:59:53.741466 containerd[1553]: time="2025-11-05T14:59:53.741415025Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:59:53.742321 containerd[1553]: time="2025-11-05T14:59:53.742278144Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 14:59:53.742392 containerd[1553]: time="2025-11-05T14:59:53.742349064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 14:59:53.742550 kubelet[2690]: E1105 14:59:53.742500 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 14:59:53.742812 kubelet[2690]: E1105 14:59:53.742571 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 14:59:53.742812 kubelet[2690]: E1105 14:59:53.742710 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlcwt,ReadOnly:true,MountPath:/v
ar/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cd5cf7f85-nwqhp_calico-system(bcf60f8c-179e-4bff-8ac8-93bb2db7eacf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 14:59:53.745225 kubelet[2690]: E1105 14:59:53.744771 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd5cf7f85-nwqhp" podUID="bcf60f8c-179e-4bff-8ac8-93bb2db7eacf" Nov 5 14:59:54.412355 kubelet[2690]: E1105 14:59:54.412283 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd5cf7f85-nwqhp" podUID="bcf60f8c-179e-4bff-8ac8-93bb2db7eacf" Nov 5 14:59:54.707368 systemd-networkd[1463]: cali9db817f4839: Gained IPv6LL Nov 5 14:59:54.910328 systemd[1]: Started sshd@8-10.0.0.21:22-10.0.0.1:55178.service - OpenSSH per-connection server daemon (10.0.0.1:55178). Nov 5 14:59:54.977872 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 55178 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 14:59:54.979259 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:59:54.983446 systemd-logind[1534]: New session 9 of user core. Nov 5 14:59:54.989375 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 14:59:55.106930 sshd[4355]: Connection closed by 10.0.0.1 port 55178 Nov 5 14:59:55.107246 sshd-session[4335]: pam_unix(sshd:session): session closed for user core Nov 5 14:59:55.110773 systemd[1]: sshd@8-10.0.0.21:22-10.0.0.1:55178.service: Deactivated successfully. Nov 5 14:59:55.112439 systemd[1]: session-9.scope: Deactivated successfully. 
Nov 5 14:59:55.113174 systemd-logind[1534]: Session 9 logged out. Waiting for processes to exit. Nov 5 14:59:55.114188 systemd-logind[1534]: Removed session 9. Nov 5 14:59:55.239136 kubelet[2690]: E1105 14:59:55.238842 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:55.239631 containerd[1553]: time="2025-11-05T14:59:55.239511634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5z9n,Uid:8848db76-24a9-4a44-87aa-bfca385d1093,Namespace:kube-system,Attempt:0,}" Nov 5 14:59:55.239631 containerd[1553]: time="2025-11-05T14:59:55.239546394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c8c4d79f-vmsfn,Uid:870c178e-41c4-4ab0-8a1e-1bcbcc89ae10,Namespace:calico-apiserver,Attempt:0,}" Nov 5 14:59:55.377822 systemd-networkd[1463]: cali368fe800cf0: Link UP Nov 5 14:59:55.378338 systemd-networkd[1463]: cali368fe800cf0: Gained carrier Nov 5 14:59:55.391802 containerd[1553]: 2025-11-05 14:59:55.272 [INFO][4371] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 14:59:55.391802 containerd[1553]: 2025-11-05 14:59:55.289 [INFO][4371] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--p5z9n-eth0 coredns-668d6bf9bc- kube-system 8848db76-24a9-4a44-87aa-bfca385d1093 881 0 2025-11-05 14:59:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-p5z9n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali368fe800cf0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-p5z9n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p5z9n-" Nov 5 14:59:55.391802 containerd[1553]: 2025-11-05 14:59:55.289 [INFO][4371] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5z9n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p5z9n-eth0" Nov 5 14:59:55.391802 containerd[1553]: 2025-11-05 14:59:55.314 [INFO][4405] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" HandleID="k8s-pod-network.c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" Workload="localhost-k8s-coredns--668d6bf9bc--p5z9n-eth0" Nov 5 14:59:55.392046 containerd[1553]: 2025-11-05 14:59:55.314 [INFO][4405] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" HandleID="k8s-pod-network.c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" Workload="localhost-k8s-coredns--668d6bf9bc--p5z9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-p5z9n", "timestamp":"2025-11-05 14:59:55.314798985 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 14:59:55.392046 containerd[1553]: 2025-11-05 14:59:55.314 [INFO][4405] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 14:59:55.392046 containerd[1553]: 2025-11-05 14:59:55.315 [INFO][4405] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 14:59:55.392046 containerd[1553]: 2025-11-05 14:59:55.315 [INFO][4405] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:59:55.392046 containerd[1553]: 2025-11-05 14:59:55.324 [INFO][4405] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" host="localhost" Nov 5 14:59:55.392046 containerd[1553]: 2025-11-05 14:59:55.328 [INFO][4405] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:59:55.392046 containerd[1553]: 2025-11-05 14:59:55.334 [INFO][4405] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:59:55.392046 containerd[1553]: 2025-11-05 14:59:55.336 [INFO][4405] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:59:55.392046 containerd[1553]: 2025-11-05 14:59:55.338 [INFO][4405] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 14:59:55.392046 containerd[1553]: 2025-11-05 14:59:55.338 [INFO][4405] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" host="localhost" Nov 5 14:59:55.392268 containerd[1553]: 2025-11-05 14:59:55.340 [INFO][4405] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8 Nov 5 14:59:55.392268 containerd[1553]: 2025-11-05 14:59:55.346 [INFO][4405] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" host="localhost" Nov 5 14:59:55.392268 containerd[1553]: 2025-11-05 14:59:55.372 [INFO][4405] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" host="localhost" Nov 5 14:59:55.392268 containerd[1553]: 2025-11-05 14:59:55.373 [INFO][4405] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" host="localhost" Nov 5 14:59:55.392268 containerd[1553]: 2025-11-05 14:59:55.373 [INFO][4405] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 14:59:55.392268 containerd[1553]: 2025-11-05 14:59:55.373 [INFO][4405] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" HandleID="k8s-pod-network.c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" Workload="localhost-k8s-coredns--668d6bf9bc--p5z9n-eth0" Nov 5 14:59:55.392377 containerd[1553]: 2025-11-05 14:59:55.375 [INFO][4371] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5z9n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p5z9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--p5z9n-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8848db76-24a9-4a44-87aa-bfca385d1093", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-p5z9n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali368fe800cf0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:55.392433 containerd[1553]: 2025-11-05 14:59:55.375 [INFO][4371] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5z9n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p5z9n-eth0" Nov 5 14:59:55.392433 containerd[1553]: 2025-11-05 14:59:55.375 [INFO][4371] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali368fe800cf0 ContainerID="c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5z9n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p5z9n-eth0" Nov 5 14:59:55.392433 containerd[1553]: 2025-11-05 14:59:55.378 [INFO][4371] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5z9n" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p5z9n-eth0" Nov 5 14:59:55.392494 containerd[1553]: 2025-11-05 14:59:55.380 [INFO][4371] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5z9n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p5z9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--p5z9n-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8848db76-24a9-4a44-87aa-bfca385d1093", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8", Pod:"coredns-668d6bf9bc-p5z9n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali368fe800cf0", MAC:"8a:a7:1c:6a:6d:c9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:55.392494 containerd[1553]: 2025-11-05 14:59:55.390 [INFO][4371] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5z9n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p5z9n-eth0" Nov 5 14:59:55.409692 containerd[1553]: time="2025-11-05T14:59:55.409654712Z" level=info msg="connecting to shim c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8" address="unix:///run/containerd/s/2c978b2dc96a77f89a608d9d1659c33126d7d4a00716391b89d51a85a4ac124e" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:55.414071 kubelet[2690]: E1105 14:59:55.414018 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd5cf7f85-nwqhp" podUID="bcf60f8c-179e-4bff-8ac8-93bb2db7eacf" Nov 5 14:59:55.447557 systemd[1]: Started cri-containerd-c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8.scope - libcontainer container c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8. 
Nov 5 14:59:55.465334 systemd-networkd[1463]: calif2871f9c9dd: Link UP Nov 5 14:59:55.466537 systemd-networkd[1463]: calif2871f9c9dd: Gained carrier Nov 5 14:59:55.467121 systemd-resolved[1272]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.267 [INFO][4369] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.284 [INFO][4369] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59c8c4d79f--vmsfn-eth0 calico-apiserver-59c8c4d79f- calico-apiserver 870c178e-41c4-4ab0-8a1e-1bcbcc89ae10 880 0 2025-11-05 14:59:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59c8c4d79f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59c8c4d79f-vmsfn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif2871f9c9dd [] [] }} ContainerID="5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" Namespace="calico-apiserver" Pod="calico-apiserver-59c8c4d79f-vmsfn" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c8c4d79f--vmsfn-" Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.284 [INFO][4369] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" Namespace="calico-apiserver" Pod="calico-apiserver-59c8c4d79f-vmsfn" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c8c4d79f--vmsfn-eth0" Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.316 [INFO][4399] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" HandleID="k8s-pod-network.5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" Workload="localhost-k8s-calico--apiserver--59c8c4d79f--vmsfn-eth0" Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.317 [INFO][4399] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" HandleID="k8s-pod-network.5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" Workload="localhost-k8s-calico--apiserver--59c8c4d79f--vmsfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000118dd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59c8c4d79f-vmsfn", "timestamp":"2025-11-05 14:59:55.316061903 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.317 [INFO][4399] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.373 [INFO][4399] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.373 [INFO][4399] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.429 [INFO][4399] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" host="localhost" Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.434 [INFO][4399] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.438 [INFO][4399] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.439 [INFO][4399] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.442 [INFO][4399] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.442 [INFO][4399] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" host="localhost" Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.443 [INFO][4399] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.447 [INFO][4399] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" host="localhost" Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.458 [INFO][4399] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" host="localhost" Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.458 [INFO][4399] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" host="localhost" Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.458 [INFO][4399] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 14:59:55.483286 containerd[1553]: 2025-11-05 14:59:55.458 [INFO][4399] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" HandleID="k8s-pod-network.5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" Workload="localhost-k8s-calico--apiserver--59c8c4d79f--vmsfn-eth0" Nov 5 14:59:55.484351 containerd[1553]: 2025-11-05 14:59:55.461 [INFO][4369] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" Namespace="calico-apiserver" Pod="calico-apiserver-59c8c4d79f-vmsfn" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c8c4d79f--vmsfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59c8c4d79f--vmsfn-eth0", GenerateName:"calico-apiserver-59c8c4d79f-", Namespace:"calico-apiserver", SelfLink:"", UID:"870c178e-41c4-4ab0-8a1e-1bcbcc89ae10", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c8c4d79f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59c8c4d79f-vmsfn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif2871f9c9dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:55.484351 containerd[1553]: 2025-11-05 14:59:55.462 [INFO][4369] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" Namespace="calico-apiserver" Pod="calico-apiserver-59c8c4d79f-vmsfn" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c8c4d79f--vmsfn-eth0" Nov 5 14:59:55.484351 containerd[1553]: 2025-11-05 14:59:55.462 [INFO][4369] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2871f9c9dd ContainerID="5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" Namespace="calico-apiserver" Pod="calico-apiserver-59c8c4d79f-vmsfn" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c8c4d79f--vmsfn-eth0" Nov 5 14:59:55.484351 containerd[1553]: 2025-11-05 14:59:55.467 [INFO][4369] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" Namespace="calico-apiserver" Pod="calico-apiserver-59c8c4d79f-vmsfn" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c8c4d79f--vmsfn-eth0" Nov 5 14:59:55.484351 containerd[1553]: 2025-11-05 14:59:55.468 [INFO][4369] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" Namespace="calico-apiserver" Pod="calico-apiserver-59c8c4d79f-vmsfn" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c8c4d79f--vmsfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59c8c4d79f--vmsfn-eth0", GenerateName:"calico-apiserver-59c8c4d79f-", Namespace:"calico-apiserver", SelfLink:"", UID:"870c178e-41c4-4ab0-8a1e-1bcbcc89ae10", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c8c4d79f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa", Pod:"calico-apiserver-59c8c4d79f-vmsfn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif2871f9c9dd", MAC:"ba:7b:0f:2d:30:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:55.484351 containerd[1553]: 2025-11-05 14:59:55.479 [INFO][4369] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" Namespace="calico-apiserver" Pod="calico-apiserver-59c8c4d79f-vmsfn" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c8c4d79f--vmsfn-eth0" Nov 5 14:59:55.495666 containerd[1553]: time="2025-11-05T14:59:55.495564730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5z9n,Uid:8848db76-24a9-4a44-87aa-bfca385d1093,Namespace:kube-system,Attempt:0,} returns sandbox id \"c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8\"" Nov 5 14:59:55.497437 kubelet[2690]: E1105 14:59:55.497412 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:55.500146 containerd[1553]: time="2025-11-05T14:59:55.499835885Z" level=info msg="CreateContainer within sandbox \"c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 14:59:55.508375 containerd[1553]: time="2025-11-05T14:59:55.508330555Z" level=info msg="connecting to shim 5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa" address="unix:///run/containerd/s/4e45cc6b4ff868b1ef65c0681e4735714c3cee79f42462d5cb4f2594788f6000" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:55.516342 containerd[1553]: time="2025-11-05T14:59:55.516306345Z" level=info msg="Container 916df06fa6d96c8b808c028c57907dff6d317f079a308b15130094a285c75d4d: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:55.521741 containerd[1553]: time="2025-11-05T14:59:55.521706099Z" level=info msg="CreateContainer within sandbox \"c047541a08a3b967c43d6656e313a777a81144cabd7068916c1c904c34a519f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"916df06fa6d96c8b808c028c57907dff6d317f079a308b15130094a285c75d4d\"" Nov 5 14:59:55.522812 containerd[1553]: time="2025-11-05T14:59:55.522783417Z" level=info 
msg="StartContainer for \"916df06fa6d96c8b808c028c57907dff6d317f079a308b15130094a285c75d4d\"" Nov 5 14:59:55.523598 containerd[1553]: time="2025-11-05T14:59:55.523561296Z" level=info msg="connecting to shim 916df06fa6d96c8b808c028c57907dff6d317f079a308b15130094a285c75d4d" address="unix:///run/containerd/s/2c978b2dc96a77f89a608d9d1659c33126d7d4a00716391b89d51a85a4ac124e" protocol=ttrpc version=3 Nov 5 14:59:55.535385 systemd[1]: Started cri-containerd-5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa.scope - libcontainer container 5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa. Nov 5 14:59:55.540652 systemd[1]: Started cri-containerd-916df06fa6d96c8b808c028c57907dff6d317f079a308b15130094a285c75d4d.scope - libcontainer container 916df06fa6d96c8b808c028c57907dff6d317f079a308b15130094a285c75d4d. Nov 5 14:59:55.550884 systemd-resolved[1272]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:59:55.572658 containerd[1553]: time="2025-11-05T14:59:55.572491318Z" level=info msg="StartContainer for \"916df06fa6d96c8b808c028c57907dff6d317f079a308b15130094a285c75d4d\" returns successfully" Nov 5 14:59:55.586070 containerd[1553]: time="2025-11-05T14:59:55.585990782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c8c4d79f-vmsfn,Uid:870c178e-41c4-4ab0-8a1e-1bcbcc89ae10,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5a73390ba3277fa612f55578692f0de171a1afebbb6d1e961fc123b650ebecfa\"" Nov 5 14:59:55.588320 containerd[1553]: time="2025-11-05T14:59:55.588285779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 14:59:55.799023 containerd[1553]: time="2025-11-05T14:59:55.798884249Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:59:55.808043 containerd[1553]: time="2025-11-05T14:59:55.807986478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 14:59:55.808043 containerd[1553]: time="2025-11-05T14:59:55.808064358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 14:59:55.808260 kubelet[2690]: E1105 14:59:55.808223 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:59:55.808331 kubelet[2690]: E1105 14:59:55.808276 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:59:55.808501 kubelet[2690]: E1105 14:59:55.808440 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvjgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59c8c4d79f-vmsfn_calico-apiserver(870c178e-41c4-4ab0-8a1e-1bcbcc89ae10): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 14:59:55.810098 kubelet[2690]: E1105 14:59:55.810046 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59c8c4d79f-vmsfn" podUID="870c178e-41c4-4ab0-8a1e-1bcbcc89ae10" Nov 5 14:59:56.238661 kubelet[2690]: E1105 14:59:56.238627 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:56.239064 containerd[1553]: time="2025-11-05T14:59:56.239032453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6z6fv,Uid:0e669a08-2258-4b09-8c26-2c12b7651335,Namespace:kube-system,Attempt:0,}" Nov 5 14:59:56.239126 containerd[1553]: time="2025-11-05T14:59:56.239075133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fbd6ccfdb-hgtng,Uid:bac974d5-2052-4432-9839-70f531dc6657,Namespace:calico-apiserver,Attempt:0,}" Nov 5 14:59:56.239315 containerd[1553]: time="2025-11-05T14:59:56.239033973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kr5x,Uid:1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98,Namespace:calico-system,Attempt:0,}" Nov 5 14:59:56.395670 systemd-networkd[1463]: cali239a3f74777: Link UP Nov 5 14:59:56.396369 systemd-networkd[1463]: cali239a3f74777: Gained carrier Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.277 [INFO][4578] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.326 [INFO][4578] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--6z6fv-eth0 coredns-668d6bf9bc- kube-system 0e669a08-2258-4b09-8c26-2c12b7651335 882 0 2025-11-05 14:59:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-6z6fv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali239a3f74777 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" Namespace="kube-system" Pod="coredns-668d6bf9bc-6z6fv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6z6fv-" Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.326 [INFO][4578] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" Namespace="kube-system" Pod="coredns-668d6bf9bc-6z6fv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6z6fv-eth0" Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.354 [INFO][4625] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" HandleID="k8s-pod-network.9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" Workload="localhost-k8s-coredns--668d6bf9bc--6z6fv-eth0" Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.354 [INFO][4625] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" HandleID="k8s-pod-network.9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" Workload="localhost-k8s-coredns--668d6bf9bc--6z6fv-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000428080), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-6z6fv", "timestamp":"2025-11-05 14:59:56.354388519 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.354 [INFO][4625] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.354 [INFO][4625] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.354 [INFO][4625] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.364 [INFO][4625] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" host="localhost" Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.369 [INFO][4625] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.373 [INFO][4625] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.375 [INFO][4625] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.377 [INFO][4625] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.377 [INFO][4625] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" host="localhost" Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.378 [INFO][4625] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.384 [INFO][4625] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" host="localhost" Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.389 [INFO][4625] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" host="localhost" Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.390 [INFO][4625] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" host="localhost" Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.390 [INFO][4625] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 14:59:56.409134 containerd[1553]: 2025-11-05 14:59:56.390 [INFO][4625] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" HandleID="k8s-pod-network.9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" Workload="localhost-k8s-coredns--668d6bf9bc--6z6fv-eth0" Nov 5 14:59:56.409972 containerd[1553]: 2025-11-05 14:59:56.391 [INFO][4578] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" Namespace="kube-system" Pod="coredns-668d6bf9bc-6z6fv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6z6fv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6z6fv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0e669a08-2258-4b09-8c26-2c12b7651335", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-6z6fv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali239a3f74777", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:56.409972 containerd[1553]: 2025-11-05 14:59:56.392 [INFO][4578] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" Namespace="kube-system" Pod="coredns-668d6bf9bc-6z6fv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6z6fv-eth0" Nov 5 14:59:56.409972 containerd[1553]: 2025-11-05 14:59:56.392 [INFO][4578] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali239a3f74777 ContainerID="9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" Namespace="kube-system" Pod="coredns-668d6bf9bc-6z6fv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6z6fv-eth0" Nov 5 14:59:56.409972 containerd[1553]: 2025-11-05 14:59:56.396 [INFO][4578] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" Namespace="kube-system" Pod="coredns-668d6bf9bc-6z6fv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6z6fv-eth0" Nov 5 14:59:56.409972 containerd[1553]: 2025-11-05 14:59:56.397 [INFO][4578] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" Namespace="kube-system" Pod="coredns-668d6bf9bc-6z6fv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6z6fv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6z6fv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0e669a08-2258-4b09-8c26-2c12b7651335", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded", Pod:"coredns-668d6bf9bc-6z6fv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali239a3f74777", MAC:"6e:bb:c0:47:e0:48", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:56.409972 containerd[1553]: 2025-11-05 14:59:56.407 [INFO][4578] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" Namespace="kube-system" Pod="coredns-668d6bf9bc-6z6fv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6z6fv-eth0" Nov 5 14:59:56.418060 kubelet[2690]: E1105 14:59:56.417973 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59c8c4d79f-vmsfn" podUID="870c178e-41c4-4ab0-8a1e-1bcbcc89ae10" Nov 5 14:59:56.419256 kubelet[2690]: E1105 14:59:56.419172 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:56.436756 containerd[1553]: time="2025-11-05T14:59:56.436277224Z" level=info msg="connecting to shim 9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded" address="unix:///run/containerd/s/8b3f6953a633c91d2ba149f6bc3024fb318aa877a02e39509e82aeeb810cfbd6" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:56.443423 kubelet[2690]: I1105 14:59:56.443361 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-p5z9n" podStartSLOduration=37.443330696 podStartE2EDuration="37.443330696s" podCreationTimestamp="2025-11-05 14:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 14:59:56.441670298 +0000 UTC m=+43.324243456" watchObservedRunningTime="2025-11-05 14:59:56.443330696 +0000 UTC m=+43.325903854" Nov 5 14:59:56.476405 systemd[1]: Started 
cri-containerd-9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded.scope - libcontainer container 9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded. Nov 5 14:59:56.496315 systemd-resolved[1272]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:59:56.523658 systemd-networkd[1463]: cali961e37b83fa: Link UP Nov 5 14:59:56.524398 systemd-networkd[1463]: cali961e37b83fa: Gained carrier Nov 5 14:59:56.534119 containerd[1553]: time="2025-11-05T14:59:56.534054591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6z6fv,Uid:0e669a08-2258-4b09-8c26-2c12b7651335,Namespace:kube-system,Attempt:0,} returns sandbox id \"9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded\"" Nov 5 14:59:56.535079 kubelet[2690]: E1105 14:59:56.535024 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:56.539147 containerd[1553]: time="2025-11-05T14:59:56.539110225Z" level=info msg="CreateContainer within sandbox \"9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.286 [INFO][4588] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.328 [INFO][4588] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--fbd6ccfdb--hgtng-eth0 calico-apiserver-fbd6ccfdb- calico-apiserver bac974d5-2052-4432-9839-70f531dc6657 883 0 2025-11-05 14:59:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fbd6ccfdb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-fbd6ccfdb-hgtng eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali961e37b83fa [] [] }} ContainerID="35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" Namespace="calico-apiserver" Pod="calico-apiserver-fbd6ccfdb-hgtng" WorkloadEndpoint="localhost-k8s-calico--apiserver--fbd6ccfdb--hgtng-" Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.328 [INFO][4588] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" Namespace="calico-apiserver" Pod="calico-apiserver-fbd6ccfdb-hgtng" WorkloadEndpoint="localhost-k8s-calico--apiserver--fbd6ccfdb--hgtng-eth0" Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.358 [INFO][4631] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" HandleID="k8s-pod-network.35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" Workload="localhost-k8s-calico--apiserver--fbd6ccfdb--hgtng-eth0" Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.358 [INFO][4631] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" HandleID="k8s-pod-network.35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" Workload="localhost-k8s-calico--apiserver--fbd6ccfdb--hgtng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323360), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-fbd6ccfdb-hgtng", "timestamp":"2025-11-05 14:59:56.358449715 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.358 [INFO][4631] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.390 [INFO][4631] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.390 [INFO][4631] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.465 [INFO][4631] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" host="localhost" Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.470 [INFO][4631] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.475 [INFO][4631] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.476 [INFO][4631] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.481 [INFO][4631] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.481 [INFO][4631] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" host="localhost" Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.486 [INFO][4631] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3 Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.493 [INFO][4631] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" host="localhost" Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.507 [INFO][4631] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" host="localhost" Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.507 [INFO][4631] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" host="localhost" Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.507 [INFO][4631] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 14:59:56.540369 containerd[1553]: 2025-11-05 14:59:56.507 [INFO][4631] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" HandleID="k8s-pod-network.35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" Workload="localhost-k8s-calico--apiserver--fbd6ccfdb--hgtng-eth0" Nov 5 14:59:56.540962 containerd[1553]: 2025-11-05 14:59:56.511 [INFO][4588] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" Namespace="calico-apiserver" Pod="calico-apiserver-fbd6ccfdb-hgtng" WorkloadEndpoint="localhost-k8s-calico--apiserver--fbd6ccfdb--hgtng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fbd6ccfdb--hgtng-eth0", GenerateName:"calico-apiserver-fbd6ccfdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"bac974d5-2052-4432-9839-70f531dc6657", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 28, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fbd6ccfdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-fbd6ccfdb-hgtng", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali961e37b83fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:56.540962 containerd[1553]: 2025-11-05 14:59:56.512 [INFO][4588] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" Namespace="calico-apiserver" Pod="calico-apiserver-fbd6ccfdb-hgtng" WorkloadEndpoint="localhost-k8s-calico--apiserver--fbd6ccfdb--hgtng-eth0" Nov 5 14:59:56.540962 containerd[1553]: 2025-11-05 14:59:56.512 [INFO][4588] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali961e37b83fa ContainerID="35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" Namespace="calico-apiserver" Pod="calico-apiserver-fbd6ccfdb-hgtng" WorkloadEndpoint="localhost-k8s-calico--apiserver--fbd6ccfdb--hgtng-eth0" Nov 5 14:59:56.540962 containerd[1553]: 2025-11-05 14:59:56.524 [INFO][4588] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" 
Namespace="calico-apiserver" Pod="calico-apiserver-fbd6ccfdb-hgtng" WorkloadEndpoint="localhost-k8s-calico--apiserver--fbd6ccfdb--hgtng-eth0" Nov 5 14:59:56.540962 containerd[1553]: 2025-11-05 14:59:56.525 [INFO][4588] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" Namespace="calico-apiserver" Pod="calico-apiserver-fbd6ccfdb-hgtng" WorkloadEndpoint="localhost-k8s-calico--apiserver--fbd6ccfdb--hgtng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fbd6ccfdb--hgtng-eth0", GenerateName:"calico-apiserver-fbd6ccfdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"bac974d5-2052-4432-9839-70f531dc6657", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fbd6ccfdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3", Pod:"calico-apiserver-fbd6ccfdb-hgtng", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali961e37b83fa", MAC:"9a:17:b8:a1:69:44", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:56.540962 containerd[1553]: 2025-11-05 14:59:56.537 [INFO][4588] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" Namespace="calico-apiserver" Pod="calico-apiserver-fbd6ccfdb-hgtng" WorkloadEndpoint="localhost-k8s-calico--apiserver--fbd6ccfdb--hgtng-eth0" Nov 5 14:59:56.577994 containerd[1553]: time="2025-11-05T14:59:56.577951300Z" level=info msg="Container 5fa0a8a634285bdf5c38a49f6e69b13c96c005f4a8320b97e4ae94e28ad635d6: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:56.613414 systemd-networkd[1463]: cali15a8a5dd73d: Link UP Nov 5 14:59:56.614130 systemd-networkd[1463]: cali15a8a5dd73d: Gained carrier Nov 5 14:59:56.615219 containerd[1553]: time="2025-11-05T14:59:56.615028537Z" level=info msg="connecting to shim 35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3" address="unix:///run/containerd/s/3002b5a80f56507a09d24be2a3e4a8cdf036d467f60cd11dfcbea5aaaf8eab7e" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:56.623319 containerd[1553]: time="2025-11-05T14:59:56.623113328Z" level=info msg="CreateContainer within sandbox \"9044f85edd1c3cdb53d1d930f1b37de8f78bb5a9dbcc8ab954e328bd4cef3ded\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5fa0a8a634285bdf5c38a49f6e69b13c96c005f4a8320b97e4ae94e28ad635d6\"" Nov 5 14:59:56.626005 containerd[1553]: time="2025-11-05T14:59:56.625640165Z" level=info msg="StartContainer for \"5fa0a8a634285bdf5c38a49f6e69b13c96c005f4a8320b97e4ae94e28ad635d6\"" Nov 5 14:59:56.629932 containerd[1553]: time="2025-11-05T14:59:56.629897800Z" level=info msg="connecting to shim 5fa0a8a634285bdf5c38a49f6e69b13c96c005f4a8320b97e4ae94e28ad635d6" address="unix:///run/containerd/s/8b3f6953a633c91d2ba149f6bc3024fb318aa877a02e39509e82aeeb810cfbd6" protocol=ttrpc version=3 Nov 5 14:59:56.635848 
containerd[1553]: 2025-11-05 14:59:56.287 [INFO][4594] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.334 [INFO][4594] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9kr5x-eth0 csi-node-driver- calico-system 1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98 785 0 2025-11-05 14:59:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9kr5x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali15a8a5dd73d [] [] }} ContainerID="055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" Namespace="calico-system" Pod="csi-node-driver-9kr5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--9kr5x-" Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.335 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" Namespace="calico-system" Pod="csi-node-driver-9kr5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--9kr5x-eth0" Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.364 [INFO][4637] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" HandleID="k8s-pod-network.055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" Workload="localhost-k8s-csi--node--driver--9kr5x-eth0" Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.364 [INFO][4637] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" 
HandleID="k8s-pod-network.055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" Workload="localhost-k8s-csi--node--driver--9kr5x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a1720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9kr5x", "timestamp":"2025-11-05 14:59:56.364434828 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.364 [INFO][4637] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.507 [INFO][4637] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.507 [INFO][4637] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.566 [INFO][4637] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" host="localhost" Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.570 [INFO][4637] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.574 [INFO][4637] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.577 [INFO][4637] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.580 [INFO][4637] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.580 [INFO][4637] 
ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" host="localhost" Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.581 [INFO][4637] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0 Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.598 [INFO][4637] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" host="localhost" Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.606 [INFO][4637] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" host="localhost" Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.606 [INFO][4637] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" host="localhost" Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.606 [INFO][4637] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 14:59:56.635848 containerd[1553]: 2025-11-05 14:59:56.606 [INFO][4637] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" HandleID="k8s-pod-network.055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" Workload="localhost-k8s-csi--node--driver--9kr5x-eth0" Nov 5 14:59:56.636352 containerd[1553]: 2025-11-05 14:59:56.610 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" Namespace="calico-system" Pod="csi-node-driver-9kr5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--9kr5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9kr5x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9kr5x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali15a8a5dd73d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:56.636352 containerd[1553]: 2025-11-05 14:59:56.610 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" Namespace="calico-system" Pod="csi-node-driver-9kr5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--9kr5x-eth0" Nov 5 14:59:56.636352 containerd[1553]: 2025-11-05 14:59:56.610 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15a8a5dd73d ContainerID="055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" Namespace="calico-system" Pod="csi-node-driver-9kr5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--9kr5x-eth0" Nov 5 14:59:56.636352 containerd[1553]: 2025-11-05 14:59:56.614 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" Namespace="calico-system" Pod="csi-node-driver-9kr5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--9kr5x-eth0" Nov 5 14:59:56.636352 containerd[1553]: 2025-11-05 14:59:56.615 [INFO][4594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" Namespace="calico-system" Pod="csi-node-driver-9kr5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--9kr5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9kr5x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 35, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0", Pod:"csi-node-driver-9kr5x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali15a8a5dd73d", MAC:"3a:f8:72:90:02:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:56.636352 containerd[1553]: 2025-11-05 14:59:56.632 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" Namespace="calico-system" Pod="csi-node-driver-9kr5x" WorkloadEndpoint="localhost-k8s-csi--node--driver--9kr5x-eth0" Nov 5 14:59:56.648540 systemd[1]: Started cri-containerd-35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3.scope - libcontainer container 35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3. Nov 5 14:59:56.652651 systemd[1]: Started cri-containerd-5fa0a8a634285bdf5c38a49f6e69b13c96c005f4a8320b97e4ae94e28ad635d6.scope - libcontainer container 5fa0a8a634285bdf5c38a49f6e69b13c96c005f4a8320b97e4ae94e28ad635d6. 
Nov 5 14:59:56.667441 systemd-resolved[1272]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:59:56.676237 containerd[1553]: time="2025-11-05T14:59:56.675522747Z" level=info msg="connecting to shim 055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0" address="unix:///run/containerd/s/215b7c31bcd61a4a0796e0cad7ba207bc0d2801f7e73afbe96133f6cfb7b098b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:56.694954 containerd[1553]: time="2025-11-05T14:59:56.694916365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fbd6ccfdb-hgtng,Uid:bac974d5-2052-4432-9839-70f531dc6657,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"35b7458ee52b61ca864ea0e75342401c0931662ae69528541c72d33c317e43b3\"" Nov 5 14:59:56.696637 containerd[1553]: time="2025-11-05T14:59:56.696566123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 14:59:56.711534 systemd[1]: Started cri-containerd-055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0.scope - libcontainer container 055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0. 
Nov 5 14:59:56.715135 containerd[1553]: time="2025-11-05T14:59:56.715078861Z" level=info msg="StartContainer for \"5fa0a8a634285bdf5c38a49f6e69b13c96c005f4a8320b97e4ae94e28ad635d6\" returns successfully" Nov 5 14:59:56.727358 systemd-resolved[1272]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:59:56.761422 containerd[1553]: time="2025-11-05T14:59:56.761311568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kr5x,Uid:1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98,Namespace:calico-system,Attempt:0,} returns sandbox id \"055dc0c0cdded203fc4967a937c9ff5f441a68126f820f33460d846e2d92b0c0\"" Nov 5 14:59:56.861925 kubelet[2690]: I1105 14:59:56.861749 2690 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 14:59:56.862406 kubelet[2690]: E1105 14:59:56.862385 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:56.889713 containerd[1553]: time="2025-11-05T14:59:56.889661219Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:59:56.947366 systemd-networkd[1463]: calif2871f9c9dd: Gained IPv6LL Nov 5 14:59:56.949529 containerd[1553]: time="2025-11-05T14:59:56.949458350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 14:59:56.949619 containerd[1553]: time="2025-11-05T14:59:56.949541190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 14:59:56.949832 kubelet[2690]: E1105 14:59:56.949785 2690 log.go:32] "PullImage from image service failed" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:59:56.949911 kubelet[2690]: E1105 14:59:56.949837 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:59:56.950158 containerd[1553]: time="2025-11-05T14:59:56.950131989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 14:59:56.950389 kubelet[2690]: E1105 14:59:56.950228 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tn4xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fbd6ccfdb-hgtng_calico-apiserver(bac974d5-2052-4432-9839-70f531dc6657): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 14:59:56.951417 kubelet[2690]: E1105 14:59:56.951380 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbd6ccfdb-hgtng" podUID="bac974d5-2052-4432-9839-70f531dc6657" Nov 5 14:59:57.075832 systemd-networkd[1463]: cali368fe800cf0: Gained IPv6LL Nov 5 14:59:57.190654 containerd[1553]: time="2025-11-05T14:59:57.190588316Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:59:57.196273 containerd[1553]: time="2025-11-05T14:59:57.196189349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 14:59:57.196365 containerd[1553]: time="2025-11-05T14:59:57.196300709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 14:59:57.196536 kubelet[2690]: E1105 14:59:57.196471 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 14:59:57.196626 kubelet[2690]: E1105 
14:59:57.196550 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 14:59:57.196713 kubelet[2690]: E1105 14:59:57.196678 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2mgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&Seccomp
Profile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9kr5x_calico-system(1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 14:59:57.199514 containerd[1553]: time="2025-11-05T14:59:57.199452946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 14:59:57.238898 containerd[1553]: time="2025-11-05T14:59:57.238818021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rthls,Uid:75461a76-a686-4ba2-aacc-266a6fc4971c,Namespace:calico-system,Attempt:0,}" Nov 5 14:59:57.239195 containerd[1553]: time="2025-11-05T14:59:57.238856261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fbd6ccfdb-9smqx,Uid:947e4c3f-edaa-4455-9701-8eca3788c1c9,Namespace:calico-apiserver,Attempt:0,}" Nov 5 14:59:57.248094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount531380938.mount: Deactivated successfully. 
Nov 5 14:59:57.363353 systemd-networkd[1463]: calida6fbde65d7: Link UP Nov 5 14:59:57.363681 systemd-networkd[1463]: calida6fbde65d7: Gained carrier Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.276 [INFO][4889] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.295 [INFO][4889] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--rthls-eth0 goldmane-666569f655- calico-system 75461a76-a686-4ba2-aacc-266a6fc4971c 884 0 2025-11-05 14:59:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-rthls eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calida6fbde65d7 [] [] }} ContainerID="7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" Namespace="calico-system" Pod="goldmane-666569f655-rthls" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rthls-" Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.295 [INFO][4889] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" Namespace="calico-system" Pod="goldmane-666569f655-rthls" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rthls-eth0" Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.322 [INFO][4916] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" HandleID="k8s-pod-network.7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" Workload="localhost-k8s-goldmane--666569f655--rthls-eth0" Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.322 [INFO][4916] ipam/ipam_plugin.go 275: Auto 
assigning IP ContainerID="7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" HandleID="k8s-pod-network.7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" Workload="localhost-k8s-goldmane--666569f655--rthls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-rthls", "timestamp":"2025-11-05 14:59:57.322494087 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.322 [INFO][4916] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.322 [INFO][4916] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.322 [INFO][4916] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.332 [INFO][4916] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" host="localhost" Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.336 [INFO][4916] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.340 [INFO][4916] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.342 [INFO][4916] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.344 [INFO][4916] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.344 [INFO][4916] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" host="localhost" Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.347 [INFO][4916] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3 Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.350 [INFO][4916] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" host="localhost" Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.356 [INFO][4916] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" host="localhost" Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.356 [INFO][4916] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" host="localhost" Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.356 [INFO][4916] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 14:59:57.375290 containerd[1553]: 2025-11-05 14:59:57.356 [INFO][4916] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" HandleID="k8s-pod-network.7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" Workload="localhost-k8s-goldmane--666569f655--rthls-eth0" Nov 5 14:59:57.375853 containerd[1553]: 2025-11-05 14:59:57.358 [INFO][4889] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" Namespace="calico-system" Pod="goldmane-666569f655-rthls" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rthls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--rthls-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"75461a76-a686-4ba2-aacc-266a6fc4971c", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-rthls", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calida6fbde65d7", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:57.375853 containerd[1553]: 2025-11-05 14:59:57.358 [INFO][4889] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" Namespace="calico-system" Pod="goldmane-666569f655-rthls" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rthls-eth0" Nov 5 14:59:57.375853 containerd[1553]: 2025-11-05 14:59:57.358 [INFO][4889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida6fbde65d7 ContainerID="7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" Namespace="calico-system" Pod="goldmane-666569f655-rthls" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rthls-eth0" Nov 5 14:59:57.375853 containerd[1553]: 2025-11-05 14:59:57.362 [INFO][4889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" Namespace="calico-system" Pod="goldmane-666569f655-rthls" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rthls-eth0" Nov 5 14:59:57.375853 containerd[1553]: 2025-11-05 14:59:57.362 [INFO][4889] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" Namespace="calico-system" Pod="goldmane-666569f655-rthls" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rthls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--rthls-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"75461a76-a686-4ba2-aacc-266a6fc4971c", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 33, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3", Pod:"goldmane-666569f655-rthls", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calida6fbde65d7", MAC:"ee:ec:59:96:a5:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:57.375853 containerd[1553]: 2025-11-05 14:59:57.373 [INFO][4889] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" Namespace="calico-system" Pod="goldmane-666569f655-rthls" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rthls-eth0" Nov 5 14:59:57.403078 containerd[1553]: time="2025-11-05T14:59:57.403033556Z" level=info msg="connecting to shim 7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3" address="unix:///run/containerd/s/b887ef0da2bb5aefdefdab025b82510b2dd25b1990ef64fb01df23e412397b9d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:57.424658 kubelet[2690]: E1105 14:59:57.424026 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbd6ccfdb-hgtng" podUID="bac974d5-2052-4432-9839-70f531dc6657" Nov 5 14:59:57.426629 kubelet[2690]: E1105 14:59:57.425413 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:57.428538 containerd[1553]: time="2025-11-05T14:59:57.428279167Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:59:57.429558 kubelet[2690]: E1105 14:59:57.429484 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:57.429751 containerd[1553]: time="2025-11-05T14:59:57.429717486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 14:59:57.429927 containerd[1553]: time="2025-11-05T14:59:57.429890205Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 14:59:57.430519 kubelet[2690]: E1105 14:59:57.430305 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 14:59:57.430519 kubelet[2690]: E1105 14:59:57.430343 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 14:59:57.430519 kubelet[2690]: E1105 14:59:57.430461 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2mgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{
Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9kr5x_calico-system(1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 14:59:57.431659 kubelet[2690]: E1105 14:59:57.431624 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9kr5x" podUID="1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98" Nov 5 14:59:57.432980 kubelet[2690]: E1105 14:59:57.432189 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59c8c4d79f-vmsfn" podUID="870c178e-41c4-4ab0-8a1e-1bcbcc89ae10" Nov 5 14:59:57.433388 systemd[1]: Started cri-containerd-7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3.scope - libcontainer container 7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3. Nov 5 14:59:57.437308 kubelet[2690]: E1105 14:59:57.437276 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:57.453114 kubelet[2690]: I1105 14:59:57.452982 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6z6fv" podStartSLOduration=38.452963299 podStartE2EDuration="38.452963299s" podCreationTimestamp="2025-11-05 14:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 14:59:57.45202874 +0000 UTC m=+44.334601898" watchObservedRunningTime="2025-11-05 14:59:57.452963299 +0000 UTC m=+44.335536457" Nov 5 14:59:57.481624 systemd-resolved[1272]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:59:57.493458 systemd-networkd[1463]: calidf2bf6f5139: Link UP Nov 5 14:59:57.495385 systemd-networkd[1463]: calidf2bf6f5139: Gained carrier Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.280 [INFO][4893] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.295 [INFO][4893] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--fbd6ccfdb--9smqx-eth0 calico-apiserver-fbd6ccfdb- calico-apiserver 947e4c3f-edaa-4455-9701-8eca3788c1c9 874 0 2025-11-05 14:59:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fbd6ccfdb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-fbd6ccfdb-9smqx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidf2bf6f5139 [] [] }} ContainerID="361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" Namespace="calico-apiserver" Pod="calico-apiserver-fbd6ccfdb-9smqx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fbd6ccfdb--9smqx-" Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.295 [INFO][4893] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" Namespace="calico-apiserver" Pod="calico-apiserver-fbd6ccfdb-9smqx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fbd6ccfdb--9smqx-eth0" Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.323 [INFO][4917] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" HandleID="k8s-pod-network.361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" Workload="localhost-k8s-calico--apiserver--fbd6ccfdb--9smqx-eth0" Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.323 [INFO][4917] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" HandleID="k8s-pod-network.361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" Workload="localhost-k8s-calico--apiserver--fbd6ccfdb--9smqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0x400004c760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-fbd6ccfdb-9smqx", "timestamp":"2025-11-05 14:59:57.323188926 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.323 [INFO][4917] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.356 [INFO][4917] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.356 [INFO][4917] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.433 [INFO][4917] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" host="localhost" Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.442 [INFO][4917] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.450 [INFO][4917] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.453 [INFO][4917] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.458 [INFO][4917] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.460 [INFO][4917] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" 
host="localhost" Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.464 [INFO][4917] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982 Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.469 [INFO][4917] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" host="localhost" Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.482 [INFO][4917] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" host="localhost" Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.482 [INFO][4917] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" host="localhost" Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.483 [INFO][4917] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 14:59:57.514994 containerd[1553]: 2025-11-05 14:59:57.483 [INFO][4917] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" HandleID="k8s-pod-network.361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" Workload="localhost-k8s-calico--apiserver--fbd6ccfdb--9smqx-eth0" Nov 5 14:59:57.515806 containerd[1553]: 2025-11-05 14:59:57.490 [INFO][4893] cni-plugin/k8s.go 418: Populated endpoint ContainerID="361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" Namespace="calico-apiserver" Pod="calico-apiserver-fbd6ccfdb-9smqx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fbd6ccfdb--9smqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fbd6ccfdb--9smqx-eth0", GenerateName:"calico-apiserver-fbd6ccfdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"947e4c3f-edaa-4455-9701-8eca3788c1c9", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fbd6ccfdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-fbd6ccfdb-9smqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidf2bf6f5139", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:57.515806 containerd[1553]: 2025-11-05 14:59:57.490 [INFO][4893] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" Namespace="calico-apiserver" Pod="calico-apiserver-fbd6ccfdb-9smqx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fbd6ccfdb--9smqx-eth0" Nov 5 14:59:57.515806 containerd[1553]: 2025-11-05 14:59:57.490 [INFO][4893] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf2bf6f5139 ContainerID="361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" Namespace="calico-apiserver" Pod="calico-apiserver-fbd6ccfdb-9smqx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fbd6ccfdb--9smqx-eth0" Nov 5 14:59:57.515806 containerd[1553]: 2025-11-05 14:59:57.496 [INFO][4893] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" Namespace="calico-apiserver" Pod="calico-apiserver-fbd6ccfdb-9smqx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fbd6ccfdb--9smqx-eth0" Nov 5 14:59:57.515806 containerd[1553]: 2025-11-05 14:59:57.496 [INFO][4893] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" Namespace="calico-apiserver" Pod="calico-apiserver-fbd6ccfdb-9smqx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fbd6ccfdb--9smqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fbd6ccfdb--9smqx-eth0", GenerateName:"calico-apiserver-fbd6ccfdb-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"947e4c3f-edaa-4455-9701-8eca3788c1c9", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 59, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fbd6ccfdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982", Pod:"calico-apiserver-fbd6ccfdb-9smqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidf2bf6f5139", MAC:"4e:1c:70:3a:b9:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:59:57.515806 containerd[1553]: 2025-11-05 14:59:57.510 [INFO][4893] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" Namespace="calico-apiserver" Pod="calico-apiserver-fbd6ccfdb-9smqx" WorkloadEndpoint="localhost-k8s-calico--apiserver--fbd6ccfdb--9smqx-eth0" Nov 5 14:59:57.525847 containerd[1553]: time="2025-11-05T14:59:57.525806297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rthls,Uid:75461a76-a686-4ba2-aacc-266a6fc4971c,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"7eb67a78e8f88aea11d6b3bf17176b6d1d134b2adbe7c2902f95429f721fa0c3\"" Nov 5 14:59:57.527670 containerd[1553]: time="2025-11-05T14:59:57.527644015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 14:59:57.536071 containerd[1553]: time="2025-11-05T14:59:57.536033285Z" level=info msg="connecting to shim 361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982" address="unix:///run/containerd/s/5f37c644801f9df95717f192fd6e2acc1ee5d0ccf74091ad7d9886efc3f8a687" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:57.561402 systemd[1]: Started cri-containerd-361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982.scope - libcontainer container 361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982. Nov 5 14:59:57.572160 systemd-resolved[1272]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:59:57.592571 containerd[1553]: time="2025-11-05T14:59:57.592530262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fbd6ccfdb-9smqx,Uid:947e4c3f-edaa-4455-9701-8eca3788c1c9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"361ac21383658a081589a9cbb043dbcb6ed1a826125c336f72552eff8fd4c982\"" Nov 5 14:59:57.764542 containerd[1553]: time="2025-11-05T14:59:57.764499867Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:59:57.765883 containerd[1553]: time="2025-11-05T14:59:57.765847146Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 14:59:57.766025 containerd[1553]: time="2025-11-05T14:59:57.765910586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 14:59:57.766454 
kubelet[2690]: E1105 14:59:57.766314 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 14:59:57.766454 kubelet[2690]: E1105 14:59:57.766414 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 14:59:57.767082 containerd[1553]: time="2025-11-05T14:59:57.767033944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 14:59:57.769574 kubelet[2690]: E1105 14:59:57.766651 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gtsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rthls_calico-system(75461a76-a686-4ba2-aacc-266a6fc4971c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 14:59:57.771287 kubelet[2690]: E1105 14:59:57.771259 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rthls" podUID="75461a76-a686-4ba2-aacc-266a6fc4971c" Nov 5 14:59:57.779366 systemd-networkd[1463]: cali961e37b83fa: Gained IPv6LL Nov 5 14:59:57.985231 containerd[1553]: time="2025-11-05T14:59:57.984446099Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:59:57.985548 containerd[1553]: 
time="2025-11-05T14:59:57.985377138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 14:59:57.985548 containerd[1553]: time="2025-11-05T14:59:57.985488817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 14:59:57.985668 kubelet[2690]: E1105 14:59:57.985625 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:59:57.985744 kubelet[2690]: E1105 14:59:57.985680 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:59:57.985907 kubelet[2690]: E1105 14:59:57.985819 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgvn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fbd6ccfdb-9smqx_calico-apiserver(947e4c3f-edaa-4455-9701-8eca3788c1c9): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 14:59:57.987027 kubelet[2690]: E1105 14:59:57.986990 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbd6ccfdb-9smqx" podUID="947e4c3f-edaa-4455-9701-8eca3788c1c9" Nov 5 14:59:57.992364 systemd-networkd[1463]: vxlan.calico: Link UP Nov 5 14:59:57.992371 systemd-networkd[1463]: vxlan.calico: Gained carrier Nov 5 14:59:58.036425 systemd-networkd[1463]: cali15a8a5dd73d: Gained IPv6LL Nov 5 14:59:58.291432 systemd-networkd[1463]: cali239a3f74777: Gained IPv6LL Nov 5 14:59:58.435957 kubelet[2690]: E1105 14:59:58.435822 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rthls" podUID="75461a76-a686-4ba2-aacc-266a6fc4971c" Nov 5 14:59:58.437843 kubelet[2690]: E1105 14:59:58.437185 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:58.440819 kubelet[2690]: E1105 14:59:58.440750 2690 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbd6ccfdb-hgtng" podUID="bac974d5-2052-4432-9839-70f531dc6657" Nov 5 14:59:58.441145 kubelet[2690]: E1105 14:59:58.441117 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:58.443826 kubelet[2690]: E1105 14:59:58.443411 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9kr5x" podUID="1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98" Nov 5 14:59:58.447356 kubelet[2690]: E1105 14:59:58.447297 2690 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbd6ccfdb-9smqx" podUID="947e4c3f-edaa-4455-9701-8eca3788c1c9" Nov 5 14:59:58.739431 systemd-networkd[1463]: calida6fbde65d7: Gained IPv6LL Nov 5 14:59:58.867395 systemd-networkd[1463]: calidf2bf6f5139: Gained IPv6LL Nov 5 14:59:59.059411 systemd-networkd[1463]: vxlan.calico: Gained IPv6LL Nov 5 14:59:59.439653 kubelet[2690]: E1105 14:59:59.439358 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:59.440446 kubelet[2690]: E1105 14:59:59.440336 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbd6ccfdb-9smqx" podUID="947e4c3f-edaa-4455-9701-8eca3788c1c9" Nov 5 14:59:59.440446 kubelet[2690]: E1105 14:59:59.440374 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rthls" podUID="75461a76-a686-4ba2-aacc-266a6fc4971c" Nov 5 15:00:00.127968 systemd[1]: Started sshd@9-10.0.0.21:22-10.0.0.1:40174.service - OpenSSH per-connection server daemon (10.0.0.1:40174). Nov 5 15:00:00.186900 sshd[5147]: Accepted publickey for core from 10.0.0.1 port 40174 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:00.188946 sshd-session[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:00.193488 systemd-logind[1534]: New session 10 of user core. Nov 5 15:00:00.202426 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 15:00:00.380345 sshd[5150]: Connection closed by 10.0.0.1 port 40174 Nov 5 15:00:00.380852 sshd-session[5147]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:00.391559 systemd[1]: sshd@9-10.0.0.21:22-10.0.0.1:40174.service: Deactivated successfully. Nov 5 15:00:00.393300 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 15:00:00.393959 systemd-logind[1534]: Session 10 logged out. Waiting for processes to exit. Nov 5 15:00:00.396479 systemd[1]: Started sshd@10-10.0.0.21:22-10.0.0.1:40178.service - OpenSSH per-connection server daemon (10.0.0.1:40178). Nov 5 15:00:00.397287 systemd-logind[1534]: Removed session 10. Nov 5 15:00:00.459426 sshd[5167]: Accepted publickey for core from 10.0.0.1 port 40178 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:00.461456 sshd-session[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:00.468131 systemd-logind[1534]: New session 11 of user core. Nov 5 15:00:00.474772 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 5 15:00:00.665970 sshd[5170]: Connection closed by 10.0.0.1 port 40178 Nov 5 15:00:00.666863 sshd-session[5167]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:00.680541 systemd[1]: sshd@10-10.0.0.21:22-10.0.0.1:40178.service: Deactivated successfully. Nov 5 15:00:00.686387 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 15:00:00.689550 systemd-logind[1534]: Session 11 logged out. Waiting for processes to exit. Nov 5 15:00:00.695509 systemd[1]: Started sshd@11-10.0.0.21:22-10.0.0.1:40188.service - OpenSSH per-connection server daemon (10.0.0.1:40188). Nov 5 15:00:00.696079 systemd-logind[1534]: Removed session 11. Nov 5 15:00:00.754336 sshd[5182]: Accepted publickey for core from 10.0.0.1 port 40188 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:00.755722 sshd-session[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:00.761876 systemd-logind[1534]: New session 12 of user core. Nov 5 15:00:00.769427 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 15:00:00.936675 sshd[5185]: Connection closed by 10.0.0.1 port 40188 Nov 5 15:00:00.936963 sshd-session[5182]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:00.941382 systemd-logind[1534]: Session 12 logged out. Waiting for processes to exit. Nov 5 15:00:00.942115 systemd[1]: sshd@11-10.0.0.21:22-10.0.0.1:40188.service: Deactivated successfully. Nov 5 15:00:00.945178 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 15:00:00.946823 systemd-logind[1534]: Removed session 12. 
Nov 5 15:00:02.239976 containerd[1553]: time="2025-11-05T15:00:02.239351088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:00:02.890136 containerd[1553]: time="2025-11-05T15:00:02.890076069Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:00:02.891172 containerd[1553]: time="2025-11-05T15:00:02.891131948Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:00:02.891242 containerd[1553]: time="2025-11-05T15:00:02.891212828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:00:02.891373 kubelet[2690]: E1105 15:00:02.891337 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:00:02.891713 kubelet[2690]: E1105 15:00:02.891389 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:00:02.891713 kubelet[2690]: E1105 15:00:02.891492 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6dc3985fa5584b0d9571186f991028cd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8kpwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fbd6bddd9-f68mf_calico-system(4b663924-3b8e-4932-b608-4cd05d743871): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:00:02.893466 containerd[1553]: time="2025-11-05T15:00:02.893372825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 
15:00:05.961944 systemd[1]: Started sshd@12-10.0.0.21:22-10.0.0.1:40198.service - OpenSSH per-connection server daemon (10.0.0.1:40198). Nov 5 15:00:06.026738 sshd[5210]: Accepted publickey for core from 10.0.0.1 port 40198 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:06.028076 sshd-session[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:06.032912 systemd-logind[1534]: New session 13 of user core. Nov 5 15:00:06.042412 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 15:00:06.170992 sshd[5213]: Connection closed by 10.0.0.1 port 40198 Nov 5 15:00:06.171415 sshd-session[5210]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:06.175515 systemd[1]: sshd@12-10.0.0.21:22-10.0.0.1:40198.service: Deactivated successfully. Nov 5 15:00:06.177118 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 15:00:06.177919 systemd-logind[1534]: Session 13 logged out. Waiting for processes to exit. Nov 5 15:00:06.178857 systemd-logind[1534]: Removed session 13. 
Nov 5 15:00:08.217607 containerd[1553]: time="2025-11-05T15:00:08.217301202Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:00:08.223816 containerd[1553]: time="2025-11-05T15:00:08.221650678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:00:08.223816 containerd[1553]: time="2025-11-05T15:00:08.221701278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:00:08.224267 kubelet[2690]: E1105 15:00:08.224224 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:00:08.225174 kubelet[2690]: E1105 15:00:08.224589 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:00:08.225174 kubelet[2690]: E1105 15:00:08.224721 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kpwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fbd6bddd9-f68mf_calico-system(4b663924-3b8e-4932-b608-4cd05d743871): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:00:08.226102 kubelet[2690]: E1105 15:00:08.226054 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fbd6bddd9-f68mf" podUID="4b663924-3b8e-4932-b608-4cd05d743871" Nov 5 15:00:08.242556 containerd[1553]: time="2025-11-05T15:00:08.242275459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:00:08.466993 containerd[1553]: time="2025-11-05T15:00:08.466788734Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:00:08.469018 containerd[1553]: time="2025-11-05T15:00:08.468598772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:00:08.469018 containerd[1553]: time="2025-11-05T15:00:08.468656812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:00:08.470610 
kubelet[2690]: E1105 15:00:08.468870 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:00:08.470610 kubelet[2690]: E1105 15:00:08.468932 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:00:08.470610 kubelet[2690]: E1105 15:00:08.469325 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvjgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59c8c4d79f-vmsfn_calico-apiserver(870c178e-41c4-4ab0-8a1e-1bcbcc89ae10): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:00:08.471101 kubelet[2690]: E1105 15:00:08.470880 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59c8c4d79f-vmsfn" podUID="870c178e-41c4-4ab0-8a1e-1bcbcc89ae10" Nov 5 15:00:09.239241 containerd[1553]: time="2025-11-05T15:00:09.239124631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:00:09.485784 containerd[1553]: time="2025-11-05T15:00:09.485700449Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:00:09.486872 containerd[1553]: time="2025-11-05T15:00:09.486796888Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:00:09.486872 containerd[1553]: time="2025-11-05T15:00:09.486848888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:00:09.487163 kubelet[2690]: E1105 15:00:09.487100 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:00:09.487163 kubelet[2690]: E1105 15:00:09.487155 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:00:09.487605 kubelet[2690]: E1105 15:00:09.487531 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlcwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},
LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cd5cf7f85-nwqhp_calico-system(bcf60f8c-179e-4bff-8ac8-93bb2db7eacf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:00:09.488695 kubelet[2690]: E1105 15:00:09.488649 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd5cf7f85-nwqhp" podUID="bcf60f8c-179e-4bff-8ac8-93bb2db7eacf" Nov 5 15:00:11.182775 systemd[1]: Started sshd@13-10.0.0.21:22-10.0.0.1:42954.service - OpenSSH per-connection server daemon (10.0.0.1:42954). Nov 5 15:00:11.237696 sshd[5237]: Accepted publickey for core from 10.0.0.1 port 42954 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:11.239964 sshd-session[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:11.240840 containerd[1553]: time="2025-11-05T15:00:11.240763727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:00:11.248380 systemd-logind[1534]: New session 14 of user core. Nov 5 15:00:11.256052 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 15:00:11.425395 sshd[5240]: Connection closed by 10.0.0.1 port 42954 Nov 5 15:00:11.425911 sshd-session[5237]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:11.437015 systemd[1]: sshd@13-10.0.0.21:22-10.0.0.1:42954.service: Deactivated successfully. Nov 5 15:00:11.443247 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 15:00:11.447330 systemd-logind[1534]: Session 14 logged out. Waiting for processes to exit. Nov 5 15:00:11.450591 systemd[1]: Started sshd@14-10.0.0.21:22-10.0.0.1:42970.service - OpenSSH per-connection server daemon (10.0.0.1:42970). Nov 5 15:00:11.451502 systemd-logind[1534]: Removed session 14. 
Nov 5 15:00:11.469227 containerd[1553]: time="2025-11-05T15:00:11.469102486Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:00:11.470193 containerd[1553]: time="2025-11-05T15:00:11.470156725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:00:11.470277 containerd[1553]: time="2025-11-05T15:00:11.470253925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:00:11.471367 kubelet[2690]: E1105 15:00:11.471327 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:00:11.471896 kubelet[2690]: E1105 15:00:11.471383 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:00:11.472697 containerd[1553]: time="2025-11-05T15:00:11.472409804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:00:11.472864 kubelet[2690]: E1105 15:00:11.472037 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gtsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rthls_calico-system(75461a76-a686-4ba2-aacc-266a6fc4971c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:00:11.474436 kubelet[2690]: E1105 15:00:11.474144 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rthls" podUID="75461a76-a686-4ba2-aacc-266a6fc4971c" Nov 5 15:00:11.515986 sshd[5253]: Accepted publickey for core from 10.0.0.1 port 42970 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:11.517390 sshd-session[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 
15:00:11.521373 systemd-logind[1534]: New session 15 of user core. Nov 5 15:00:11.529445 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 15:00:11.692532 containerd[1553]: time="2025-11-05T15:00:11.692419171Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:00:11.693683 containerd[1553]: time="2025-11-05T15:00:11.693640930Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:00:11.693776 containerd[1553]: time="2025-11-05T15:00:11.693665730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:00:11.693894 kubelet[2690]: E1105 15:00:11.693857 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:00:11.693944 kubelet[2690]: E1105 15:00:11.693908 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:00:11.694091 kubelet[2690]: E1105 15:00:11.694053 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2mgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9kr5x_calico-system(1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:00:11.696283 containerd[1553]: time="2025-11-05T15:00:11.696247847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:00:11.746088 sshd[5256]: Connection closed by 10.0.0.1 port 42970 Nov 5 15:00:11.745590 sshd-session[5253]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:11.756924 systemd[1]: sshd@14-10.0.0.21:22-10.0.0.1:42970.service: Deactivated successfully. Nov 5 15:00:11.759255 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 15:00:11.760821 systemd-logind[1534]: Session 15 logged out. Waiting for processes to exit. Nov 5 15:00:11.765097 systemd[1]: Started sshd@15-10.0.0.21:22-10.0.0.1:42974.service - OpenSSH per-connection server daemon (10.0.0.1:42974). Nov 5 15:00:11.765680 systemd-logind[1534]: Removed session 15. Nov 5 15:00:11.827274 sshd[5267]: Accepted publickey for core from 10.0.0.1 port 42974 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:11.828891 sshd-session[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:11.833746 systemd-logind[1534]: New session 16 of user core. Nov 5 15:00:11.843420 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 5 15:00:11.919406 containerd[1553]: time="2025-11-05T15:00:11.919346012Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:00:11.921017 containerd[1553]: time="2025-11-05T15:00:11.920918850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:00:11.921017 containerd[1553]: time="2025-11-05T15:00:11.920996890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:00:11.921394 kubelet[2690]: E1105 15:00:11.921123 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:00:11.921394 kubelet[2690]: E1105 15:00:11.921171 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:00:11.921394 kubelet[2690]: E1105 15:00:11.921290 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2mgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9kr5x_calico-system(1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:00:11.922544 kubelet[2690]: E1105 15:00:11.922439 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9kr5x" podUID="1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98" Nov 5 15:00:12.442914 sshd[5270]: Connection closed by 10.0.0.1 port 42974 Nov 5 15:00:12.443370 sshd-session[5267]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:12.454120 systemd[1]: sshd@15-10.0.0.21:22-10.0.0.1:42974.service: Deactivated successfully. Nov 5 15:00:12.458555 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 15:00:12.460297 systemd-logind[1534]: Session 16 logged out. Waiting for processes to exit. Nov 5 15:00:12.465464 systemd[1]: Started sshd@16-10.0.0.21:22-10.0.0.1:42988.service - OpenSSH per-connection server daemon (10.0.0.1:42988). Nov 5 15:00:12.466899 systemd-logind[1534]: Removed session 16. 
Nov 5 15:00:12.556832 sshd[5291]: Accepted publickey for core from 10.0.0.1 port 42988 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:12.558196 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:12.563051 systemd-logind[1534]: New session 17 of user core. Nov 5 15:00:12.575500 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 15:00:12.839351 sshd[5294]: Connection closed by 10.0.0.1 port 42988 Nov 5 15:00:12.840389 sshd-session[5291]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:12.852880 systemd[1]: sshd@16-10.0.0.21:22-10.0.0.1:42988.service: Deactivated successfully. Nov 5 15:00:12.856009 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 15:00:12.858449 systemd-logind[1534]: Session 17 logged out. Waiting for processes to exit. Nov 5 15:00:12.860473 systemd-logind[1534]: Removed session 17. Nov 5 15:00:12.862460 systemd[1]: Started sshd@17-10.0.0.21:22-10.0.0.1:42998.service - OpenSSH per-connection server daemon (10.0.0.1:42998). Nov 5 15:00:12.928658 sshd[5305]: Accepted publickey for core from 10.0.0.1 port 42998 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:12.930040 sshd-session[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:12.934208 systemd-logind[1534]: New session 18 of user core. Nov 5 15:00:12.947404 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 15:00:13.080885 sshd[5308]: Connection closed by 10.0.0.1 port 42998 Nov 5 15:00:13.080283 sshd-session[5305]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:13.084393 systemd-logind[1534]: Session 18 logged out. Waiting for processes to exit. Nov 5 15:00:13.084666 systemd[1]: sshd@17-10.0.0.21:22-10.0.0.1:42998.service: Deactivated successfully. Nov 5 15:00:13.087934 systemd[1]: session-18.scope: Deactivated successfully. 
Nov 5 15:00:13.089721 systemd-logind[1534]: Removed session 18.
Nov 5 15:00:13.242846 containerd[1553]: time="2025-11-05T15:00:13.242699107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 5 15:00:18.095791 systemd[1]: Started sshd@18-10.0.0.21:22-10.0.0.1:43008.service - OpenSSH per-connection server daemon (10.0.0.1:43008).
Nov 5 15:00:18.158065 sshd[5324]: Accepted publickey for core from 10.0.0.1 port 43008 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI
Nov 5 15:00:18.159500 sshd-session[5324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:00:18.164308 systemd-logind[1534]: New session 19 of user core.
Nov 5 15:00:18.176463 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 5 15:00:18.377306 sshd[5327]: Connection closed by 10.0.0.1 port 43008
Nov 5 15:00:18.377346 sshd-session[5324]: pam_unix(sshd:session): session closed for user core
Nov 5 15:00:18.381031 systemd[1]: sshd@18-10.0.0.21:22-10.0.0.1:43008.service: Deactivated successfully.
Nov 5 15:00:18.383065 systemd[1]: session-19.scope: Deactivated successfully.
Nov 5 15:00:18.385480 systemd-logind[1534]: Session 19 logged out. Waiting for processes to exit.
Nov 5 15:00:18.386508 systemd-logind[1534]: Removed session 19.
Nov 5 15:00:18.865325 containerd[1553]: time="2025-11-05T15:00:18.865221821Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:00:18.866295 containerd[1553]: time="2025-11-05T15:00:18.866257340Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 5 15:00:18.866446 containerd[1553]: time="2025-11-05T15:00:18.866334420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 5 15:00:18.866621 kubelet[2690]: E1105 15:00:18.866562 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:00:18.866972 kubelet[2690]: E1105 15:00:18.866634 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:00:18.866972 kubelet[2690]: E1105 15:00:18.866870 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tn4xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fbd6ccfdb-hgtng_calico-apiserver(bac974d5-2052-4432-9839-70f531dc6657): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:00:18.867544 containerd[1553]: time="2025-11-05T15:00:18.867274739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 5 15:00:18.868293 kubelet[2690]: E1105 15:00:18.868225 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbd6ccfdb-hgtng" podUID="bac974d5-2052-4432-9839-70f531dc6657"
Nov 5 15:00:19.083538 containerd[1553]: time="2025-11-05T15:00:19.083496244Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:00:19.084807 containerd[1553]: time="2025-11-05T15:00:19.084673923Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 5 15:00:19.084807 containerd[1553]: time="2025-11-05T15:00:19.084767403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 5 15:00:19.085063 kubelet[2690]: E1105 15:00:19.084957 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:00:19.085063 kubelet[2690]: E1105 15:00:19.085025 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:00:19.085267 kubelet[2690]: E1105 15:00:19.085165 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgvn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fbd6ccfdb-9smqx_calico-apiserver(947e4c3f-edaa-4455-9701-8eca3788c1c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:00:19.086414 kubelet[2690]: E1105 15:00:19.086313 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbd6ccfdb-9smqx" podUID="947e4c3f-edaa-4455-9701-8eca3788c1c9"
Nov 5 15:00:19.239612 kubelet[2690]: E1105 15:00:19.239571 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59c8c4d79f-vmsfn" podUID="870c178e-41c4-4ab0-8a1e-1bcbcc89ae10"
Nov 5 15:00:20.239539 kubelet[2690]: E1105 15:00:20.239468 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd5cf7f85-nwqhp" podUID="bcf60f8c-179e-4bff-8ac8-93bb2db7eacf"
Nov 5 15:00:20.240357 kubelet[2690]: E1105 15:00:20.240304 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fbd6bddd9-f68mf" podUID="4b663924-3b8e-4932-b608-4cd05d743871"
Nov 5 15:00:20.989943 containerd[1553]: time="2025-11-05T15:00:20.989901328Z" level=info msg="TaskExit event in podsandbox handler container_id:\"286acd6536114b96812e286abcc3e1860b5fdc7fdec4dcb37726b7e1627e4e76\" id:\"2092df07fd11cdbefe5fde203d3b78bd5916782b11b5b19736da793d7293c2f4\" pid:5364 exited_at:{seconds:1762354820 nanos:989607689}"
Nov 5 15:00:20.992280 kubelet[2690]: E1105 15:00:20.992255 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:00:23.240538 kubelet[2690]: E1105 15:00:23.240466 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rthls" podUID="75461a76-a686-4ba2-aacc-266a6fc4971c"
Nov 5 15:00:23.241523 kubelet[2690]: E1105 15:00:23.241309 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9kr5x" podUID="1c59cb2f-c3de-4d4b-a46e-9bc0038d4b98"
Nov 5 15:00:23.400630 systemd[1]: Started sshd@19-10.0.0.21:22-10.0.0.1:57586.service - OpenSSH per-connection server daemon (10.0.0.1:57586).
Nov 5 15:00:23.464661 sshd[5377]: Accepted publickey for core from 10.0.0.1 port 57586 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI
Nov 5 15:00:23.469154 sshd-session[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:00:23.474566 systemd-logind[1534]: New session 20 of user core.
Nov 5 15:00:23.482348 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 5 15:00:23.629322 sshd[5381]: Connection closed by 10.0.0.1 port 57586
Nov 5 15:00:23.629304 sshd-session[5377]: pam_unix(sshd:session): session closed for user core
Nov 5 15:00:23.634565 systemd-logind[1534]: Session 20 logged out. Waiting for processes to exit.
Nov 5 15:00:23.634679 systemd[1]: sshd@19-10.0.0.21:22-10.0.0.1:57586.service: Deactivated successfully.
Nov 5 15:00:23.636810 systemd[1]: session-20.scope: Deactivated successfully.
Nov 5 15:00:23.641317 systemd-logind[1534]: Removed session 20.
Nov 5 15:00:28.643271 systemd[1]: Started sshd@20-10.0.0.21:22-10.0.0.1:57594.service - OpenSSH per-connection server daemon (10.0.0.1:57594).
Nov 5 15:00:28.716630 sshd[5396]: Accepted publickey for core from 10.0.0.1 port 57594 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI
Nov 5 15:00:28.718085 sshd-session[5396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:00:28.723258 systemd-logind[1534]: New session 21 of user core.
Nov 5 15:00:28.738304 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 5 15:00:28.881232 sshd[5399]: Connection closed by 10.0.0.1 port 57594
Nov 5 15:00:28.880192 sshd-session[5396]: pam_unix(sshd:session): session closed for user core
Nov 5 15:00:28.884476 systemd[1]: sshd@20-10.0.0.21:22-10.0.0.1:57594.service: Deactivated successfully.
Nov 5 15:00:28.886306 systemd[1]: session-21.scope: Deactivated successfully.
Nov 5 15:00:28.888733 systemd-logind[1534]: Session 21 logged out. Waiting for processes to exit.
Nov 5 15:00:28.889610 systemd-logind[1534]: Removed session 21.
Nov 5 15:00:32.240440 kubelet[2690]: E1105 15:00:32.240398 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbd6ccfdb-9smqx" podUID="947e4c3f-edaa-4455-9701-8eca3788c1c9"
Nov 5 15:00:32.242055 kubelet[2690]: E1105 15:00:32.241849 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbd6ccfdb-hgtng" podUID="bac974d5-2052-4432-9839-70f531dc6657"
Nov 5 15:00:32.242108 containerd[1553]: time="2025-11-05T15:00:32.241576131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 5 15:00:32.498218 containerd[1553]: time="2025-11-05T15:00:32.498076296Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:00:32.499100 containerd[1553]: time="2025-11-05T15:00:32.499061447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 5 15:00:32.499183 containerd[1553]: time="2025-11-05T15:00:32.499144206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 5 15:00:32.499373 kubelet[2690]: E1105 15:00:32.499328 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 5 15:00:32.499423 kubelet[2690]: E1105 15:00:32.499387 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 5 15:00:32.499808 kubelet[2690]: E1105 15:00:32.499635 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6dc3985fa5584b0d9571186f991028cd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8kpwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fbd6bddd9-f68mf_calico-system(4b663924-3b8e-4932-b608-4cd05d743871): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:00:32.499974 containerd[1553]: time="2025-11-05T15:00:32.499924118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 5 15:00:32.759502 containerd[1553]: time="2025-11-05T15:00:32.759323696Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:00:32.760450 containerd[1553]: time="2025-11-05T15:00:32.760373646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 5 15:00:32.760450 containerd[1553]: time="2025-11-05T15:00:32.760412805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 5 15:00:32.760647 kubelet[2690]: E1105 15:00:32.760609 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:00:32.760706 kubelet[2690]: E1105 15:00:32.760661 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:00:32.761014 containerd[1553]: time="2025-11-05T15:00:32.760979320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 5 15:00:32.761172 kubelet[2690]: E1105 15:00:32.761130 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvjgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59c8c4d79f-vmsfn_calico-apiserver(870c178e-41c4-4ab0-8a1e-1bcbcc89ae10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:00:32.762503 kubelet[2690]: E1105 15:00:32.762471 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59c8c4d79f-vmsfn" podUID="870c178e-41c4-4ab0-8a1e-1bcbcc89ae10"
Nov 5 15:00:32.992189 containerd[1553]: time="2025-11-05T15:00:32.992098406Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:00:32.993077 containerd[1553]: time="2025-11-05T15:00:32.993042797Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 5 15:00:32.993148 containerd[1553]: time="2025-11-05T15:00:32.993126196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 5 15:00:32.993330 kubelet[2690]: E1105 15:00:32.993289 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 5 15:00:32.993379 kubelet[2690]: E1105 15:00:32.993339 2690 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 5 15:00:32.993487 kubelet[2690]: E1105 15:00:32.993445 2690 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kpwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fbd6bddd9-f68mf_calico-system(4b663924-3b8e-4932-b608-4cd05d743871): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:00:32.994667 kubelet[2690]: E1105 15:00:32.994621 2690 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fbd6bddd9-f68mf" podUID="4b663924-3b8e-4932-b608-4cd05d743871"
Nov 5 15:00:33.894190 systemd[1]: Started sshd@21-10.0.0.21:22-10.0.0.1:44652.service - OpenSSH per-connection server daemon (10.0.0.1:44652).
Nov 5 15:00:33.959648 sshd[5412]: Accepted publickey for core from 10.0.0.1 port 44652 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI
Nov 5 15:00:33.961122 sshd-session[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:00:33.968077 systemd-logind[1534]: New session 22 of user core.
Nov 5 15:00:33.972393 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 5 15:00:34.177925 sshd[5415]: Connection closed by 10.0.0.1 port 44652
Nov 5 15:00:34.178258 sshd-session[5412]: pam_unix(sshd:session): session closed for user core
Nov 5 15:00:34.183573 systemd[1]: sshd@21-10.0.0.21:22-10.0.0.1:44652.service: Deactivated successfully.
Nov 5 15:00:34.186994 systemd[1]: session-22.scope: Deactivated successfully.
Nov 5 15:00:34.188077 systemd-logind[1534]: Session 22 logged out. Waiting for processes to exit.
Nov 5 15:00:34.189525 systemd-logind[1534]: Removed session 22.
Nov 5 15:00:34.240954 containerd[1553]: time="2025-11-05T15:00:34.240199499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 5 15:00:35.238925 kubelet[2690]: E1105 15:00:35.238849 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"